Machine Learning

Machine learning technologies have seen tremendous progress over the past decade, owing to the availability of massive and diverse data, rapid growth in computing and storage power, and novel techniques such as deep learning and sequence-to-sequence models. ML algorithms now surpass human performance on several central cognitive tasks, including image and speech recognition. This enables new applications and levels of automation that seemed out of reach only a few years ago. For example, fully autonomous self-driving cars in the real world are now technically feasible; smart assistants integrate speech recognition and synthesis, natural language understanding, and reasoning into full-blown dialog systems; and AI systems have beaten humans at Jeopardy, Go, and several other tasks.

Yet taking such functions out of human hands raises a number of concerns and fears which, if not addressed, could easily erode our trust in ML technology.
First, ML algorithms can exhibit biases and produce discriminatory decisions inherited from their training data. A strong research effort is currently under way to define notions of fairness and methods for verifying that ML algorithms conform to them; one such notion is sketched below. More broadly, the question of how to teach machines to act ethically, e.g., a self-driving car that must make a split-second decision about an impending accident, is critical.
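
As an illustration, one widely studied fairness notion is demographic parity, which requires positive predictions to be equally likely across groups. The minimal sketch below, using hypothetical predictions and a hypothetical protected attribute, measures how far a classifier deviates from it:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    y_pred : binary predictions (0/1) produced by a classifier
    group  : binary group membership (0/1), e.g., a protected attribute
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive-prediction rate in group 0
    rate_b = y_pred[group == 1].mean()  # positive-prediction rate in group 1
    return abs(rate_a - rate_b)

# Toy example: a classifier that favours group 1.
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, group))  # 0.5 -> large gap under this notion
```

A large gap does not by itself prove discrimination, but it is the kind of quantity that fairness-auditing methods monitor and that training procedures can be constrained to reduce.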

Second, in many scenarios, ML algorithms and human decision-makers have to work in concert. This is true, for example, in medical diagnostics, where we are not (yet) ready to make completely automated decisions, but where doctors want to rely on ML to augment their own understanding and improve their decisions. A major challenge is explaining ML predictions to humans, especially with the advent of “black-box” techniques like deep learning. How can we convince a sceptical human operator that a prediction is plausible and accurate? We need interpretable ML techniques that can mimic the way a doctor explains a diagnosis to another doctor.
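
One simple, well-known illustration of such techniques is a global surrogate model: a shallow, human-readable model trained to imitate the black box’s predictions. The sketch below uses synthetic data and hypothetical feature names, and is only meant to convey the idea, not a clinically usable explanation method:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic "patient" data with two hypothetical features (a lab value and age).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # ground-truth rule, unknown to the models

# The "black box": accurate, but hard to explain directly.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# A shallow decision tree trained to imitate the black box's predictions
# yields human-readable if/then rules.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, black_box.predict(X))

fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate reproduces the black box on {fidelity:.0%} of cases")
print(export_text(surrogate, feature_names=["lab_value", "age"]))
```

The fidelity score indicates how far the extracted rules can be trusted as a faithful description of the black box, which is exactly the kind of guarantee a sceptical human operator would ask for.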

Third, while ML algorithms outperform human subjects in various cognitive tasks, many of them still lack robustness in adversarial settings: small adversarial modifications of images (a few pixels) have been shown to cause misclassification, while human perception remains unaffected. This lack of robustness is a vulnerability that may be exploited to attack ML systems and consequently undermine trust in their decisions. Additionally, ML models (e.g., for medical applications) are often trained on sensitive data that one would ideally not reveal to third parties, creating the need for privacy-preserving ML algorithms that can learn to make predictions without access to raw sensitive data.
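
To make the adversarial phenomenon concrete, the sketch below applies the well-known fast gradient sign method (FGSM) to a toy linear “image classifier” with fabricated weights and pixels; real attacks target deep networks, but the mechanics are the same: nudge every pixel slightly in the direction that increases the loss.

```python
import numpy as np

# A tiny linear "image classifier": logistic regression over 64 pixel intensities.
w = np.tile([1.5, -1.5], 32)           # fixed, "learned" weights for an 8x8 image
b = 0.0

def predict_proba(x):
    """Probability that the input belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.tile([0.6, 0.5], 32)            # a clean input, confidently classified as class 1
y = 1                                  # its true label

# FGSM: move each pixel by epsilon in the direction of the loss gradient.
grad = (predict_proba(x) - y) * w      # d(loss)/dx for the logistic loss
epsilon = 0.1                          # at most a 10% intensity change per pixel
x_adv = np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)

print("clean     :", predict_proba(x))      # ~0.99: confident class 1
print("perturbed :", predict_proba(x_adv))  # ~0.01: now misclassified as class 0
```

Even in this toy setting, a perturbation that changes no pixel by more than a tenth of its range flips the decision, while the two inputs would look essentially identical to a human observer.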

The public acceptance of a much greater level of automation in many areas of business and life, with ML algorithms making decisions that affect people’s health, careers, relationships, etc., requires a much stronger level of trust. ML technology has to evolve to be fair, accountable, and transparent (FAT). Today’s research agenda does not sufficiently reflect these requirements and remains strongly focused on pushing the performance of tasks such as those outlined above. C4DT will drive a research program that focuses explicitly on trust as a goal of next-generation ML frameworks.

Conversely, ML technology is itself an indispensable layer in the architecture of trust of any sufficiently complex system. Despite decades of research in security technologies, from cryptography to verification to blockchains, human behaviour is often the weakest link and the culprit behind successful attacks; social engineering has played at least some role in almost all major recent attacks. AI has the potential to bring higher-level reasoning and adaptively learned behavioural patterns to bear on distributed systems of trust. The long-term ambition is to detect and counter attacks that have never been observed or explicitly modelled before.
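
A minimal sketch of this idea, assuming hypothetical per-session traffic features and an off-the-shelf anomaly detector (Isolation Forest), is shown below; the detector is trained only on behaviour assumed benign and flags deviations without any explicit attack signature.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features: [requests per minute, distinct hosts contacted].
rng = np.random.default_rng(1)
normal_sessions = rng.normal(loc=[20, 3], scale=[5, 1], size=(1000, 2))

# Train only on traffic assumed benign; no attack patterns are modelled explicitly.
detector = IsolationForest(contamination=0.01, random_state=1).fit(normal_sessions)

# A previously unseen behaviour: modest request rate, but fanning out to many hosts.
print(detector.predict([[25.0, 40.0]]))  # -1: flagged as anomalous
print(detector.predict([[22.0, 3.0]]))   # +1: a typical session
```

Such unsupervised detectors are only one building block; turning their alerts into trustworthy, explainable responses is precisely where the research challenges described above reappear.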

In summary, ML and, more broadly, AI are transformative technologies that will reshape our economy and our lives. Trust in these systems is crucial if they are to be integrated without public resistance and a potential backlash, and they need to reflect and encode the values and principles of our societies. At the same time, there is an opportunity for AI technologies to become central in fostering trust in complex digital infrastructures, by detecting and preventing attacks and by proactively analysing complex systems to identify weaknesses.