Why we should all be interested in artificial intelligence

Friday 03 Nov 2017

How is AI likely to develop over the next 5, 10 and 20 years? How can data be managed and safeguarded to ensure it contributes to the public good and a well-functioning economy?

Turing researchers recently made a written submission to the House of Lords inquiry into artificial intelligence, answering key questions like these on the implications and future of AI in the UK. Here we look at some of the highlights.

The Sophia robot has sparked debate about the applications and future of artificial intelligence

Why is AI important?

Algorithms, according to Turing Visiting Researcher Simon DeDeo, “have the potential to transform the material, social, and political landscape… and alter the basic rhythms of human life in a fashion last seen at the beginning of the Industrial Revolution.”

A key example of this transformative effect is health: Turing Fellow Maria Liakata discussed how AI can combine different data sources and transform the way diseases are diagnosed, monitored and treated.

What is the future of AI?

Turing Fellow David Barber was called in front of the Committee to give oral evidence, and said that: “where we are right now is what we call perceptual AI. If somebody speaks, the machine can transcribe into words what you are saying, but the machine does not understand what you are saying. It does not understand who you are or the relationship of objects in this environment. The bigger fruit out there is the reasoning AI; really understanding what these objects are, being able to query this machine and get sensible answers back. That is the biggest and most exciting challenge that all the tech giants are currently desperately seeking to solve. For whoever solves that, the world is their oyster.”

What are the ethical implications of AI?

Turing Fellow Adrian Weller identified six crucial issues in determining the trustworthiness of AI systems: fairness, transparency, privacy, reliability, security, and value alignment. Where and in what combination these should be applied depends on the context in which they are being considered. For example, he wrote that “the use of personal data to decide [your health insurance rate] worries many people, whereas a similar use to decide what to charge for car drivers’ insurance seems reasonable to many.”

Turing Fellow Ricardo Silva added: “There is no such a thing as a complete dataset: data will contain biases and gaps that are absorbed and filled up by methods having varying degrees of transparency. The less well understood the data is as a means to achieve a particular goal, the more important transparency will be. One should not decouple transparency from data quality assessment.”

What needs to happen now?

Speaking at Parliament last week, Turing Fellow David Barber highlighted the need for a significant increase in investment in AI research.

A common theme amongst the Turing researchers was a concern about the concentration of top talent in large companies, and the privileged access these companies have to researchers in universities and to individuals’ data. On the latter, Visiting Researcher Simon DeDeo proposed the creation of “a data-sharing ‘patent’ system, or term-limited monopoly comparable to the patent system for inventors”.

Government also needs to address public perception around AI. Research Fellow Nathanaël Fijalkow suggested increasing efforts to make AI technology trustworthy and usable through greater research into privacy, security, reliability and transparency. In addition, government could set standards, “helping and encouraging companies to use artificial intelligence in a safe and responsible manner.”

Some might be tempted to regulate AI; however, Turing Fellow Brad Love warned that “specific regulation could reduce innovation and competitiveness for UK industry… existing laws and testing standards regulating [AI] products should prove satisfactory. Innovation will likely take place where government is supportive and adopts a measured approach, as has proved to be the case with autonomous vehicles.”

About the author: Helena Quinn is Policy Officer at The Alan Turing Institute.


Read more of the Turing submission and oral evidence from Turing Fellow David Barber. And for more information about how the Turing is working in AI, check out our projects in using AI to detect cancer cells (with the University of Warwick), combating fraud and money laundering (with Accenture) and providing personalised medicine for sufferers of cystic fibrosis (with the Cystic Fibrosis Trust).