Children’s manifesto expresses hope for better education on AI ahead of Paris AI Action Summit

Children and young people want to see better education on AI to improve understanding and maximise its use, according to a children’s manifesto published today for the Artificial Intelligence Action Summit in Paris.

The manifesto was created following the world’s first children’s AI summit, organised by the Alan Turing Institute in partnership with Queen Mary University of London, which brought together around 150 children to explore how AI impacts them and how they can shape its future.

Children are among those most affected by advances in AI, yet they are the most underrepresented group in policy decisions around its design and in discussions about how it is regulated.

The manifesto, which draws on ideas and views from children who attended the summit, shows that children feel their lives are impacted by AI, but that they are never asked what they think about it.

Their thoughts and concerns about AI are centred on three key areas: education, the environment, and health, safety and wellbeing.

They would like AI to be used to support children’s education, including for personalised learning, helping children to learn in ways that are right for them.

They are conscious that too much reliance on AI could affect children’s creativity and problem-solving skills. But they believe it should be developed to give extra support to children with learning difficulties, autism, ADHD or dyslexia, and to provide translation tools to help children who speak different languages.

Evie Marie, aged 9, said it could be “used to customize learning experiences and translate languages. AI tools can adapt lessons to fit a child’s specific needs, helping them learn at their own speed.”

Young people would like to see AI used to advance scientific research, particularly in ways that will help protect the environment. But they are also concerned about its negative impact on the environment. They would like world leaders to address this by ensuring the technology is powered by clean energy sources.

Ishrit, aged 13, said: “Although it is not always visible, AI actually produces a significant amount of carbon emissions. Scientists have estimated that for every image that a generative AI model produces, the carbon emissions created is equivalent to fully charging a mobile phone.”

Children say that AI should be used to help keep children safe online, for example making sure children don’t see inappropriate content. AI could be used to “monitor online activities and social media usage to detect signs of cyberbullying, anxiety, or depression,” said Chekwube, aged 16.

They believe that AI should also be used to keep people safe offline, for example in tools to help children with road safety. And they think it should be developed to improve medicine and healthcare, as well as to support children and young people with their mental health.

Overall, they want adults to think about the experiences and needs of children around the world and put measures in place to make sure AI is safe for children, including restrictions on social media. They want to see new laws in place to make sure that AI is developed and used ethically.

“We don't want AI to make the world a place where only a few people have everything, and everyone else has less. I hope you can make sure that AI is used to help everyone to make a safe, kind, and fair world.” (Alexander, Ashvika, Avar and Mustafa, all age 11)

Dr Mhairi Aitken, Senior Ethics Fellow at the Alan Turing Institute, is presenting the manifesto at the Paris AI Action Summit today alongside a young person who attended the Children’s AI Summit. She said: “Children and young people continue to demonstrate a strong understanding of how AI could be used for good, as well as the risks it can pose to their safety and wellbeing. But until today, their voices have not been part of crucial discussions about the creation of AI policy and regulations.

“That’s why we’re so pleased and proud to be bringing their views to the Paris AI Action Summit today. We hope that this manifesto will help decision makers better understand the unique challenges they face and highlight the importance of listening to their views.”

Professor Colin Bailey CBE, President and Principal of Queen Mary University of London, said: “As with all emerging technologies, it’s important that policies and regulations are developed which support the safe and ethical use of AI. It’s also vital that the voices of all impacted by the use of this technology are heard and considered when developing these policy regulations. For too long the voices of children, whose education, careers and lives will be most impacted by AI, have been left out of conversations regarding the use of AI. This has to change!

“That’s why I’m so proud Queen Mary University has partnered with The Alan Turing Institute to put children’s voices at the heart of discussions about how AI impacts their lives, today and in the future."

The Children’s AI Summit was organised with support from the LEGO Group, Elevate Great, and EY, with collaboration from the NSPCC, National Youth Theatre, the PSHE Association, Teens in AI, 5Rights Foundation, Children’s Parliament and The Children’s Media Foundation.