Introduction

On 29 July 2020, Rose Luckin, Professor of Learner Centred Design at the UCL Knowledge Lab in London, gave the first in our mini-series of Turing Lectures to a vast virtual audience. The Turing Lecture mini-series is designed to reflect on the use of AI and data science in a post-lockdown world.

Rose’s lecture centred on the use of AI and technology in education, particularly in the virtual settings necessitated by the pandemic.

She also gave her personal perspective on the use of data and technology to decide exam results across the UK.

The use of data and tech to determine exam outcomes

There has understandably been a great deal of controversy about the use of algorithms to decide the results of the International Baccalaureate, which is taken in over 150 countries, as well as the English A level and GCSE exams. So, can AI be useful for this type of task? Yes, a well-designed AI can help humans to make complex decisions about exam grades, but it must be well designed. This means we must take great care to ensure that the data the algorithm processes, as well as the algorithm itself, is thoroughly tested, fair and unbiased. The whole package, both the data and the algorithm, must also be explainable to the people it affects.

If we take the example of the A level grades, the exam regulator Ofqual has made public some of the details of the process it used to make decisions about grades. We can see that the regulator wished to make sure that the 2020 grades had a similar value to the grades issued in previous years; in other words, to standardise the grades.

Teachers were asked to provide a predicted grade for each student in each exam and to rank order their students for each subject. This data, along with data about the historical performance of the school each pupil attended, was used to test several of the available statistical standardisation models. Ofqual selected the Direct Centre Performance (DCP) model because it predicted grades accurately when tested on data from previous years and was easy to implement consistently. The DCP algorithm predicts grades for each school or college based on the historical performance of that school or college, adjusted by any changes in the prior attainment of this year’s students compared with previous years’ students at that school or college.
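
To make the mechanics concrete, here is a deliberately minimal sketch in Python of how a DCP-style standardisation step might work. This is emphatically not Ofqual’s published model: it omits the prior-attainment adjustment, the special handling of small cohorts, and tie-breaking, and all the names are mine. It simply shows the core idea of imposing a centre’s historical grade distribution on this year’s cohort in teacher-supplied rank order.

# Illustrative sketch of a DCP-style standardisation step (not Ofqual's model).
def dcp_style_grades(historical_distribution, ranked_students):
    """historical_distribution: dict mapping grade -> historical proportion,
       listed from highest grade to lowest, e.g. {"A*": 0.1, "A": 0.2, ...}.
       ranked_students: list of student ids, best first (teacher rank order).
       Returns a dict mapping student id -> awarded grade."""
    n = len(ranked_students)
    awarded = {}
    index = 0
    for grade, proportion in historical_distribution.items():
        # The number of this year's students receiving each grade is fixed
        # by the centre's historical share of that grade (rounded).
        count = round(proportion * n)
        for student in ranked_students[index:index + count]:
            awarded[student] = grade
        index += count
    # Any students left over by rounding receive the lowest listed grade.
    lowest = list(historical_distribution)[-1]
    for student in ranked_students[index:]:
        awarded[student] = lowest
    return awarded

cohort = ["s1", "s2", "s3", "s4", "s5", "s6", "s7", "s8", "s9", "s10"]
print(dcp_style_grades(
    {"A*": 0.1, "A": 0.2, "B": 0.3, "C": 0.25, "D": 0.15}, cohort))

Notice that the teacher-assessed grade never enters this calculation: only the rank order and the centre’s history determine the outcome, a point that becomes important below.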

So did this approach work? Well, the official statistics from Ofqual show that A level results at grade A and above in England increased by 2.4% compared to 2019, and that 96.4% of final grades awarded were the same as, or within one grade of, the teacher-assessed grade. However, this overall positivity hides a much more complex picture of large variances between different schools and colleges. And let’s be clear: a one-grade difference between the awarded grade and the teacher-assessed grade can determine whether or not a student gets into their preferred university.

Figures released by Ofqual show that 39.1% of A level grades were downgraded from the teacher-assessed grades, and the largest differences between the grade awarded and the teacher-predicted grade were seen amongst pupils from the lowest socioeconomic backgrounds. By contrast, the increase in students achieving A or A* grades compared to 2019 was much higher at independent schools (4.9%) than at state comprehensives (2%).

This would suggest that something has gone awry with the way that both the data and the algorithm were used for this year’s A level grades, but what could that be?

The precise details of the algorithm have not yet been published, but even now it is clear that the teacher-assessed grade played little role in the calculations. The rank ordering was more influential, and yet teachers are more experienced at predicting grades than at producing these rank orderings. The influence of historical data skews the results towards repeating what a school or college has achieved in prior years rather than reflecting what a pupil has achieved, or is likely to achieve, in the current year. And the adjustment for the prior attainment of this year’s pupils will be affected by the varying quality of the data available in each school or college.

For example, the timing and manner in which mock exams are taken is not the same across all schools and colleges. Additionally, when the algorithm was tested on previous years’ data, there would have been no teacher rank ordering available for inclusion, so it was not a true test of the same algorithmic process that was used for 2020. And of course, some might question the appropriateness of a process that prioritises standardisation above all else in a year that has been so disrupted and upsetting for students across the country.

All in all, there are plenty of inequalities in the data being used by the algorithm, and there are valid questions about the legitimacy of the algorithm itself. This should be enough to make us doubt the fairness and accuracy of this particular algorithmic approach.

However, that does not mean that algorithms and AI cannot be used to support grading decisions. What it does show is exactly how complex these decisions are, and how carefully such systems must be designed, applied and tested before they are used for decisions as important as students’ high-stakes exam results.


What challenges and opportunities have COVID-19 presented for EdTech’s adoption “in the classroom”? How have education-related AI/ML applications supported distance learning during the pandemic?

COVID-19 has been a game-changer for the adoption of educational technology in schools. Staff and schools who were reluctant to use technology have been left with no alternative but to ‘give it a go’, and that has been an excellent way of showing teachers, learners and parents what can be achieved with educational technology. As one of the school leaders we interviewed recently told us:

“We’ve learned a lot during this period. It’s almost been like three years of staff CPD in three months, you know, I couldn’t have developed the staff in the same way from just doing staff meetings through me. So there have actually been huge benefits in terms of CPD.”

Put simply, educational technology has enabled students to continue learning with their teachers and friends even when not at school. Some of the technology used during the pandemic draws on AI and machine learning: for example, systems that adapt the support each learner is given as they progress through the curriculum; systems that recommend the most suitable learning resources for a teacher or student based on each learner’s particular needs; and voice-based interfaces that enable students and teachers to interact without needing a keyboard or touch screen.
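
To make the first of these examples concrete, here is a deliberately simple sketch of the adaptivity idea: the system keeps a running mastery estimate for a learner and recommends the resource whose difficulty best matches it. The resource list, the update rule and all names are illustrative inventions, not any real product’s design.

# Hypothetical sketch of adaptive resource selection (not a real product's API).
RESOURCES = [
    {"title": "Fractions warm-up", "topic": "fractions", "difficulty": 0.2},
    {"title": "Fractions practice", "topic": "fractions", "difficulty": 0.5},
    {"title": "Fractions challenge", "topic": "fractions", "difficulty": 0.8},
]

def update_mastery(mastery, correct, rate=0.3):
    """Update a 0..1 mastery estimate as an exponential moving average
    of whether recent answers were correct."""
    return mastery + rate * ((1.0 if correct else 0.0) - mastery)

def recommend(mastery, resources):
    """Pick the resource whose difficulty is closest to the learner's
    current mastery, so the work is neither too easy nor too hard."""
    return min(resources, key=lambda r: abs(r["difficulty"] - mastery))

mastery = 0.4
for correct in [True, True, False, True]:
    mastery = update_mastery(mastery, correct)
    print(f"mastery={mastery:.2f} -> {recommend(mastery, RESOURCES)['title']}")

Real systems model far more than a single number per topic, but the loop of estimate, recommend, observe and re-estimate is the essence of the adaptive support described above.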

All has not been positive, however, and there have been some extremely challenging problems. Probably the most damaging has been the difficulty faced by students from disadvantaged backgrounds whose families do not have the technology to enable them to take part in the education provided. Some children also lack a quiet place to learn and a supportive family environment.

The inevitable outcome is that these children are falling further behind with their learning, and there will be a lot of work to be done to help them catch up in the autumn and beyond. AI could really help with this problem by working with students individually to diagnose their needs, providing carefully selected catch-up learning opportunities and giving vital feedback to teachers. Such an AI-enabled approach would speed up the process of catching up, and it would leave more time for the all-important social and emotional enrichment activities that students will need, and for engaging with friends and teachers.

Unfortunately, the schools that most need this sort of AI technology do not have access to it. This is a situation that we must make every effort to correct.


What are the biggest ethical hazards of using AI/ML-based EdTech? What kinds of ethical challenges do patterns of historical bias and discrimination pose to the design and deployment of responsible EdTech?

There are many hazards that we need to be aware of, not least the risks involved in not applying AI when it could make a positive difference to learning.

However, if we focus on the use of AI and machine learning, I will highlight just three areas of potential risk. Firstly, we need to ensure that the data used to train AI algorithms is truly representative of the population to which the AI is being applied. We also need to ensure that the people whose data is being used have given their ‘informed’ consent for it to be used, that the data is stored safely, and that privacy is protected.
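
As a small illustration of the first of these risks, the sketch below compares the demographic make-up of a training dataset against the population the system is meant to serve, and flags any group that is materially over- or under-represented. The group labels and the threshold are hypothetical placeholders, not a standard from any real auditing toolkit.

# Illustrative representativeness check for training data (hypothetical labels).
from collections import Counter

def representation_gaps(training_groups, population_shares, threshold=0.05):
    """training_groups: list of group labels, one per training record.
       population_shares: dict mapping group -> expected proportion.
       Returns the groups whose share in the data differs from the
       population share by more than the threshold."""
    counts = Counter(training_groups)
    total = len(training_groups)
    gaps = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > threshold:
            gaps[group] = (observed, expected)
    return gaps

data = ["urban"] * 800 + ["rural"] * 200
print(representation_gaps(data, {"urban": 0.65, "rural": 0.35}))
# -> {'urban': (0.8, 0.65), 'rural': (0.2, 0.35)}

A check like this is only a starting point: passing it does not make a dataset fair, but failing it is a clear warning that the algorithm will see some learners far more often than others.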

The requirement for ‘informed’ consent is important, and it can be challenging, because in order for someone to give their ‘informed’ consent, they must understand enough about what will be done with their data to be truly ‘informed’. In other words, they need to be educated about AI to at least some extent.

Secondly, in addition to being extremely careful about the data we select, we must also ensure that the algorithms we design and use to process this data are well designed and, in the case of education, that they are written in a way that reflects what we know about how human learning takes place. One of the roots of bias in AI is the lack of diversity in the AI workforce, and we certainly need to do more to encourage people from a richer and more diverse pool to want to, and to be able to, work in AI.

Another problem is that, until very recently, many courses in computer science and education have not included enough about ethics and ethical design, so there is a training need there that must be addressed.

Finally, there is the purpose of the AI: we need to think about what the AI is being used for and whether that purpose is ethical. The same data and the same algorithms can be used for different purposes, so we need to think carefully about why we are using AI and what it is helping us to achieve. For example, we might collect video and audio data of students learning and use this data to detect emotional changes, such as a learner becoming anxious when they find something hard to learn. Such an application of AI could be used to provide extra support for these learners; however, it could also be used to reduce their learning opportunities, because they are deemed to lack robustness when challenged.

What are your predictions for the classroom of 2030?

I hope that the classroom of 2030 will be one where AI and Human Intelligence are working in harmony to provide the best possible support for every learner. I would like to see an Intelligent Backbone for education. This would help us to achieve a situation where there is far less visible technology in the classroom, with learners engaged in less screen time and more human interaction.

The intelligent backbone would help teachers orchestrate collaborative problem solving and project-based work. It would also enable seamless learning across school, home and in between so that we are far more resilient to unforeseen circumstances like COVID-19. And, of course, the intelligent backbone would enable the best learning resources to be made available to all learners in the most appropriate way for each learner, class, year group or institution.

The intelligence would be able to suggest and select resources on the basis of a detailed knowledge of each learner’s needs, and it would be able to provide teachers with highly specific feedback about exactly where each of their students is excelling and exactly where they need help.

But most of all, I hope that we will have realised that we must focus our education systems on the development of our human intelligence: in particular, the development of our very human abilities, the ones that cannot be automated, the ones that help us to understand ourselves and our intellect intimately and effectively, so that we know how to learn all that we need to learn, to work and learn with others, and to flourish as we live our AI-augmented lives.