In this long read, David Leslie reflects on Ofqual’s unforced error in its approach to A-level marking and outlines responsible and reflexive ways of acting to overcome future challenges in the data-intensive innovation environment.
Masked student protestors have been out again, this time carrying signs like “judge potential not postcode” and chanting “f*** the algorithm” in the wake of the A-level grading controversy. The presence of the masks reminds us that, despite a spell of seeming calm within the eye of a vicious storm, we are still in the middle of a deadly pandemic. The presence of the protestors reminds us that the pandemic is itself a reckoning.
In early June, when the statue of 17th century slave trader Edward Colston was bound at the neck and feet, toppled from its plinth and hoisted into Bristol Harbour, one aspect of such a reckoning came to the fore. Beyond the globally felt reverberations of George Floyd’s murder, the horrifying effects of systemic racism and structural injustice lay manifest in the brutally disproportionate impact of the SARS-CoV-2 outbreak on ethnic minorities and other vulnerable communities in the UK. The same week of Colston’s dive into the dock, Public Health England quietly published its first report on “Disparities in the risks and outcomes of COVID-19.” The review supported widespread analyses that emphasised how the adverse effects of historically entrenched patterns of discrimination manifest in the social production of pandemic vulnerability and higher mortality rates for communities of colour and underserved populations.
Now, months later, en route to perhaps the most consequential U-turn in the history of the Department for Education, justified public outrage has again reared its head on the streets. Amidst the debilitating rupture of normalcy triggered by the pandemic, the rising generation of students have not only had their hopes, dreams and hard work ground down into tabular data and fed into a government algorithm built largely behind closed doors, with little public licence and even less accountability; at bottom, the moral injury that many of them have experienced is rooted in the same subtle manacles of structural inequality that have been exploited by the pandemic itself.
Ofqual’s standardisation calculus adjusted grades on the basis of the past performance of historically unequal institutions (unequal in resources, class sizes and student support), thereby discriminating by proxy against those of disadvantaged classes, races and ethnicities, all while bolstering the prospects of the privileged. It favoured well-heeled private schools with smaller class sizes by honouring their predicted grades. And it placed the onus of appeal for those treated inequitably by the process on students who were already materially disadvantaged, thereby compounding discrimination with an impoverishment of recourse.
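To see how this sort of proxy discrimination arises mechanically, consider a deliberately simplified sketch in Python. This is an illustrative toy, not Ofqual’s actual model: the function, the small-cohort threshold, and the quantile-matching rule are assumptions made for exposition, loosely patterned on public descriptions of the approach.

```python
# Illustrative toy ONLY: a stand-in for the kind of historical
# standardisation described above, not the regulator's real model.
# The threshold and matching rule here are hypothetical.

def standardise(cohort_rankings, historical_grades, predicted_grades,
                small_cohort_threshold=15):
    """Assign grades to one school's cohort for one subject.

    cohort_rankings: students ordered best to worst by teacher ranking
    historical_grades: this school's past grades, sorted best to worst
    predicted_grades: teacher-predicted grades, one per ranked student
    """
    n = len(cohort_rankings)
    if n <= small_cohort_threshold:
        # Small cohorts (typical of private schools) keep teacher predictions.
        return dict(zip(cohort_rankings, predicted_grades))
    # Larger cohorts are forced onto the school's historical distribution:
    # the i-th ranked student receives the grade at the same quantile of
    # the school's past results, regardless of individual attainment.
    return {student: historical_grades[i * len(historical_grades) // n]
            for i, student in enumerate(cohort_rankings)}
```

The toy makes the injustice legible: a top-ranked student at a large school whose recent history contains no A* simply cannot be awarded one, while a comparable student in a small private-school class keeps their predicted grade untouched.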
The enraged outcries of a nation might have been predictable. But, shrouded in secrecy and disconnected from the individuals and communities most impacted by the grade adjustments, the development and deployment of Ofqual’s standardisation model were destined to fly blind into this second public reckoning. It may well be that, in fact, the reflexivity of members of the public—their critical sense of the moral stakes surrounding the Government’s use of this algorithmic tool—was ahead of that of the regulators themselves. From the protest signs and social media posts to interviews on news programmes, references to algorithmic bias and fairness, as well as to central concepts of data ethics and human-centred innovation, were seemingly everywhere. Indeed, the dark image projected by the brazen behaviour of the regulator—a Blakean picture where the hidden machinery of cold, discriminatory calculation churns away behind the windowless brick walls of the dark Satanic mills of the algorithmic age—was no longer acceptable to the moral imagination and the cultural common sense of those who revolted.
This reaction signals two things: first, it suggests that, perhaps unlike the anti-racism protests breaking out all over the world, this public reckoning was entirely avoidable—the unforced error of a quango caught unawares by the critical and transformative spirit of the times. Ofqual was one huge step behind. Second, it indicates that the disruption of the pandemic and the demand for new moral orders emerging from within both the protests and the public discourse are, as Arundhati Roy has written, as much a portal as they are a reckoning. They are a pathway between one world and the next, an opportunity “to break with the past and imagine the world anew.”

The challenge of rectifying the felt experience of moral harm so palpable in the chants and cries of the student protestors cannot and will not be addressed by “stitching our future to our past.” Rather, it can be addressed only by releasing the moral potential contained immanently in the critical history of the present. Stepping with poise into another, better society of tomorrow requires us to outstretch a lamplight set aglow by the hard-gained moral-practical insights that have informed and motivated the present impetus to societal transformation itself.
Some months ago, we published an article, “Tackling COVID-19 through Responsible AI Innovation: Five Steps in the Right Direction,” in the Harvard Data Science Review that attempts to engage in exactly this kind of lamplighting in the context of the pandemic. It emphasises that confronting the societal problems presented by the onset of COVID-19 through responsible data-driven innovation entails drawing on the critical and normative resources provided by applied moral thinking, responsible research and innovation, science and technology studies, and AI ethics. And it sets out several steps needed to start down a practice-based path to responsible innovation that is centred on open, interdisciplinary, accountable, equitable, and democratically governed processes. Here are a few of these steps that Ofqual would have done well to heed and should take up moving forward:
- Open research and share data responsibly: Opening research to outside eyes, especially in the context of a public health crisis, helps to secure and cultivate public trust through reproducibility, replicability and research integrity. Had Ofqual—in the public interest and with a mind to producing the most sound and reliable statistical model—opened its research to wider peer review, expert assessment and scientific oversight, it is very likely that many of the dubious design choices of its standardisation procedure would have been caught and fixed in advance of adversely affecting many thousands of young lives. Instead, the regulator responded to an offer of expert guidance from the Royal Statistical Society by demanding that the Society’s fellows sign a five-year non-disclosure agreement that would have well-nigh enshrouded the collaborative process in darkness and mystery.
- CARE and Act through responsible research and innovation: Putting into practice proper governance mechanisms for responsible research and innovation requires that policy makers and innovators alike proceed reflectively and deliberatively in anticipating societal risks and benefits; it demands that this anticipatory reflection be informed by open and inclusive dialogue with affected stakeholders; and it entails that all of these stakeholders be involved in participatory co-creation across the policy and innovation lifecycle. To this end, and building on EPSRC’s 2013 AREA protocol, we proposed a CARE and Act framework (Consider Context, Anticipate Impacts, Reflect on Purposes, Engage Inclusively, and Act Responsibly) as a heuristic to guide reflexive practices across the policy and innovation workflow.
Had Ofqual taken up even the first of these measures, i.e. carefully considering the context of the pandemic, it would have gained a working understanding of some of the circumstantial pitfalls of public health crises that have been flagged up and well documented by the World Health Organisation, the Nuffield Council on Bioethics, and the Council for International Organizations of Medical Sciences. These include: being aware of the magnified harmful effects of pandemics on vulnerable and disadvantaged communities; being vigilant in analysing the protracted effects of technology and policy interventions amid mass disruptions of public order and of social, moral, political, and legal norms; and being proactive in addressing the challenges of compromised individual and community consent under the duress of mass illness and the urgency surrounding the pandemic response.
- Generate and cultivate public trust through transparency, accountability, and consent: Building reason-based public confidence in interventions like Ofqual’s attempted grade standardisation requires end-to-end transparency to establish that design, discovery, and implementation processes have been undertaken responsibly and that outcomes are appropriately explainable and can be conveyed in plain language to all affected parties. Likewise, it entails end-to-end accountability so that, if things go wrong, responsible parties can be held answerable.
Such regimes of transparency and accountability should facilitate informed community and individual consent that reflects the contexts and reasonable expectations of affected stakeholders. Trust-building through active community consultation should, in this sense, be utilised to foster the development of equal and respectful relationships of reciprocal communication—true partnerships between citizens and Government. All three of these dimensions of responsible innovation practices (transparency, accountability, and consent) form the core of a recent collaboration between the Information Commissioner’s Office and The Alan Turing Institute called Project ExplAIn, which drew on citizens’ juries, public consultation, and desk-based research to build a detailed framework of what explanation means in the context of algorithmic decision-support. The co-produced guidance published as a result, Explaining Decisions Made with AI, should become required reading for any organisation seeking to explain and justify complex, algorithm-supported decision-making to individuals.
- Foster equitable innovation and protect the interests of the vulnerable: As is widely known (and well expressed in both academic and journalistic accounts of the ethical hazards of contemporary digital society), patterns of social inequity, marginalisation, and injustice are often ‘baked in’ to the data distributions on which statistical models are built and from which machine learning systems learn. Making this fact part of the running knowledge in data-intensive innovation environments is an essential precondition for instituting discrimination-aware practices that are properly guided by critical self-assessment and the need for vigorous bias mitigation (a minimal sketch of such a check follows below). More than this, the robust pursuit of equitable and just data-driven innovation requires that those scanning the horizons of possibility for potential innovation interventions, and their prospective use-contexts, first and foremost take into account the likely impacts of these interventions on the interests of the vulnerable and the underserved. Only in this way can the equality rights of each and every citizen, as well as their claims for social justice, be given due regard. In these tasks, Ofqual has categorically failed.
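As for the minimal check promised above, the following Python sketch shows one concrete form that discrimination-aware practice can take: an audit of outcomes across groups before any results are released. It is illustrative only, assuming grades encoded as numeric points and a pandas DataFrame with hypothetical column names; the `downgrade_rate_by_group` helper is invented for exposition, and a real audit would need far richer fairness metrics than a single downgrade-rate comparison.

```python
# A minimal, hypothetical bias audit: compare downgrade rates across
# groups (e.g. school type) before any grades are released.
import pandas as pd

def downgrade_rate_by_group(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Share of students whose final grade fell below the teacher-assessed
    grade, broken down by group. Grades are numeric points (A* = 6 ... U = 0)."""
    downgraded = df["final_points"] < df["teacher_points"]
    return downgraded.groupby(df[group_col]).mean()

# Toy data for illustration only.
results = pd.DataFrame({
    "school_type":    ["state", "state", "state", "private", "private"],
    "teacher_points": [6, 5, 4, 6, 5],
    "final_points":   [5, 4, 4, 6, 5],
})
print(downgrade_rate_by_group(results, "school_type"))
# school_type
# private    0.000000
# state      0.666667
# A gap like this is a red flag for proxy discrimination and should
# trigger review and bias mitigation before deployment, not after.
```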
As we now emerge from this Ofqual debacle—a society shaken by technocratic whiplash and yet whose future is more digital, more data-intensive and more connected than ever—it is critical that we resist the longing for a hasty return to the cold comforts of normality. Neither blanket apologies about ham-fisted harms done nor oblique, handwaving appeals to impossible tasks undertaken in the exceptional or unprecedented circumstances of the pandemic are sufficient to move us forward toward a time when the public can invest trust in its Government to engage in inclusive, equitable, democratically governed, and ethically informed practices of innovation.
The hard labour of structural change and the transformation of institutional attitudes and cultures happens only through long-term, forward-looking commitment coupled with continuous critical self-reflection. This latter task, however, is as much about taking a sober look in the mirror as it is about being willing to leave our prior, lesser selves behind. As Roy tells us, passing through the portal to a different future requires us to be willing to part with the heaviest of our baggage: “We can choose to walk through it, dragging the carcasses of our prejudice and hatred, our avarice, our data banks and dead ideas, our dead rivers and smoky skies behind us. Or we can walk through lightly, with little luggage, ready to imagine another world. And ready to fight for it.”