This week the ‘Women in Data Science and AI’ project at The Alan Turing Institute welcomes the release of our report ‘The Digital Revolution: Implications for Gender Equality and Women’s Rights 25 years after Beijing’,[1] commissioned by UN Women. Through an intersectional feminist lens,[2] the paper examines the social, political and economic factors that underpin the design and use of digital technologies, exploring the opportunities and risks such systems pose for gender equality across high-, low- and middle-income countries. The report focuses on data-driven technologies such as artificial intelligence (AI) and machine learning in three substantive areas: education, work and social/welfare services.

Our paper was written just as the coronavirus began to take hold. Now, only a few months later – but after what feels like years of change, with many uncertainties ahead – it is important to reflect upon its implications in the ‘new world’.

A key argument in the report is that the underrepresentation of women and lack of diversity in the tech sector form part of a feedback loop that reproduces existing (offline) structural inequalities,[3] encoding and even amplifying biases in AI systems and other technical products.[4] Such biases can seep into the construction of AI via (gender) data gaps (e.g. data sets used to train machine learning algorithms may under-represent certain groups or encode historical bias against already marginalised groups) and biased modelling processes (due to certain assumptions or decisions made by developers). As such, we argue that we cannot understand technologies – particularly data-driven, decision-making systems – as autonomous, neutral tools. Technology is not gender-, nor race-, nor socioeconomic-, nor disability-, nor age-neutral. It is not, as it is often promoted and perceived, objective and ‘inevitable’. Rather, tech is deeply political, inscribed with the preferences, values and choices of those who build it, and shaped by existing (often invisible) power structures in the world, risking harm to those (generally the most underprivileged in society) not represented in its production.
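The data-gap mechanism is easy to demonstrate. The sketch below is illustrative only: it uses synthetic data and a deliberately crude one-parameter ‘model’ (a single decision threshold fitted to maximise training accuracy). One group vastly outnumbers another in the training set, and the minority group's scores follow a shifted distribution; the learned threshold suits the majority group and misclassifies far more of the under-represented one:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, mean_pos, mean_neg):
    """Synthetic 1-D scores: n positives and n negatives drawn around group-specific means."""
    x = np.concatenate([rng.normal(mean_pos, 1.0, n), rng.normal(mean_neg, 1.0, n)])
    y = np.concatenate([np.ones(n), np.zeros(n)])
    return x, y

# Group A dominates the training data; group B is scarce and its scores are shifted.
xa, ya = make_group(950, mean_pos=2.0, mean_neg=-2.0)   # well represented
xb, yb = make_group(50,  mean_pos=5.0, mean_neg=1.0)    # under-represented
x_train = np.concatenate([xa, xb])
y_train = np.concatenate([ya, yb])

# "Model": choose the single threshold that maximises overall training accuracy.
thresholds = np.linspace(x_train.min(), x_train.max(), 500)
accs = [((x_train > t) == y_train).mean() for t in thresholds]
t_star = thresholds[int(np.argmax(accs))]

# Evaluate per group on fresh samples: aggregate optimisation has quietly
# traded away accuracy for the under-represented group.
xa_te, ya_te = make_group(500, 2.0, -2.0)
xb_te, yb_te = make_group(500, 5.0, 1.0)
acc_a = ((xa_te > t_star) == ya_te).mean()
acc_b = ((xb_te > t_star) == yb_te).mean()
print(f"threshold={t_star:.2f}  accuracy A={acc_a:.2f}  accuracy B={acc_b:.2f}")
```

The point is not the toy model but the pattern: when a group is scarce in, or systematically different within, the training data, any objective that optimises aggregate performance will tend to fit the majority and degrade silently for everyone else.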

Digital technology has never been more relevant than it is now, in the COVID era, when it is being leveraged in a number of promising ways in the pandemic response. But this is by no means the whole story. The pandemic and Black Lives Matter (BLM) protests have accentuated inequities and power structures globally, both online and offline. The COVID-19 global public health crisis has collided with existing, deeply embedded societal inequalities, catalysing another crisis in which many of the detrimental social and economic effects of the pandemic are being felt disproportionately by the least ‘powerful’ among us. Pre-pandemic concerns that data-driven technologies could reinforce entrenched inequities and harm those historically disadvantaged, as discussed in our paper, have thus intensified given the disparate impacts of the crisis on certain social groups.

The pandemic has not only exposed but exacerbated existing inequalities, for example, having disproportionate economic and social impact on women. The Women’s Budget Group point out that in the UK, 77% of healthcare workers are women, as are 83% of the social care workforce. Of the 3.2 million workers in ‘high risk’ roles, 77% are women. With schools and nurseries closed, increased responsibility for unpaid care work also fell mainly to women, who often struggled to juggle careers from home with domestic work. Other gendered impacts include a rise in domestic violence against women and girls during lockdown (the ‘Shadow Pandemic’); widening digital gender gaps and new barriers that girls, particularly in low-income countries, face in receiving a formal education; and estimates that women’s jobs are 1.8 times more vulnerable to the crisis than men’s jobs. Thus, in our recent report, the policy recommendations to advance gender equality within the digital society are now even more pertinent for mitigating worsening issues around technology and inequality in the pandemic. Here I draw out two of the nine recommendations from the paper, emphasising their importance going forward:

  1. Ethical frameworks for auditing, monitoring and governance of (AI) technologies must put (gender) equality at their core: In recent decades, a small group of large technology companies based in the Global North has emerged as a dominant force in the global economy. Recent technological initiatives such as digital contact tracing during the COVID pandemic (and potential use of facial recognition software during the BLM protests) make their monopolisation of markets and power (over data, in particular) now more palpable than ever. As such, it is critical that we attend to the human (and therefore, women’s) rights implications of this increasing concentration of economic and political power in the tech sector. A significant way to do so – to begin to regulate and hold accountable big tech – is through ethical frameworks centred around equality (e.g. administered by national AI ethics councils). Crucially, this goes far beyond technical debiasing methods, and includes determining what types of technological systems should and should not be used in different contexts. As we comment in our paper, intersectional feminist scholarship, as well as input from stakeholders with lived experience of discrimination, should inform ethical frameworks, and a number of promising initiatives are already taking positive steps in this space.
  2. National governments must tackle the gender data gap in terms of both quality and quantity, while maintaining privacy and data protection as the highest concern: The ‘gender data gap’ is the failure to collect sufficient gender-disaggregated data. Important data sets about women are missing or incomplete, and data are often of low quality and availability (as we found in our own project work). The need to tackle the gender data gap is extremely important in the current pandemic, and valuable projects such as Women Count by UN Women and WHO and Global Health 50/50 are collecting and collating data on gendered impacts of the coronavirus.[5] Disaggregating data by gender is a key first step, but policymakers and the private sector must also push to disaggregate data by other facets such as race, so that women are not treated as a homogeneous group. Disaggregated data can be a tool for justice and accountability, particularly in the current climate, and governments should invest in strong data infrastructures, collecting and deploying such data in transparent and ethical ways.
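As a small illustration of what disaggregation means in practice, the sketch below uses a hypothetical table of survey records (all field names and values are invented for the example) to show how a statistic broken down only by gender can mask a disparity that a gender-and-race breakdown reveals:

```python
import pandas as pd

# Hypothetical survey records; the columns and values are illustrative only.
records = pd.DataFrame({
    "gender":   ["woman", "woman", "man", "woman", "man", "woman"],
    "race":     ["black", "white", "black", "black", "white", "white"],
    "employed": [0, 1, 1, 0, 1, 1],
})

# Aggregating by gender alone hides within-group differences...
by_gender = records.groupby("gender")["employed"].mean()

# ...whereas disaggregating by gender *and* race surfaces them.
by_gender_race = records.groupby(["gender", "race"])["employed"].mean()

print(by_gender)
print(by_gender_race)
```

In this toy table the employment rate for women overall is 50%, but splitting by race shows the rate is 0% for one subgroup and 100% for another, a gap the single-axis figure conceals entirely. The same logic applies to real statistical infrastructures, at far greater scale and with the privacy safeguards the recommendation insists on.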

Going forward we must ensure that technologies foster equality instead of becoming assimilated into the dominant structures of power. Pertinently, the era of the pandemic has ‘opened up’ a space in which to rethink and redress – among other fundamental problems – equality and the role that technical systems ought to play. At a moment when technology is increasingly being marshalled to make choices of global consequence, we have an opportunity to emerge from these crises with a greater sense of the inequities that surround us, and greater commitment to working towards justice through more inclusive and democratic practices of technological research and development.

Read the report and visit the ‘Women in Data Science and AI’ project hub for a curated set of resources on getting into tech, building a data science/AI career, and creating fair and equitable AI.


[1] The paper is part of a large-scale review of progress on the implementation of the Beijing Declaration and Platform for Action (a progressive ‘blueprint’ for advancing women’s rights drawn up in 1995 during the Fourth World Conference on Women), twenty-five years after its adoption.

[2] Gender intersects with multiple aspects of difference and disadvantage. So, for example, women who are poor or belong to racial minorities can experience negative effects of digitalisation and automation more acutely.

[3] The systemic inequality of opportunity for women in the workplace (e.g. through ‘masculine defaults’ and stereotypes, hostile cultures, annexing of prestige fields and leadership roles, and non-gender-inclusive labour market policies) severely limits their participation in the development of digital technologies.

[4] Emerging constructions of bias in AI include, for instance, recruiting tools biased against women.

[5] Initiatives such as Data for Black Lives also tackle racial data gaps.