Introduction

Internet filtering is now widely used to prevent access to restricted websites. However, this process can have disproportionate negative effects on minority groups due to the accidental blocking of legitimate sites via automated decision-making. This project uses a combination of machine learning and network measurement approaches to (1) identify blocked websites, and (2) analyse which groups are affected by accidental ‘overblocking’ of legitimate sources. The project will create new means to monitor and map blocking decisions, and identify ways to protect disadvantaged groups from discrimination via poorly implemented automated blocking systems.

Explaining the science

Internet filtering, sometimes known as censorship, is typically implemented on the basis of high-level policy decisions. Depending on the context, filtering may be applied to stop young people from accessing pornography, to prevent radicalisation, or to limit exposure to content that promotes harmful behaviours such as eating disorders.

In many cases, there are anecdotal reports of ‘overblocking’ as a result of these decisions: sexual health charities and LGBTQ groups are mistakenly blocked as pornography, refugee support groups are filtered out for mentioning particular conflicts and regions of the world, and anorexia support groups are blocked for their explicit discussion of eating disorders.

Due to the dynamic nature of online content, machine learning approaches have increasingly been considered as a potential solution, both for keeping track of new content and for automatically adding pages to filter lists. When such systems are combined with automated decision-making processes, there is an increasing risk of bias and discrimination becoming structurally embedded in the filtering systems themselves. These concerns were outlined in a recent UK Council for Child Internet Safety (UKCCIS) working group report, which recommended white-listing particular sites to avoid overblocking. However, the report did not consider the negative impacts of these approaches in any detail.
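
To make this risk concrete, the sketch below (in Python, using scikit-learn) shows how a naive text classifier of the kind that can feed automated filter lists may misclassify a sexual health charity page that shares surface vocabulary with blocked pages. All training data, labels, and the charity page text are invented for illustration; this is a toy example, not a description of any deployed filtering system.

    # Illustrative sketch only: a naive page classifier of the kind that can
    # feed automated filter lists. All training data here is invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy training pages, labelled 1 (block) or 0 (allow).
    pages = [
        "explicit adult sex videos and pictures",   # block
        "adult site with explicit sex content",     # block
        "local news, weather and sport headlines",  # allow
        "recipes, cooking tips and dinner ideas",   # allow
    ]
    labels = [1, 1, 0, 0]

    classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
    classifier.fit(pages, labels)

    # A hypothetical sexual health charity page shares surface vocabulary
    # with the 'block' class, so this naive classifier is likely to flag it.
    charity_page = "free confidential advice on sex, sexual health and contraception"
    print(classifier.predict([charity_page]))  # likely [1], i.e. overblocked

The failure mode is structural: the classifier has no notion of intent or context, only of shared vocabulary, which is precisely how legitimate support resources come to be filtered alongside the content the policy actually targets.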

Project aims

The key objective of this research is to determine how internet filtering, and the factors that drive it, can negatively affect vulnerable groups in society. As internet filtering becomes more widely employed, it is crucial to gain a more nuanced understanding of its effectiveness in regulating behaviour, the limitations of the approach, and the negative consequences of filtering decisions.

This project will take an ambitious, novel, and interdisciplinary approach that incorporates network measurement and data science techniques to analyse social data sources derived from web scraping and social media. Internet filtering and, more broadly, censorship affect the lives of hundreds of millions of internet users on a daily basis. The project will determine and describe key elements of such filtering, both in its application and in its wider effects on UK society.
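
As one concrete illustration of the network-measurement side, the sketch below (in Python, using the third-party dnspython library; the target domain and reference resolver are arbitrary placeholders) compares answers from the system's default DNS resolver with those of a public resolver. Disagreement is one possible signal of DNS-level filtering, not proof of it, since content delivery networks routinely return different addresses to different resolvers; a real measurement pipeline would aggregate many vantage points, record types, and techniques.

    # Illustrative sketch: one signal of DNS-level filtering, comparing the
    # local resolver's answer against a public reference resolver.
    import socket
    import dns.resolver  # third-party: pip install dnspython

    def local_lookup(domain):
        # Resolve via the system's default resolver (the one an ISP may filter).
        return {info[4][0] for info in socket.getaddrinfo(domain, 80, socket.AF_INET)}

    def reference_lookup(domain, nameserver="8.8.8.8"):
        # Resolve the same name via a public resolver as a reference point.
        resolver = dns.resolver.Resolver()
        resolver.nameservers = [nameserver]
        return {record.address for record in resolver.resolve(domain, "A")}

    def possible_dns_filtering(domain):
        # Disjoint answer sets are a signal of interference, not proof:
        # CDNs legitimately serve different addresses to different resolvers.
        try:
            return local_lookup(domain).isdisjoint(reference_lookup(domain))
        except (socket.gaierror, dns.resolver.NXDOMAIN):
            # Resolution failing on one side only is itself a blocking signal.
            return True

    print(possible_dns_filtering("example.org"))  # placeholder target domain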

Applications

The outcomes of this research will be analyses of how internet filtering decisions are applied and of their real-world effects on UK society, informed by observations of international internet filtering behaviour. The outputs will support informed and evidence-based debate on the most effective ways to mitigate negative behaviour online and to protect vulnerable groups, while minimising negative fallout from the blocking of resources.

The project will aim to inform UK policy on filtering, overblocking, and discrimination by: studying the outcomes of existing, deployed filtering systems; exploring the potentially discriminatory effects that can arise from their naive application; and understanding the role and risks of automated approaches.

Organisers

Contact info

[email protected]