This Interest Group brings together researchers with an interest in fairness, transparency and privacy from across The Alan Turing Institute.
There are increasingly many examples where data collection and analysis risk oversharing personal information, enshrining biased decision-making in code, or issuing unwelcome decisions without explanation or recourse. Addressing a large part of these problems requires technical approaches, in order to develop:
- Robust privacy-preserving data analysis techniques which ensure that inappropriate information is not disseminated.
- Mathematically provable methods which ensure that automated systems do not inappropriately discriminate against minority groups or those with protected characteristics, such as age, gender, sexuality, religion or disability status.
- Intelligible explanations of complex processes used by, and decisions emerging from, machine learning systems.
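As one concrete illustration of the first point, the Laplace mechanism from differential privacy releases an aggregate statistic while bounding what it reveals about any single individual. The sketch below is illustrative only (the function names and example data are ours, not an implementation used by the Institute): a counting query has sensitivity 1, so adding Laplace noise with scale 1/ε gives ε-differential privacy.

```python
import math
import random

def laplace_sample(scale: float) -> float:
    """Draw one sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    Adding or removing one record changes a count by at most 1
    (sensitivity 1), so Laplace(1/epsilon) noise suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_sample(1.0 / epsilon)

# Hypothetical data: ages in a small survey.
ages = [34, 29, 51, 47, 62, 38, 45]
noisy = private_count(ages, lambda a: a >= 40, epsilon=0.5)
```

Smaller ε gives stronger privacy but noisier answers, which is exactly the kind of trade-off discussed below.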
In many cases, adding these measures introduces inevitable trade-offs. We aim to identify the best achievable trade-offs and to promote discussion about appropriate standards and social policy.