Algorithms shape the way we see the world and are being used to make decisions about increasingly sensitive parts of our lives, from our eligibility for a loan to the length of our sentence if we commit a serious crime.

But how do they work? How do we know how a decision was made, and whether it is fair? And what can we do if decisions made by machines contain prejudice or bias?

Innovation brings with it moral responsibilities that must be addressed if we want to create a data- and AI-enriched future that benefits all. We aim to design and deliver fair and ethical algorithms by bringing together cutting-edge technical skills with expertise in ethics, law and policy.

What can data science and AI do?

  • Detect and remove bias in machine decisions
  • Develop practical approaches to providing appropriate transparency
  • Understand human behaviour to help identify bias
  • Understand human psychology to provide clear explanations of algorithmic decisions
  • Protect personal and corporate privacy
  • Examine not just algorithms themselves, but how they are used in society
  • Tackle asymmetries of power and knowledge
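To make the first point above concrete, here is a minimal sketch of one common way to detect bias in machine decisions: measuring demographic parity, i.e. whether a system's positive-outcome rate differs between groups. The loan-decision data, group labels, and helper functions are illustrative assumptions, not part of any specific system described here.

```python
def positive_rate(decisions, groups, group):
    """Fraction of positive decisions received by members of `group`."""
    member_outcomes = [d for d, g in zip(decisions, groups) if g == group]
    return sum(member_outcomes) / len(member_outcomes)

def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-outcome rates across all groups.

    A gap near 0 suggests groups receive positive decisions at similar
    rates; a large gap flags a potential fairness problem to investigate.
    """
    rates = [positive_rate(decisions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Hypothetical loan decisions (1 = approved) for applicants in two groups.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(decisions, groups)
# Group A is approved 75% of the time, group B only 25%: gap = 0.5
```

Demographic parity is only one of several competing fairness criteria (others compare error rates or calibration across groups), and which is appropriate depends on the context in which the algorithm is used.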

What are the benefits for science, society, and the economy?

  • Instilling an ethically engaged approach to the development and use of data science and AI in society
  • New standards in government and private sector use of algorithms
  • Consideration of appropriate auditing of algorithms
  • Improving users’ understanding of and trust in how and why an algorithmic decision has been made
  • Updates to the governance or legal framework around algorithmic decision-making
  • Systems that incorporate the ethical features users need from the outset