Introduction
As AI technology continues to advance, small UK businesses must embrace its potential to drive growth and stay competitive in the market. Achieving fairness with this technology is crucial for complying with regulatory and ethical standards, but it also poses a considerable challenge for small businesses, as most lack the necessary skills and resources to adhere to existing ethical AI guidelines. This project addresses that challenge by enabling a proactive fairness review approach in the early stages of AI development, providing developer-oriented methods and tools for self-assessing and monitoring fairness.
This project is one of the winners of the UK government's Fairness Innovation Challenge, managed by the Department for Science, Innovation and Technology (DSIT) and funded by Innovate UK.
Explaining the science
Large Language Models (LLMs) are susceptible to bias, like any neural architecture. Their complex and extensive structure, combined with the vastness of their training datasets, can inadvertently reinforce and amplify existing biases present in the data. This can result in unfair treatment and discrimination. In financial contexts, bias can have significant implications for both businesses and customers: for instance, an organization might reject a potentially profitable business, or a customer may be denied a service.
This project focuses specifically on ensuring fairness in LLMs' generation and classification capabilities within financial tasks. Our proactive approach involves a comprehensive evaluation of fairness and the implementation of measures to mitigate biases. It considers security and interaction challenges alongside fairness analysis, ensuring that bias in financial applications is addressed holistically.
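To make the idea of evaluating classification fairness concrete, the sketch below computes a common group-fairness metric, the demographic parity difference, over hypothetical binary loan-approval predictions. The predictions, group labels, and the two-group assumption are illustrative only, not project data.

```python
def demographic_parity_difference(predictions, groups):
    """Absolute difference in positive-outcome rates between two groups.

    Assumes exactly two distinct group labels and binary (0/1) predictions.
    """
    rates = {}
    for g in set(groups):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    rate_a, rate_b = rates.values()
    return abs(rate_a - rate_b)


# Hypothetical approval decisions (1 = approve) for applicants in groups A and B.
preds = [1, 1, 0, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Group A is approved at rate 0.75, group B at 0.25, so the disparity is 0.5.
print(demographic_parity_difference(preds, groups))  # 0.5
```

A value of 0 would indicate that both groups receive positive outcomes at the same rate; larger values flag a potential fairness issue worth reviewing.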
More information is available on the project's GitHub page.
Project aims
- Create design patterns for fair AI development: Patterns support transferring knowledge between disciplines, producing concrete and actionable outputs, and ensuring effective technical development "by design". We have seen similar success in software development with object-oriented design patterns, which helped developers apply basic security and maintainability principles while coding.
- Develop continuous integration/continuous deployment (CI/CD) tools: Much as static analysis and automated monitoring tools support security and privacy evaluation, these tools will let developers annotate, review and monitor fairness issues in their development workflows.
- Empower developers to integrate fairness considerations into their AI systems: Leveraging our institute's internal support and close networks, we will publish a set of technical reports and educational content to support SMEs in improving their codebases for proactive fairness monitoring.
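To illustrate the CI/CD aim above, the minimal sketch below shows how an automated fairness check might gate a pipeline stage: the script exits non-zero (failing the build) when a measured disparity exceeds a configured threshold. The metric value and the threshold are illustrative assumptions, not defaults from the project's tooling.

```python
import sys

# Maximum tolerated disparity; chosen purely for illustration.
THRESHOLD = 0.1


def ci_fairness_gate(disparity: float, threshold: float = THRESHOLD) -> int:
    """Return a process exit code: 0 passes the pipeline stage, 1 fails it."""
    if disparity > threshold:
        print(f"FAIL: disparity {disparity:.2f} exceeds threshold {threshold:.2f}")
        return 1
    print(f"PASS: disparity {disparity:.2f} within threshold {threshold:.2f}")
    return 0


if __name__ == "__main__":
    # In a real pipeline the disparity would come from an evaluation run over
    # the model's outputs; here we hard-code a value for demonstration.
    sys.exit(ci_fairness_gate(0.05))
```

Because CI systems treat a non-zero exit code as a failed step, a check of this shape slots directly into an existing pipeline configuration without extra infrastructure.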
Applications
The project's main goal is to make it easier for small businesses to develop AI, particularly LLMs in the finance domain, in a fair way right from the start. Within this goal, we aim to produce three main outputs:
- To create a set of recipes that can act as blueprints for fair AI development. These recipes are sets of instructions that help developers build fair AI systems naturally within the development process.
- To develop tools that make it simple for developers to check, and continuously monitor, whether their AI systems are fair. These tools will be integrated into the development process to flag problems in their software. The idea is to catch fairness issues early, making it easier for businesses to comply with the rules.
- To support developers in small businesses with educational content. We will create reports and educational materials to help developers understand and integrate fairness into their AI systems.