The FAIR programme brings together academia and industry to advance research and develop practical and scalable solutions needed to fully realise the transformational benefits of responsible adoption of AI across the financial services industry. The programme will also drive innovation that maintains public trust and meets regulatory expectations whilst acting as a beacon for responsible AI adoption globally.
Explaining the science
AI technologies have the potential to unlock significant growth for the financial services sector through novel personalised products and services, improved cost-efficiency, increased consumer confidence, and more effective management of financial, systemic, and security risks. However, there are currently significant barriers to the adoption of these technologies. These stem from a capability deficit in translating high-level principles for the trustworthy design, development, and deployment of AI (safety, fairness, privacy-awareness, security, transparency, accountability, robustness, and resilience) into concrete engineering, governance, and commercial practice.
In developing an actionable framework for trustworthy AI, the major research challenge lies in resolving the tensions and trade-offs that inevitably arise between these aspects in specific application settings. FAIR seeks to realise high-performing solutions by developing cutting-edge AI methodologies, drawing on cross-disciplinary expertise from statistics, computer science, and mathematics, alongside the social sciences.
The project has five main research themes:
1. Robustness and Resilience will provide fundamental advances in the theory and practice of sequential decision-making and will push the state of the art in offline-to-online learning.
2. Privacy and Security will lead to a greater understanding of the challenges of developing privacy-enhancing technologies (PETs) in the context of the Financial Services industry.
3. Fairness and Transparency focuses on interpretability and trust in AI-based decision-making. This will move academic approaches to trustworthy AI from the lab into real-world practical use and will enable the development of methodologies for improving reliable human-machine performance that go beyond a myopic focus on the algorithm alone.
4. Verification and Accountability focuses on the theoretical foundations and software tools for continual validation and verification of AI components. This will involve providing certifiable guarantees of robustness against distributional, adversarial and strategic interventions, with respect to objectives such as safety and fairness.
5. Integration Environment will develop synthetic data generation methodologies, allowing statistically accurate but fictional data to be generated in a variety of settings. Synthetic Data Generators (SDGs) will enable researchers to work with data in safe environments and to share and link data in settings where, currently, this is not possible due to regulatory or privacy constraints.
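To make the synthetic data idea in theme 5 concrete, the sketch below fits simple per-field statistics on a handful of real records and returns a sampler that produces statistically similar but entirely fictional rows. This is an illustrative independent-Gaussian toy, not the programme's actual methodology; the field names and numbers are invented for the example, and a real SDG would also need to model cross-field correlations and privacy guarantees.

```python
import random
import statistics

def fit_gaussian_generator(records):
    """Fit a per-field mean and standard deviation on real numeric
    records, then return a sampler yielding fictional rows with the
    same marginal statistics. Cross-field correlations are ignored."""
    columns = list(zip(*records))  # column-wise view of the data
    params = [(statistics.mean(col), statistics.stdev(col)) for col in columns]

    def sample(n, seed=None):
        rng = random.Random(seed)
        return [tuple(rng.gauss(mu, sigma) for mu, sigma in params)
                for _ in range(n)]

    return sample

# Toy "real" data: (balance, monthly_spend) pairs -- purely illustrative.
real = [(1200.0, 300.0), (1500.0, 320.0), (900.0, 250.0), (1100.0, 280.0)]
sample = fit_gaussian_generator(real)
synthetic = sample(1000, seed=42)
```

Because the synthetic rows are drawn from fitted distributions rather than copied from records, they can be shared or linked in environments where the underlying data could not be, which is the motivation given above.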
We are driven by our vision to enable the finance sector to leverage transformational benefits through the responsible adoption of AI by:
- Developing an actionable framework for safe and trustworthy deployment of AI in financial services, underpinned by foundational research methodologies formulated by cross-disciplinary and cross-sectoral teams.
- Developing digital sandbox environments to enable validation, testing, and co-evaluation of emerging technologies in a transparent manner.
- Identifying industry-wide standards and processes to address trade-offs between regulatory and ethical dimensions facing industry and regulators across a range of contexts and use-cases.
FAIR is a partnership between HSBC and The Alan Turing Institute, supported through an investment from EPSRC. It is one of eight business-led Prosperity Partnerships announced in support of the government’s ambitious new Innovation Strategy.