Introduction

The Financial Conduct Authority (FCA) and The Alan Turing Institute are working on a year-long collaboration on AI transparency. In this blog post, we explain the motivation for pursuing such a project and present an initial framework for thinking about transparency needs in relation to machine learning in financial markets.

A recent survey on machine learning published by the FCA and the Bank of England highlights that financial services are witnessing rapidly growing interest in artificial intelligence (AI). While the use of AI has the potential to enable positive transformations across the industry, it also raises important ethical and regulatory questions. Especially when they have a significant impact on consumers, AI systems must be designed and implemented in ways that are safe and ethical. From a public policy perspective, there is a role for government and regulators to help define what these objectives mean in practice.

The Information Commissioner’s Office has today launched its own consultation on the use of AI with draft proposals on how to audit risk, governance and accountability in AI applications.

We believe the concept of transparency provides a useful lens for reflecting on relevant ethical and regulatory issues and thinking about strategies to address them. As such, transparency can play a key role in the pursuit of responsible innovation by helping to secure the benefits of digital transformation in financial services. Given the wide range of AI use cases across the industry, this role extends equally to retail and wholesale financial markets as well as regulation by public bodies.

This blog post aims to contribute to the debate on the role of AI transparency as an enabler of beneficial innovation. It proposes a high-level framework for thinking about transparency needs concerning uses of AI in financial markets, resonating with recent work by the OECD, the European Commission’s High-Level Expert Group on AI and the Information Commissioner’s Office (ICO). As part of their joint project, the FCA and The Alan Turing Institute propose to explore the practical application of this framework in workshops with industry and civil society stakeholders.

Generally speaking, AI transparency can be defined as stakeholders having access to relevant information about a given AI system. In practice, four guiding questions are worth emphasising:

  1. Why is transparency important?
  2. What types of information are relevant?
  3. Who should have access to these types of information?
  4. When does it matter?

Why is transparency important?

The need or desire to access information about a given AI system may be motivated by a variety of reasons: there is a diverse range of concerns that may be addressed through transparency measures. In thinking about transparency interests, it is thus important to bear in mind the diversity of rationales that may underpin them.

One important function of transparency is to demonstrate trustworthiness which, in turn, is a key factor for the adoption and public acceptance of AI systems.

Providing information may, for instance, address concerns about a particular AI system’s performance, reliability and robustness; discrimination and unfair treatment; data management and privacy; or user competence and accountability.

There are further potential transparency rationales in the context of customer-facing applications. For instance, transparency may enable customers to understand and — where appropriate — challenge the basis of particular outcomes. An example would be an unfavourable loan decision based on an algorithmic creditworthiness assessment that involved factually incorrect information.

Information about the factors that determine outcomes may also enable customers to make informed choices about their behaviour with a view to achieving favourable outcomes. An illustration for this rationale would be the value to customers of knowing that credit scores depend on the frequency of late payments.

What types of information are relevant?

Transparency interests relate to a wide range of different types of information. In mapping them out, we can distinguish two broad categories.

The first category concerns information about the ‘inner workings’ of the AI model. Potentially relevant information in this category includes the model code itself as well as any other information that may shed light on the relationship between model inputs and model outputs.

We may refer to this category as model-related information and, correspondingly, to the accessibility of such information as model transparency.

The second category consists of different types of information that concern the process of developing and using the AI system in question. Examples of such process-related information include the simple information that an AI system is being deployed in a given context, as well as a wide range of more detailed pieces of information relating to the different phases of the system’s lifecycle.

There are different ways of breaking down AI lifecycle phases. In addition to a broad distinction between development and use, a detailed breakdown may distinguish, for example, between business case development and problem formulation, design, data procurement, building, testing and validation, deployment, and monitoring.[1]

[Figure: AI transparency graphic]

Questions regarding model transparency have been prominent in AI governance discussions. These are reflected in debates about the merits of inherently interpretable model types and approaches to achieving post-hoc explainability, in which an AI process is analysed using supplemental methods and tools to shed light on how it reaches its conclusions.

At the same time, many common concerns raise process-related questions. Information about the quality of the data that was used in developing an algorithmic decision-support tool, for example, can play an important role in addressing concerns about bias. Process transparency thus represents an equally important component of the AI transparency toolbox and can play a significant role in demonstrating trustworthiness and advancing responsible innovation. Rather than narrowly focusing on questions of model transparency, a balanced perspective on transparency needs will thus be based on a broader assessment of possible transparency measures that involve model-related as well as process-related information.

As well as relating to different phases in the AI lifecycle, process transparency can involve different levels of information. Such levels can be distinguished by whether disclosure of the information in question addresses relevant concerns directly or in one of several more indirect ways.

We may, for instance, distinguish between three such levels:

  • ‘Ground-level’ information, such as ‘What was the initial problem formulation?’ or ‘What were the results of the tests conducted in the validation phase?’
  • Meta-level information, such as ‘How was the initial problem formulation developed?’ or ‘What kinds of tests were conducted in the validation phase?’
  • Governance-level information, such as ‘What corporate procedures govern the process of problem formulation or validation?’

Who should have access to relevant types of information?

There is a broad range of stakeholder types that may have transparency interests in relation to a given AI system. A comprehensive approach to thinking about AI transparency will consider all relevant stakeholders and possible differences between their respective transparency interests.

The transparency interests associated with different stakeholder roles may, for instance, be motivated by different reasons, differ in strength, or relate to different types of information. Moreover, reasons against disclosing information to certain types of stakeholders may not apply to others.

Broad distinctions can be drawn between (i) stakeholders that are situated within the firm that deploys the AI system in question; (ii) the firm’s customers; (iii) stakeholders that perform a regulatory function; and (iv) other third parties, including a firm’s shareholders and the general public.

While it is easy to see that transparency interests may differ between these broad categories, there are also important distinctions within individual categories. In terms of the firm deploying an AI system, for example, there will be a range of distinct internal roles that have noteworthy but potentially differing transparency interests, such as users of the system, risk and compliance managers, customer service staff, or senior management.

When it comes to regulatory authorities, transparency interests may differ as well, depending, for instance, on the distinction between prudential and conduct-related concerns.

Regarding the information needs of consumers, there can be important differences between the needs that arise from the perspective of consumers in the role of individual customers and those that arise from the perspective of consumers as a collective or as citizens. Firms’ decisions about the provision of information should be sensitive to these differences.

When does the provision of different types of information matter?

The landscape of AI use cases in financial services is characterised by a high degree of diversity. Uses range from customer-facing to back-office applications, comprise retail and wholesale contexts, and include applications that do or do not involve personal data. Correspondingly, there is a significant amount of variation between use cases with respect to the ethical and regulatory concerns they may raise and the extent of, as well as the reasons behind, stakeholders’ interests in accessing different types of information. A well-informed approach to AI transparency will be sensitive to such variations.

Given that the profiles of stakeholders’ transparency interests are likely to differ from use case to use case, there may be strong arguments for making particular types of information accessible when dealing with some use cases but not when dealing with others.

Putting the pieces together – the ‘transparency matrix’

In summary, the task of developing an organisation’s approach to AI transparency involves identifying a wide range of potentially relevant types of information and deciding how to deal with each of them. Reflecting on the ‘why’, ‘who’ and ‘when’ of transparency raises the following three especially salient considerations:

  • Rationale-dependence: decisions about the provision of different types of information may depend on the reasons that motivate stakeholders’ interests in transparency.
  • Stakeholder-specificity: decisions about the provision of different types of information may differ between stakeholder types (e.g. because their respective transparency interests are motivated by different reasons).
  • Use case-dependence: decisions about the provision of different types of information may depend on the nature of the particular use case (e.g. because the transparency interests of different stakeholder types and the reasons that motivate them vary between use cases).

Given these complexities, questions of AI transparency cannot be reduced to a simple question of ‘Which types of information should be made accessible?’ Instead, a nuanced approach to AI transparency will involve answering the following question:

‘For a certain type of AI use case, who should be given access to what types of information and for what reason?’

To answer this question, decision-makers may find it helpful to develop a ‘transparency matrix’ that, for a particular use case, maps different types of relevant information against different types of relevant stakeholders.

This matrix can then be used as a tool to structure a systematic assessment of transparency interests. It provides a basis for considering different stakeholder types one by one, identifying their respective reasons for caring about transparency, and, considering these reasons, evaluating the case for making the different types of information listed in the matrix accessible to a given stakeholder type.
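To make the idea concrete, the matrix can be sketched as a simple data structure that maps information types against stakeholder types, recording for each cell whether access is warranted and the rationale behind that decision. The sketch below is purely illustrative: the use case, the stakeholder and information-type names, and every access decision and rationale are hypothetical assumptions, not recommendations from this post.

```python
# A minimal sketch of a 'transparency matrix' for one hypothetical use case
# (an algorithmic creditworthiness assessment). All names and decisions
# below are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class Entry:
    accessible: bool  # should this stakeholder have access to this information?
    rationale: str    # the reason motivating (or limiting) access

# Rows: types of information; columns: stakeholder types.
matrix = {
    "model code": {
        "risk and compliance": Entry(True,  "internal model validation"),
        "individual customer": Entry(False, "limited value to the customer"),
        "conduct regulator":   Entry(True,  "supervisory review"),
    },
    "validation test results": {
        "risk and compliance": Entry(True,  "ongoing monitoring"),
        "individual customer": Entry(False, "better served by summaries"),
        "conduct regulator":   Entry(True,  "evidence of due process"),
    },
    "factors behind an individual decision": {
        "risk and compliance": Entry(True,  "complaint handling"),
        "individual customer": Entry(True,  "enable challenge of outcomes"),
        "conduct regulator":   Entry(True,  "fairness assessment"),
    },
}


def access_profile(stakeholder: str) -> dict:
    """Collect each information type's entry for a single stakeholder."""
    return {info: columns[stakeholder] for info, columns in matrix.items()}


# Walk one column of the matrix: what should an individual customer see?
for info, entry in access_profile("individual customer").items():
    print(f"{info}: {'yes' if entry.accessible else 'no'} ({entry.rationale})")
```

Working through the matrix column by column, as `access_profile` does, mirrors the assessment described above: each stakeholder type is considered in turn, and each information type is evaluated against the reasons that stakeholder has for caring about transparency.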

This exercise, completed separately for different use cases, offers a way to integrate considerations of rationale-dependence, stakeholder-specificity and use case-dependence in a systematic manner.

The opportunities and risks associated with the use of AI models depend on context and vary from use case to use case. In the absence of a one-size-fits-all approach to AI transparency, a systematic framework such as the one set out above can assist in identifying transparency needs and deciding how best to respond to them, bringing into focus the respective roles of process-related and model-related information in demonstrating trustworthiness and contributing to beneficial innovation.

[1] For more detailed discussions of lifecycle phases, see the ICO blog, https://ico.org.uk/about-the-ico/news-and-events/ai-blog-an-overview-of-the-auditing-framework-for-artificial-intelligence-and-its-core-components/ and Understanding AI Ethics and Safety (The Alan Turing Institute), https://www.turing.ac.uk/research/publications/understanding-artificial-intelligence-ethics-and-safety


This blog is cross-posted on the FCA Insight webpage.

The Alan Turing Institute will be hosting a session on AI, public policy and regulation at their showcase event AI UK (on 25 March from 09:00 – 10:45). This session, chaired by Professor Helen Margetts, will feature a panel discussion with Nick Cook (Director of Innovation at the FCA) and senior executives from other key regulatory bodies in the UK. The discussion will focus on the implications of AI and data science for regulators, including the governance questions raised by emerging technologies in financial services and other data-intensive industries. 

Tickets are on sale now. The full programme is available on the AI UK website.