Alexa, Siri, Eno and Kai have rapidly become an integral part of our daily lives. They act as our personal assistants and smart home hubs, as well as chatbots advising us on customer queries. Robo-advisors are a special type of chatbot: they provide specific financial advice, such as investment planning and portfolio management. AI is the technology underpinning both chatbots and robo-advisors.
Chatbots offering customer service are often more cost-effective and faster than humans at repetitive tasks. Data on customer preferences and interests enables banks to offer relevant, tailored and personalised products to customers. Interestingly, customers seem to trust AI more than humans for financial advice on wealth management and financial planning.[1] The apparent absence of self-interest in chatbots seems to be the reason behind this trust amongst consumers.
Impartiality
Impartiality and the absence of judgement are reasons why customers like robo-advice in wealth management. Robo-advice is said to be cheaper, quicker and more personalised than human advice, thanks to the use of predictive analytics and machine learning. Both the Financial Conduct Authority in the UK and the Department of Labor in the US agree that robo-advice can be a useful and valuable tool for investors. Algorithms increasingly play a role in analysing customers' savings and devising retirement plans. Money, and especially spending habits, can be a difficult and embarrassing topic for some people to discuss; speaking to a chatbot or robo-advisor takes that embarrassment away.
A recent survey reveals that 68% of customers would be prepared to use robo-advice in their retirement planning, and the efficiency of robo-advice translates into lower fees for customers. Yet human advisers are still preferred for complex financial products such as equity derivatives, and when customers wish to complain or discuss a complicated matter or situation.
Bias and discrimination
Although chatbots will learn and improve over time, they clearly have their limitations. Complex advice seems to be outside the current expertise of robo-advisors and chatbots. Bias can also creep in during the training and learning stages of machine learning. Machine learning is so sophisticated that even the developers who created the algorithms might not fully comprehend how they have evolved. The danger is that the algorithms may encode bias even where none was intended.[2]
Consider two hypothetical scenarios where algorithms are used to predict consumers' creditworthiness. In the first, an algorithm treats educational level as a criterion for creditworthiness, inferring it from spelling mistakes in consumers' internet searches. If educational levels are lower among women of a particular race, this can lead to indirect sex and race discrimination.[3] In the second, an algorithm monitors consumers' online shopping and predicts a higher risk of loan default among those who shop at discount stores. If those discount stores are disproportionately located in ethnic minority communities, this too can lead to indirect race discrimination, as the sketch below illustrates.
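To make the mechanism concrete, here is a minimal, hypothetical simulation in Python. Everything in it is invented for illustration: the feature names, the numbers and the choice of a logistic regression model are assumptions, not a description of any real lender's system. It shows how a model that is never shown a protected attribute can still produce unequal approval rates through a correlated proxy.

```python
# Toy sketch of the second scenario above, with entirely invented numbers:
# a credit model that never sees the protected attribute, but penalises a
# correlated proxy (discount-store shopping) and so produces unequal
# approval rates for groups with identical incomes and repayment behaviour.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=42)
n = 100_000

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Protected attribute (never shown to the model) and income, which has the
# same distribution in both groups.
minority = rng.random(n) < 0.3
income = rng.normal(50, 15, n)

# Proxy feature: discount-store shopping is more likely on lower incomes,
# and (hypothetically) more likely again where discount stores are
# concentrated in minority communities.
p_shop = sigmoid(-(income - 50) / 10 + 1.5 * minority)
discount_shopper = rng.random(n) < p_shop

# Ground truth: repayment depends only on income, not on group or shopping.
repaid = rng.random(n) < sigmoid((income - 45) / 10)

# The lender's model sees only the proxy feature.
X = discount_shopper.reshape(-1, 1).astype(float)
model = LogisticRegression().fit(X, repaid)
approved = model.predict_proba(X)[:, 1] > 0.6  # hypothetical approval cut-off

for name, mask in [("majority", ~minority), ("minority", minority)]:
    print(f"{name}: repayment rate {repaid[mask].mean():.1%}, "
          f"approval rate {approved[mask].mean():.1%}")
# Typical output: repayment rates are statistically identical across groups,
# yet the majority group is approved roughly twice as often.
```

Because the proxy both predicts default (via its correlation with income) and tracks the protected attribute, penalising the proxy penalises the group, even though the two groups repay at the same rate.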
In the United States, the Supreme Court held in the 2015 case Texas Department of Housing and Community Affairs v. Inclusive Communities Project, Inc. that housing policies with an unjustified disparate impact on protected groups can violate the Fair Housing Act, even absent any intent to discriminate. There is a risk that robo-advisors could replicate the same type of discrimination.

Algorithms are only free from conflicts of interest if they are programmed impartially, and the complete absence of conflicts of interest from robo-advisors is extremely difficult, perhaps impossible, because some robo-advisors rely on third parties such as brokers for their transactions. Firm-client conflicts can arise when algorithms prioritise what is best for the firm rather than the client, and the programmers who build the algorithms can themselves be influenced by the firm's incentives.
Robo-advisors and chatbots will increasingly take over the more routine and straightforward tasks in banking. Yet humans still play an essential role, because robo-advisors cannot provide human judgement or real-time sensitivity. To a large extent, robo-advisors can take care of our financial matters, but they cannot care about us.
Humans will, and should, remain in control of machines in order to deliver reliable and trustworthy financial services. Challenges such as bias and discrimination must be addressed by legislators, and appropriate regulation should be in place for the responsible use of these tools in the financial industry, so that we can be confident in trusting them.
[1] G. Scopino, 'Do Automated Trading Systems Dream of Manipulating the Price of Futures Contracts? Policing Markets for Improper Trading Practices by Algorithmic Robots' (2015) 67 Fla. L. Rev. 221; A. Campbell, 'Artificial Intelligence and the Future of Financial Regulation' Risk.net (16 October 2014) <https://www.risk.net/regulation/2374890/artificial-intelligence-and-the-future-of-financial-regulation>.
[2] 'The Dark Secret at the Heart of AI' MIT Technology Review <https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/>; C. DeBrusk, 'The Risk of Machine-Learning Bias (and How to Prevent It)' (2018) MIT Sloan Management Review.
[3] K. Petrasic, B. Saul, J. Greig and M. Bornfreund, 'Algorithms and bias: What lenders need to know' (2017) White & Case <https://www.whitecase.com/publications/insight/algorithms-and-bias-what-lenders-need-know>.
Conclusion
Earlier this week, the Turing published a new landscaping report, Artificial intelligence in finance, by Bonnie Buchanan. A short series of guest blogs this week explores different angles on AI in finance, such as the important questions we should be asking about its use in the financial industry. For more information about our work in this area, or to learn how to get involved, visit our finance and economics research programme page.