Exploring the Ethical Implications of AI in Finance: Insights from CFOs

One may not have realized it, but artificial intelligence (AI) made its way into financial sector transactions several years ago. Unlocking a banking app with facial recognition is AI, as is the chatbot that answers customer queries on a bank’s website. AI is also used for risk management and for fraud detection and prevention, while robotic process automation simplifies tedious tasks such as budgeting and forecasting. AI-driven data analytics built on natural language processing algorithms are reducing manual effort and improving efficiency across the data extraction processes of banks and financial institutions.

From a customer’s point of view, robo-investing and virtual assistants have democratized personal banking and made it available to larger sections of the population. Private equity (PE) and venture capital (VC) funds and accounting firms use AI to sift through company financials and sell-side data to compare performance and identify acquisition targets and startup investments. In the insurance industry, AI is used in damage assessments and underwriting, and could soon play a larger role in credit decisions, determining whether an applicant gets a loan based on credit scores. While AI does provide a more personalized service to customers and helps tailor offerings to their data, a larger question about its ethical use keeps popping up. There are a number of questions surrounding the responsible implementation and regulation of AI in finance, and very few clear answers, if any. According to Deloitte’s 2023 CFO Signals Survey, only 25% of the responding CFOs are planning for the regulations and ethics around the use of AI in finance.

Systemic risks and ethical standards

In a recent blog post on ethical standards in finance, the Institute of Chartered Accountants in England and Wales (ICAEW) notes that, given AI’s ability to predict an individual’s profitability across a lifetime, there is every chance that profitable consumer groups benefit while others get neglected. Making a case for more transparency around how AI models are built, the institute says historical data could reflect biased lending decisions or systemic disparities, which the models may then perpetuate. Elaborating on the same point, the post notes that AI models do not reliably differentiate between correlation and causation, so algorithms can exhibit bias when they rely on proxy variables for race or gender. It cites the example of using postcode as a factor in creditworthiness, which could result in certain neighborhoods being unfairly associated with the socioeconomic conditions of specific demographics. Presented below are some key ethical considerations:

Transparency and Explainability: AI systems are often complex and opaque, making it difficult for humans to understand how they work and how they reach their decisions; this is often referred to as the “black box problem”. That opacity makes it hard to trust decisions made by AI systems and to hold them accountable for their actions. “The regulation and guidance is still developing, but all of the discussion is that it won’t be acceptable just to say, ‘The computer says, no.’ That’s going to be unacceptable,” says Luke Scanlon, Head of Fintech Propositions at Pinsent Masons, in a blog post shared by CharteredBanker.com.

“There’s a lot of thought around traceability of decisions, being able to trace that decision from where it came from and whether all the right steps were taken along the way. That’s the evidence that the financial institution should have in place, in terms of what they have to disclose… There is a level of commercial sensitivity – so banks and technology providers are not likely to be required to share source code or other commercially sensitive details. But there has to be a balance in protecting commercial interests and disclosing how decisions are made,” Scanlon says.  

There are a number of ways to address transparency concerns about AI in finance. One approach is to make AI systems themselves more transparent, by opening up the algorithms to scrutiny or by documenting the data they are trained on. Another is to make them more explainable, using inherently interpretable models or techniques that attribute each decision to the inputs that drove it, so the reasoning can be expressed in terms a human can follow.
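As a concrete illustration of the second approach, here is a minimal sketch of how a lender might expose per-feature explanations from an inherently interpretable credit model. The model, feature names, and synthetic data are all hypothetical; production systems would typically layer on richer attribution techniques (such as SHAP or LIME) and audited data.

```python
# Hypothetical sketch: explaining individual credit decisions with an
# interpretable (linear) model, where each feature's contribution to the
# log-odds is simply coefficient * feature value.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "years_employed", "late_payments"]
X = rng.normal(size=(1000, len(features)))          # standardized inputs
y = (X[:, 0] - X[:, 1] - X[:, 3] + rng.normal(size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain_decision(applicant: np.ndarray) -> None:
    """Print each feature's contribution to one applicant's decision."""
    contributions = model.coef_[0] * applicant
    for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
        print(f"{name:>15}: {c:+.3f}")
    print(f"{'intercept':>15}: {model.intercept_[0]:+.3f}")

explain_decision(X[0])   # why was applicant 0 approved or declined?
```

An explanation of this kind can underpin the decision traceability Scanlon describes, without disclosing source code or other commercially sensitive details.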

Bias and discrimination: Contrary to popular perception, AI systems can be biased. AI algorithms are trained on large amounts of data, and when that data does not reflect the reality in which the AI is meant to operate, the result is model bias. This can lead to unfair or discriminatory outcomes, particularly when the organization is unaware of the bias latent in the training data, and it poses a serious danger to organizations and their stakeholders. To address this, developers should have a deep knowledge of the training data and be able to make it more representative of the target operational environment. Another option is to develop AI tools that can identify and remove inherent biases in the training data, as the sketch below illustrates.
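One simple pre-deployment check is to compare model outcomes across groups. Below is a minimal sketch, assuming binary loan decisions and a recorded protected attribute; the column names, the toy data, and the 0.8 threshold (echoing the “four-fifths rule” from US employment law) are illustrative rather than a complete fairness audit.

```python
# Hypothetical pre-deployment bias check on a model's loan decisions.
import pandas as pd

# Toy decisions with a recorded protected attribute (illustrative only).
decisions = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 1, 0, 0, 1, 1],
    "group":    ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
})

rates = decisions.groupby("group")["approved"].mean()
disparate_impact = rates.min() / rates.max()   # "four-fifths rule" ratio

print(rates)
print(f"Disparate-impact ratio: {disparate_impact:.2f}")
if disparate_impact < 0.8:
    print("Warning: approval rates differ enough to warrant investigation.")
```

As Kwartler’s comment below suggests, the point of such checks is to catch disparities before a model ever reaches production.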

One well-known case from 2019 was that of tech entrepreneur David Heinemeier Hansson, who said that despite his wife’s higher credit score and their joint tax returns, he had received an Apple Card with a limit 20 times higher than hers, and accused the algorithm behind the Apple Card of gender discrimination. Apple was unable to identify the exact problem that produced this outcome, and New York’s Department of Financial Services opened a probe into Goldman Sachs’ credit card practices. “Finding bias in models is fine, as long as it’s before production. By the time you’re in production, you’re in trouble,” said Ted Kwartler, vice president of Trusted AI at DataRobot.



Privacy: AI systems can collect and process large amounts of personal data, raising concerns about privacy and data protection. The banking and financial services industry deals with huge volumes of sensitive data, and any breach can be catastrophic. Customers may not be comfortable with their personal financial information being processed and analyzed, and there are legitimate concerns about how an autonomous algorithm might use that data in the future. One approach to addressing privacy concerns is to use anonymized or pseudonymized data; another is to obtain consent from people before collecting or using their personal data.
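As a minimal sketch of the first approach, a data pipeline can replace direct identifiers with keyed tokens before records ever reach a model. The field names and key handling below are hypothetical; a production system would add proper key management, tokenization of other quasi-identifiers, and possibly differential privacy.

```python
# Hypothetical pseudonymization step applied before data enters an AI pipeline.
import hashlib
import hmac

SECRET_KEY = b"rotate-me-and-store-in-a-vault"   # placeholder; keep in a KMS/vault

def pseudonymize(customer_id: str) -> str:
    """Replace a direct identifier with a keyed, irreversible token."""
    return hmac.new(SECRET_KEY, customer_id.encode(), hashlib.sha256).hexdigest()

record = {"customer_id": "CUST-123456", "balance": 10250.75, "region": "NW"}
record["customer_id"] = pseudonymize(record["customer_id"])
print(record)   # the model sees a stable token, not the real identifier
```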

Adherence to data protection laws such as the EU’s General Data Protection Regulation (GDPR) and AI Act, the UK’s AI governance framework, and the California Consumer Privacy Act (CCPA), along with stringent compliance processes such as AML (Anti-Money Laundering) and KYC (Know Your Customer) checks, is crucial for AI tools in finance.

Accountability: AI systems can make decisions that have a significant impact on people’s lives, but who is to be held accountable when an AI goes wrong? Accountability and responsibility are complex questions when it comes to AI, whether it is a black-box system or an explainable one. AI decisions are based on data points and training, while human decision-making often rests on less quantifiable factors.

Giles Cuthbert, MD of the Chartered Banker Institute, says: “In the end, it becomes about responsibility, but that’s not always clear. You may think, as a customer of the bank, the bank’s responsibility is clear. But then we start looking at things like Open Banking, where we may have used another channel to authorize a bank transaction and suddenly we have created this mesh of responsibility that becomes hard to disentangle.”

Point 5 of UNESCO’s recommended human rights approach to AI deals with responsibility and accountability: “AI systems should be auditable and traceable. There should be oversight, impact assessment, audit and due diligence mechanisms in place to avoid conflicts with human rights norms and threats to environmental wellbeing.” There must be clear rules and guidelines for the use and deployment of AI tools in finance, and for how accountability and responsibility devolve down the user chain.
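In practice, auditability starts with recording enough context to reconstruct any individual decision. Below is a minimal sketch of such a decision log, assuming a simple JSON-lines file; the schema and field names are hypothetical, and a real institution would use tamper-evident, access-controlled storage with defined retention.

```python
# Hypothetical audit log so an AI decision can be traced end to end.
import datetime
import json
import uuid

def log_decision(model_version: str, inputs: dict, output: str,
                 path: str = "decisions.log") -> str:
    """Append one audit record and return its identifier."""
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,   # which model made the call
        "inputs": inputs,                 # what it saw
        "output": output,                 # what it decided
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["decision_id"]

decision_id = log_decision("credit-model-1.4.2",
                           {"income": 54000, "debt_ratio": 0.31}, "declined")
print(f"Logged decision {decision_id} for later audit or challenge.")
```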

In Conclusion…

The integration of AI in finance brings tremendous potential for innovation and efficiency. However, it also presents significant ethical implications that need to be addressed. Transparency and explainability, bias and discrimination, data privacy and security, human accountability and responsibility, and the potential economic impact are key areas of concern.

Financial institutions must strive for transparency and explainability in their AI systems to build trust and enable accountability. Efforts should be made to address biases and discrimination by ensuring representative and unbiased training data, as well as developing tools to identify and mitigate bias in AI algorithms.

“In no other field is the ethical compass more relevant than in artificial intelligence. These general-purpose technologies are re-shaping the way we work, interact, and live. The world is set to change at a pace not seen since the deployment of the printing press six centuries ago. AI technology brings major benefits in many areas, but without the ethical guardrails, it risks reproducing real world biases and discrimination, fueling divisions and threatening fundamental human rights and freedoms,” said Gabriela Ramos, Assistant Director-General for Social and Human Sciences of UNESCO.