

AI and credit: How can we keep machines from reproducing social biases?


Artificial intelligence (AI) has revolutionized many fields in recent years, including the banking sector. Its implementation has had both positive and negative aspects, in particular the issue of algorithmic discrimination in lending.

In Canada, and more broadly around the world, the implementation of AI within major banks has reshaped their operations while offering greater personalization of services.

According to industry forecasts, the adoption of AI-based solutions is expected to double globally by 2025.

Some banks are more advanced than others, such as BMO Financial Group, which has created specific positions to oversee AI initiatives in order to remain competitive. Thanks to AI, the global banking industry's profits could exceed US$2 trillion by 2028.

As a professor at Laval University, I was assisted in writing this analysis by Kandet Oumar Bah, author of a research project on algorithmic discrimination, and Aziza Halilem, an expert in governance and cyber risk at the French Prudential Supervision and Resolution Authority.

How does AI improve bank performance?

The integration of AI in the banking sector has already significantly optimized financial processes. Combined with the growing capabilities of big data, such as the massive collection of customer and transaction data, AI offers powerful analytics.

It also makes it possible to monitor millions of transactions in real time, detect suspicious behavior and even preventively block certain fraudulent transactions. This is one of the uses already implemented by several major banks.
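
To illustrate the idea in the simplest possible terms, here is a minimal sketch of real-time transaction screening. It uses synthetic data and a single statistical rule; the thresholds and customer profile are assumptions for illustration, and production systems combine far more signals and models.

# A minimal sketch of real-time transaction screening, assuming a simple
# statistical rule on synthetic data; real systems combine many more signals.
import numpy as np

rng = np.random.default_rng(3)

# Historical spending for one customer (synthetic, in dollars).
history = rng.lognormal(mean=3.5, sigma=0.6, size=500)
mu, sigma = history.mean(), history.std()

def screen(amount: float, z_threshold: float = 4.0) -> str:
    """Flag a transaction whose amount is far outside the customer's usual range."""
    z = (amount - mu) / sigma
    if z > z_threshold:
        return "block and review"      # preventively hold clearly anomalous payments
    if z > z_threshold / 2:
        return "flag as suspicious"
    return "approve"

for amount in (40.0, 180.0, 2_500.0):
    print(f"${amount:>8,.2f} -> {screen(amount)}")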

In addition, platforms such as FICO, which specialize in AI-based decision analysis, help lenders refine their credit decisions through advanced predictive models.

Several banks around the world now rely on automated rating algorithms that can analyze numerous parameters in a matter of seconds. In the credit market, these tools speed up decision-making, particularly for "standard" cases, such as those with explicit loan guarantees.
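
As a rough illustration of what such a rating algorithm does, the sketch below trains a small scoring model on synthetic applicant data and scores a new application against a cut-off. The features, threshold and data are illustrative assumptions, not any bank's actual model.

# A minimal sketch of an automated credit-scoring model, assuming synthetic
# applicant data; the feature names and thresholds are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical applicant features: income (k$), debt ratio, years of credit history.
income = rng.normal(60, 20, n).clip(10, 200)
debt_ratio = rng.uniform(0, 1, n)
history_years = rng.integers(0, 30, n)

# Synthetic repayment outcome (1 = repaid) loosely tied to the features.
logit = 0.03 * income - 2.5 * debt_ratio + 0.05 * history_years - 0.5
repaid = (rng.uniform(0, 1, n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([income, debt_ratio, history_years])
model = LogisticRegression(max_iter=1000).fit(X, repaid)

# Score a new applicant in milliseconds and apply a simple approval cut-off.
applicant = np.array([[55.0, 0.4, 6]])
prob_repay = model.predict_proba(applicant)[0, 1]
print(f"Estimated repayment probability: {prob_repay:.2f}")
print("Decision:", "approve" if prob_repay >= 0.7 else "refer to manual review")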

But what about the other cases?

Formalizing injustice?

As American researchers Tambari Nuka and Amos Ogunola point out, the illusion that algorithms produce fair and objective predictions poses a serious risk.

In their review, they warn against the temptation to blindly delegate the assessment of complex human behavior to automated systems. Several regulators, including Canada's, have also expressed strong reservations about this, warning of the risks these tools pose, particularly in assessing creditworthiness and solvency.

Although algorithms are technically neutral, they can amplify existing inequalities when training data is tainted by historical biases, particularly those inherited from systemic discrimination against certain groups. These biases result not only from explicit variables such as gender or ethnic origin, but also from indirect correlations with factors such as place of residence or type of employment.

For example, rating systems may assign lower credit limits to women, even in situations where they are financially equivalent to men. Analyzing variables such as postal codes and employment history can also lead to the exclusion of members of marginalized groups, such as racialized individuals, workers with irregular incomes, and recent immigrants.
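
The mechanism can be made concrete with a small simulation. The sketch below builds synthetic data in which a postal zone is correlated with membership in a marginalized group, applies a scoring rule that never sees the group variable, and then measures the resulting gap in approval rates. All names and numbers are assumptions for illustration, not real data.

# A minimal sketch, using synthetic data, of how a "neutral" variable such as a
# postal code can act as a proxy for group membership and skew approval rates.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Group membership (e.g., a marginalized vs. a majority group), never given to the model.
group = rng.binomial(1, 0.3, n)          # 1 = marginalized group

# Residential segregation: group strongly predicts living in postal zone "B".
postal_zone_b = rng.binomial(1, np.where(group == 1, 0.8, 0.2))

# A scoring rule that penalizes zone B, with no explicit reference to group.
income = rng.normal(60, 15, n) - 5 * group          # historical income gap
score = 0.02 * income - 0.6 * postal_zone_b
approved = score > np.quantile(score, 0.4)          # approve the top 60%

# Disparate impact ratio: approval rate of the marginalized group vs. the rest.
rate_group = approved[group == 1].mean()
rate_other = approved[group == 0].mean()
print(f"Approval rate, marginalized group: {rate_group:.2%}")
print(f"Approval rate, other applicants:   {rate_other:.2%}")
print(f"Disparate impact ratio: {rate_group / rate_other:.2f} (values below ~0.8 are a common red flag)")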

Virginia Eubanks, a professor in the United States and an expert in technology and social inequality, has documented this phenomenon, showing how people living in historically disadvantaged neighborhoods or with atypical career paths are penalized by automated financial decisions based on biased data.

This raises a crucial question: how can we ensure that the automation of financial decisions helps reduce disparities in access to banking services?

Mitigating errors through inclusive finance

Several avenues are being explored in the scientific literature in response to these risks of discrimination. Nuka and Ogunola, for example, suggest an iterative approach to bias mitigation. This involves continuously improving statistical models by identifying and correcting biases in training data in order to reduce disparities in treatment between social groups.
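
One common data-level correction is to reweight the training set so that group membership and the historical outcome become statistically independent. The sketch below illustrates that general idea on synthetic data, in the spirit of Kamiran and Calders' reweighing technique; it is not a reproduction of Nuka and Ogunola's specific proposal.

# A minimal sketch of one data-level correction, reweighting, assuming a binary
# protected group and a binary repayment label; illustrative synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 8_000
group = rng.binomial(1, 0.3, n)                           # protected attribute
X = np.column_stack([rng.normal(60, 15, n) - 5 * group,   # income with a historical gap
                     rng.uniform(0, 1, n)])               # debt ratio
y = (rng.uniform(0, 1, n) < 0.75 - 0.15 * group).astype(int)  # biased historical labels

# Give each (group, label) cell the weight expected_frequency / observed_frequency
# so that group and label become statistically independent in the weighted data.
weights = np.ones(n)
for g in (0, 1):
    for label in (0, 1):
        mask = (group == g) & (y == label)
        expected = (group == g).mean() * (y == label).mean()
        observed = mask.mean()
        weights[mask] = expected / observed

model = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=weights)
print("Approval rates after reweighted training:",
      model.predict(X)[group == 1].mean().round(3), "vs",
      model.predict(X)[group == 0].mean().round(3))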

Beyond technical solutions, regulatory frameworks have recently been put in place to ensure the transparency and fairness of algorithms in sensitive sectors such as finance. Canada's proposed Artificial Intelligence and Data Act and the European Union's AI Act are examples of this. The latter, adopted in 2024 and implemented gradually, imposes strict requirements on high-risk AI systems, such as those used for granting credit.

It also sets out transparency requirements to ensure that systems are auditable and that their decisions can be understood by all stakeholders. The aim is to prevent algorithmic discrimination and ensure ethical and fair use. Financial regulators also have a crucial role to play in ensuring compliance with fair competition rules and guaranteeing prudent and transparent practices in the interests of financial stability and customer protection.

However, delaying the adoption of strict standards poses a significant risk: the lack of regulation in some countries and difficulties in enforcement in others could encourage opacity, to the detriment of the most vulnerable citizens.

Provided by The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.
