
A.I. has a discrimination problem. In banking, the consequences can be severe

  • When it comes to banking and financial services, the problem of artificial intelligence amplifying existing human biases can be severe.

  • Deloitte notes that AI systems are ultimately only as good as the data they’re given: Incomplete or unrepresentative datasets could limit AI’s objectivity, while biases in development teams that train such systems could perpetuate that cycle of bias.

  • Lending is a prime example of where the risk of an AI system being biased against marginalized communities can rear its head, according to former Twitter executive Rumman Chowdhury.


Artificial intelligence algorithms are increasingly being used in financial services — but they come with some serious risks around discrimination.

Sadik Demiroz | Photodisc | Getty Images


AMSTERDAM — Artificial intelligence has a racial bias problem.

From biometric identification systems that disproportionately misidentify the faces of Black people and other minorities, to voice recognition software that fails to distinguish voices with regional accents, AI has a lot to work on when it comes to discrimination.


And the problem of amplifying existing biases can be even more severe when it comes to banking and financial services.

