Bias isn’t the only problem with credit scores — and no, AI can’t help it

But in the largest-ever study of real-world mortgage data, economists Laura Blattner of Stanford University and Scott Nelson of the University of Chicago show that differences in mortgage approval between minority and majority groups are not due only to bias, but also to the fact that minority and low-income groups have less data in their credit histories.

This means that when this data is used to calculate a credit score, and that credit score is used to predict whether an applicant will default on a loan, the prediction will be less precise. It is this lack of precision that leads to inequality, not just bias.

The implications are stark: fairer algorithms will not fix the problem.

“It’s a really striking result,” says Ashesh Rambachan, who studies machine learning and economics at Harvard University but was not involved in the research. Bias and patchy credit records have been hot issues for some time, but this is the first large-scale experiment to examine the loan applications of millions of real people.

Credit scores squeeze a range of socioeconomic data, such as employment history, financial records, and purchasing habits, into a single number. In addition to deciding on loan applications, credit scores are used in many other life-changing decisions, including decisions about insurance, hiring, and housing.

To understand why mortgage lenders treat minority and majority groups differently, Blattner and Nelson collected credit reports for 50 million anonymized U.S. consumers and tied each one to socioeconomic details drawn from a marketing data set, to their property deeds and mortgage transactions, and to data about the lenders who provided their loans.

One reason this is the first study of its kind is that these data sets are proprietary and not publicly available to researchers. “We went to a credit bureau and we basically had to pay a lot of money for that,” Blattner says.

Noisy data

They then used different predictive algorithms to show that credit scores were not simply biased but also “noisy,” a statistical term for data that cannot be used to make accurate predictions. Take a minority applicant with a credit score of 620. In a biased system, we might expect this score to consistently overstate that applicant’s risk, and that a more accurate score would be, say, 625. In theory, this bias could then be accounted for with some form of algorithmic affirmative action, such as lowering the approval threshold for minority applications.
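To make the distinction concrete, here is a minimal, hypothetical sketch in Python. It is not code or data from the study; all numbers are invented. It contrasts a systematically biased score, which a simple threshold adjustment can largely correct, with a noisy score, which no threshold shift can fix.

```python
import numpy as np

# Illustrative only: made-up scores, not the Blattner-Nelson data.
rng = np.random.default_rng(0)

n = 100_000
true_score = rng.normal(650, 40, n)   # hypothetical "true" creditworthiness
threshold = 640                       # lender approves if score >= threshold
should_approve = true_score >= threshold

# Biased score: understates the true score by a fixed 5 points.
biased_score = true_score - 5
# Noisy score: unbiased on average, but with large random error.
noisy_score = true_score + rng.normal(0, 30, n)

def error_rate(observed, cutoff):
    """Share of applicants whose approval decision differs from the decision
    the true score would produce."""
    return np.mean((observed >= cutoff) != should_approve)

print("biased score, original threshold:  ", error_rate(biased_score, 640))
print("biased score, threshold lowered 5: ", error_rate(biased_score, 635))  # bias corrected
print("noisy score, original threshold:   ", error_rate(noisy_score, 640))
print("noisy score, threshold lowered 5:  ", error_rate(noisy_score, 635))   # errors remain
```

In this toy setup, lowering the cutoff by the size of the bias eliminates the wrong decisions caused by the biased score, but the same adjustment does almost nothing for the noisy score, because its errors are random rather than systematic.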
