Evaluating AI fairness in credit scoring with the BRIO tool

Greta Coraglia, Francesco A. Genco, Pellegrino Piantadosi, Enrico Bagli, Pietro Giuffrida, Davide Posillipo, and Giuseppe Primiero

Abstract

We present a method for quantitative, in-depth analyses of fairness issues in AI systems, with an application to credit scoring. To this aim we use BRIO, a tool for evaluating AI systems with respect to social unfairness and, more generally, ethically undesirable behaviours.

It features a model-agnostic bias detection module, presented in [CDG+23], to which a full-fledged unfairness risk evaluation module is added. As a case study, we focus on the context of credit scoring, analysing the UCI German Credit Dataset [Hof94a].

We apply the BRIO fairness metrics to several socially sensitive attributes featured in the German Credit Dataset, quantifying fairness across various demographic segments, with the aim of identifying potential sources of bias and discrimination in a credit scoring model. We conclude by combining our results with a revenue analysis.

Introduction

In recent years, the integration of Artificial Intelligence (AI) into various domains has brought forth transformative changes, especially in areas involving decision-making processes. One such domain where AI holds significant promise, and attracts significant scrutiny, is credit scoring.

Traditionally, credit scoring algorithms have been pivotal in determining individuals’ creditworthiness, thereby influencing access to financial services, housing, and employment opportunities. The adoption of AI in credit scoring offers the potential for enhanced accuracy and efficiency, leveraging vast datasets and complex predictive models [GP21]. Nevertheless, the inherently opaque nature of AI algorithms poses challenges in ensuring fairness, particularly concerning biases that may perpetuate or exacerbate societal inequalities. Fairness in credit scoring has become a paramount concern in the financial industry.

According to the AI Act and to the European Banking Authority guidelines—which state that “the model must ensure the protection of groups against (direct or indirect) discrimination” [Eur20]—ensuring fairness and the prevention/detection of bias is becoming imperative. Fairness is fundamental to maintaining trust in credit scoring systems and upholding principles of social justice and equality. Biases in credit scoring algorithms can stem from various sources, including historical data, algorithmic design, and decision-making processes, thus necessitating the development of robust fairness metrics and frameworks to mitigate these disparities [Fer23, BCEP22, NOC+21].

Various metrics have been proposed to evaluate the fairness of credit scoring algorithms, including disparate impact analysis, demographic parity, and equal opportunity criteria. Disparate impact analysis examines whether the outcomes of the algorithm disproportionately affect protected groups. Demographic parity requires that decision outcomes be independent of demographic characteristics such as race, gender, or age. Equal opportunity criteria require that individuals have an equal chance of being classified correctly by the algorithm, irrespective of their demographic attributes. Still, several challenges persist in implementing fair algorithms. One key challenge is the trade-off between fairness and predictive accuracy, as optimizing for one may inadvertently compromise the other. Moreover, biases inherent in training data, algorithmic design, and decision-making processes can perpetuate unfair outcomes, necessitating careful consideration and mitigation strategies.
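As a rough illustration, all three criteria can be computed directly from a model's binary decisions. The sketch below uses invented toy data (not the German Credit Dataset) and assumes `y_pred` holds 0/1 credit decisions, `y_true` the actual creditworthiness, and `group` a binary sensitive attribute:

```python
import numpy as np

def disparate_impact(y_pred, group):
    # Ratio of positive-decision rates (protected / reference group);
    # values below ~0.8 are commonly flagged ("four-fifths rule").
    return y_pred[group == 1].mean() / y_pred[group == 0].mean()

def demographic_parity_diff(y_pred, group):
    # Absolute gap in positive-decision rates between the two groups.
    return abs(y_pred[group == 1].mean() - y_pred[group == 0].mean())

def equal_opportunity_diff(y_true, y_pred, group):
    # Gap in true-positive rates: among truly creditworthy applicants,
    # how often each group is correctly approved.
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(1) - tpr(0))

# Toy data: 1 = credit granted / creditworthy; group 1 = protected value
y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 1])
group  = np.array([1, 1, 1, 1, 0, 0, 0, 0])

print(disparate_impact(y_pred, group))          # 0.5 / 0.75 ≈ 0.667
print(demographic_parity_diff(y_pred, group))   # 0.25
print(equal_opportunity_diff(y_true, y_pred, group))
```

Here the protected group is approved at rate 0.5 against 0.75 for the reference group, so the decisions would fail the four-fifths rule even on this tiny sample.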

The literature on fairness detection and mitigation in credit scoring has seen significant advancements, with researchers proposing various methods to address biases and promote equitable outcomes [HPS16, FFM+15, ZVRG17, LSL+17, DOBD+20, BG24]. Hardt et al. [HPS16] examine fairness in the FICO score dataset, considering race and creditworthiness as sensitive attributes. They employ statistical parity and equality of odds as fairness metrics to assess disparities in credit scoring outcomes across demographic groups. In [FFM+15], Feldman et al. propose a fairness mitigation method based on dataset repair to reduce disparate impact, applying it to the German credit dataset [Hof94b].

They focus on age as the sensitive attribute and employ techniques to adjust the dataset to mitigate biases in credit scoring outcomes. Zafar et al. [ZVRG17] introduce a regularization method for the loss function of credit scoring models to mitigate unfairness with respect to customer age in a bank deposit dataset.

Their approach aims to prevent discriminatory outcomes by penalizing unfair predictions based on sensitive attributes. In [LSL+17] the authors propose the implementation of a variational fair autoencoder to address unfairness in gender classification within the German dataset. Their approach leverages generative modeling techniques to learn fair representations of data and mitigate gender-based biases in credit scoring. In [DOBD+20], Donini et al. analyze another regularization method aimed at minimizing differences in equal opportunity on the German credit dataset. Their empirical analysis highlights the effectiveness of regularization techniques in promoting fairness and equity in credit scoring outcomes. Most recently, the work in [BG24] combines traditional group fairness metrics with Shapley values, though these admittedly may lead to false interpretations (cf. [AB22]) and should thus be combined with counterfactual approaches.

While the existing tools and studies present different fairness analyses and bias mitigation methods, to the best of our knowledge none of them enables the user to conduct an overall analysis yielding a combined and aggregated measure of the fairness violation risk related to all selected sensitive features. Moreover, unlike many alternatives, our approach is model-agnostic, while still allowing for bias mitigation considerations.

We offer such a result using BRIO, a bias detection and risk assessment tool for ML and DL systems, presented in [CDG+23] and based on formal analyses introduced in [DP21, PD22, GP23, DGP24]. In the present paper, we showcase its use on the UCI German Credit Dataset [Hof94a] and present an encompassing, rigorous analysis of fairness issues within the context of credit scoring, aligning with recent ethical guidelines. To operationalize these principles, we measure the fairness metrics over the sensitive attributes present in the German Credit Dataset, quantifying and evaluating fairness across various demographic segments, thereby seeking to identify potential sources of bias and discrimination.
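BRIO's actual risk measure is defined in [CDG+23]. Purely for intuition, a hypothetical aggregation of per-attribute disparities into one combined score might look like the following; the max-gap divergence, the function names, and the toy data are our own illustrative choices, not BRIO's:

```python
import numpy as np

def group_divergence(y_pred, attribute):
    # Largest gap in positive-decision rate across the attribute's values
    # (a simple stand-in for a proper divergence measure).
    rates = [y_pred[attribute == v].mean() for v in np.unique(attribute)]
    return max(rates) - min(rates)

def aggregate_risk(y_pred, sensitive, weights=None):
    # Weighted average of per-attribute divergences -> one risk score.
    # `sensitive` maps attribute names to per-applicant value arrays.
    names = list(sensitive)
    w = np.ones(len(names)) if weights is None else np.asarray(weights, float)
    scores = np.array([group_divergence(y_pred, sensitive[n]) for n in names])
    return float(w @ scores / w.sum())

# Toy decisions and two invented sensitive attributes
y_pred = np.array([1, 1, 1, 0, 0, 1])
age    = np.array([0, 0, 0, 1, 1, 1])
sex    = np.array([0, 1, 0, 1, 0, 1])
print(aggregate_risk(y_pred, {"age": age, "sex": sex}))
```

A weighted average is only one possible aggregation; a risk-oriented tool might instead take the maximum over attributes, so that a single severely biased feature cannot be averaged away.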

The rest of this paper is structured as follows. In Section 2 we provide a preliminary illustration of the dataset under investigation, the features considered, and the performance. In Section 3 we explain how we constructed an ML model trained on the dataset considered for credit score prediction, describe its evaluation and validation, and report the results on score distribution. In Section 4 we illustrate the theory behind BRIO's bias identification and risk evaluation. In Section 6 we present the results of risk evaluation on the UCI German Credit Dataset using BRIO. We conclude in Section 8 with further research lines.

