AI decisioning audit framework launched by housing group
The National Fair Housing Alliance created a framework designed to audit algorithmic decision-making tools, which in the mortgage industry include credit scores, automated underwriting and risk-based pricing.
Named Purpose, Process and Monitoring, the framework laid out by NFHA examines a model's full lifecycle, from predevelopment through development and post-development, in order to assess fairness in automated decision making.
Potential users include regulators, businesses, researchers, civil rights groups and other stakeholders seeking to identify the underlying assumptions and limitations of these tools. The PPM audit then produces recommendations to mitigate risks to consumer fairness and privacy.
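To make the idea of an algorithmic audit concrete, below is a minimal sketch in Python of one widely used fair lending check, the adverse impact ratio on approval rates. It is an illustrative assumption on our part, not a published component of the PPM framework, and the function name and figures are hypothetical.

```python
# Hypothetical illustration only: NFHA has not published the PPM checks as code.
# A sketch of one post-development test an auditor might run, comparing
# approval rates across demographic groups via the adverse impact ratio.

def adverse_impact_ratio(approved_protected, total_protected,
                         approved_reference, total_reference):
    """Ratio of the protected group's approval rate to the reference group's.

    Values well below 1.0 (a common rule of thumb is 0.8, the "four-fifths
    rule") flag a model for closer fair lending review.
    """
    protected_rate = approved_protected / total_protected
    reference_rate = approved_reference / total_reference
    return protected_rate / reference_rate

# Made-up numbers: 120 of 300 protected-group applicants approved,
# versus 450 of 900 reference-group applicants.
ratio = adverse_impact_ratio(120, 300, 450, 900)
print(f"Adverse impact ratio: {ratio:.2f}")  # prints 0.80, right at the threshold
```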
“NFHA’s PPM framework provides an equity-centered auditing solution at a time when policymakers and civil rights organizations are calling for fairness, accountability, transparency, explainability and interpretability,” Michael Akinwumi, the organization’s chief tech equity officer, said in a press release. “The framework is system-oriented, and it covers every decision. It will drastically limit the capacity of models to harm consumers if embraced by the industry.”
A study conducted last year by The Markup claimed racial bias existed in the use of artificial intelligence in mortgage underwriting.
Separately, potential bias in technology was one of the reasons the Federal Home Loan Bank of San Francisco teamed up with the Urban Institute last year to create a housing equity incubator.
“This framework advances from earlier work NFHA has done to inform policies around the use of biometric technologies, identifying and managing bias in artificial intelligence and financial institutions’ use of AI and machine learning, as well as standards for federal regulatory policy,” said Snigdha Sharma, the organization’s tech equity analyst.
It also builds on existing resources such as the Cross-Industry Standard Process for Data Mining (CRISP-DM), the Model Risk Management supervisory guidance from the Federal Reserve System and the Office of the Comptroller of the Currency, and the National Institute of Standards and Technology's proposal for identifying and managing bias in artificial intelligence.
VantageScore, one of the two providers of credit scoring models used by lenders and one that is also fighting to gain approval from the government-sponsored enterprises, said in a statement that it supports initiatives like this that seek to end discrimination in housing.
“In particular, it is critical to move away from certain legacy processes and certain scoring models that have been proven to ignore underserved borrowers and communities of color,” the statement from VantageScore President and CEO Silvio Tavares said. “We have led the way in producing fairer credit outcomes by incorporating alternative data and novel analytics techniques.”
The Federal Housing Finance Agency scheduled a public listening session on March 1 regarding a potential transition by Fannie Mae and Freddie Mac to the use of a new credit score or scores, pursuant to the Validation and Approval of Credit Score Models Rule.
It includes discussion of the adoption of one of four options: keeping the current single-score system, requiring multiple scores, letting the lender make the choice, or creating a waterfall system, which is sketched below.
As a follow-up, the FHFA wants input on the fair lending and access-to-credit impacts of these credit score options.
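Because "waterfall" is the least self-explanatory of the four options, here is a minimal sketch, on the assumption that a waterfall means trying scoring models in a fixed order and falling back when a borrower cannot be scored. The model names, ordering and data are hypothetical, not FHFA specifications.

```python
# Hypothetical illustration only: neither the score names nor the ordering
# reflect an FHFA decision. Under a waterfall, the lender tries a primary
# credit score model and falls back to the next when a borrower can't be
# scored, for example because of a thin credit file.

from typing import Callable, Optional

def waterfall_score(borrower: dict,
                    models: list[Callable[[dict], Optional[int]]]) -> Optional[int]:
    """Return the first usable score from an ordered list of scoring models."""
    for model in models:
        score = model(borrower)
        if score is not None:  # this model could score the borrower
            return score
    return None  # borrower is unscorable under every model in the waterfall

# Illustrative stand-ins for scoring models; real models would be far richer.
def primary_model(borrower: dict) -> Optional[int]:
    return borrower.get("primary_score")

def fallback_model(borrower: dict) -> Optional[int]:
    return borrower.get("alternative_data_score")

thin_file_borrower = {"alternative_data_score": 684}  # no primary score on file
print(waterfall_score(thin_file_borrower, [primary_model, fallback_model]))  # 684
```

Which ordering, if any, the enterprises would adopt is exactly the kind of question the FHFA's comment process is meant to settle.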