Help Wanted: A Global Push Toward Algorithmic Fairness

April 27, 2021

A Q&A with Sonja Kelly of Women’s World Banking and Alex Rizzi of CFI, building on Women’s World Banking’s and CFI’s reports on algorithmic bias

It seems conversations about biased AI have been going on for some time. Is it too late to address this?

Alex: It’s just the right time! While it may feel like global conversations around responsible tech have been going on for years, they haven’t been grounded squarely in our field. For instance, there hasn’t been widespread testing of debiasing tools in inclusive finance (though Sonja, we’re excited to hear about the results of your upcoming work on that front!) or mechanisms akin to credit guarantees to incentivize digital lenders to expand the pool of applicants their algorithms deem creditworthy. At the same time, a number of data protection frameworks are being passed in emerging markets that are modeled on the European GDPR and give consumers data rights related to automated decisions, for example. These frameworks are very new, and it’s still unclear whether and how they might bring more algorithmic accountability. So it’s absolutely not too late to address this issue.

Sonja: I completely agree that now is the time, Alex. Just a few weeks ago, we saw a request for information here in the U.S. on how financial service providers use artificial intelligence and machine learning. It’s clear there is an interest on the policymaking and regulatory side in better understanding and addressing the challenges posed by these technologies, which makes it an ideal time for financial service providers to be proactive about guardrails that keep bias out of their algorithms. I also think that technology enables us to do much more about the issue of bias – we can actually turn algorithms around to audit and mitigate bias with relatively little effort. We now have both the motivation and the tools to address this issue in a big way.

What are some of the most problematic trends that we are seeing that contribute to algorithmic bias?

Sonja: At the risk of being too broad, I think the biggest trend is lack of awareness. Like I said before, fixing algorithmic bias doesn’t have to be hard, but it does require everyone – at all levels and in all roles – to understand and track progress on mitigating bias. The biggest red flag I saw in the interviews for our report was when an executive said that bias isn’t an issue in their organization. My co-author Mehrdad Mirpourian and I found that bias is always an issue. It emerges from biased or unbalanced data, the code of the algorithm itself, or the final decision on who gets credit and who does not. No company can meet all definitions of fairness for all groups simultaneously. Admitting the possibility of bias costs nothing, and fixing it is not that difficult. Somehow, though, it slips off the agenda, which is why we need to raise awareness so organizations take action.
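To make that concrete, here is a minimal, purely illustrative sketch (not drawn from either report, using made-up numbers) of what a simple bias audit can look like: it compares two hypothetical applicant groups on two common fairness definitions, overall approval rate and approval rate among applicants who went on to repay, and shows how the two definitions can tell different stories.

```python
# Illustrative bias audit on hypothetical lending data.
# Each record is (group, approved_by_model, actually_repaid); all values are made up.

def rate(values):
    return sum(values) / len(values) if values else 0.0

applicants = [
    ("A", 1, 1), ("A", 1, 0), ("A", 0, 1), ("A", 1, 1), ("A", 0, 0),
    ("B", 0, 1), ("B", 1, 1), ("B", 0, 0), ("B", 0, 1), ("B", 1, 0),
]

for group in ("A", "B"):
    rows = [r for r in applicants if r[0] == group]
    # Fairness definition 1: overall approval rate for the group.
    approval_rate = rate([approved for _, approved, _ in rows])
    # Fairness definition 2: approval rate among applicants who would have repaid.
    good_payer_rate = rate([approved for _, approved, repaid in rows if repaid])
    print(f"group {group}: approval rate = {approval_rate:.2f}, "
          f"approval rate among good payers = {good_payer_rate:.2f}")

# If the gaps between groups differ across the two definitions, the model
# cannot be "fair" by both at once; the provider has to choose which
# definition it will track and be held to.
```

Even a toy audit like this makes the point: a provider has to decide which definition of fairness it will monitor, document that choice, and track it over time.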

Alex: One of the concepts we’ve been thinking a lot about is how digital data trails may reflect or further encode existing societal inequities. For instance, we know that women are less likely than men to own phones, and less likely to use mobile internet or certain apps; these differences create disparate data trails that may not tell a provider the full story about a woman’s economic potential. And what about the myriad other marginalized groups whose disparate data trails are not clearly articulated?

Who else needs to be part of this conversation as we move forward?

Alex: For my colleague Alex Kessler and me, a huge takeaway from the exploratory work was that there are plenty of entry points to these conversations for non-data-scientists, and it’s crucial for a range of voices to be at the table. We originally had the notion that we needed to be fluent in code creation and machine learning models to contribute, but the conversations should be interdisciplinary and should reflect a strong understanding of the contexts in which these algorithms are deployed.

Sonja: I love that. It’s exactly right. I would also like to see more media attention on this issue. We know from other industries that we can increase innovation by peer learning. If sharing both the promise and pitfalls of AI and machine learning becomes normal, we can learn from it. Media attention would help us get there.

What are immediate next steps here? What are you focused on changing tomorrow?

Sonja: When I share our report with external audiences, I first hear shock and concern about the very idea of using machines to make predictions about people’s repayment behavior. But our technology-enabled future doesn’t have to look like a dystopian sci-fi novel. Technology can increase financial inclusion when deployed well. Our next step should be to start piloting and proof-testing approaches to mitigating algorithmic bias. Women’s World Banking is doing this over the next couple of years in partnership with the University of Zurich and data.org with a number of our Network members, and we’ll share our insights as we go along. Assembling some basic resources and proving what works will get us closer to fairness.

Alex: These are early days. We don’t expect universal alignment on debiasing tools anytime soon, or established best practices for enforcing data protection frameworks in emerging markets. Right now, it’s important simply to get this issue on the radar of those who are in a position to influence and engage with providers, regulators, and investors. Only with that awareness can we start to advance good practice, peer exchange, and capacity building.

Visit the Women’s World Banking and CFI sites to stay up to date on algorithmic bias and financial inclusion.