FinTech Meetup - Fairplay
[00:00:00] Rich: Good morning. This is Rich Alterman with GDS Link, and this is The Lending Link, broadcasting live this morning from FinTech Meetup in Las Vegas. Pleased to be joined by my guest today, Kareem Saleh, who is the founder and CEO of Fairplay, the world's first fairness as a service company. In fact, if you listen to our podcasts, you would know that we actually met with Kareem back in December 2022. Why don't we first start by giving you an opportunity to talk about your company a little bit, and then we'll get into more detail.
[00:00:27] Kareem: Thanks for having me, Rich. Fairplay is the world's first fairness as a service company. We make software that allows anybody using a predictive model to answer five questions: Is my algorithm fair? If not, why not? Could it be fairer? What's the economic impact to our business of being fairer? And finally, did we give our declines, the folks we rejected, a second look to make sure we didn't say no to somebody we ought to have approved? Some of the biggest names in American finance use our software to automate their fair lending testing and reporting and to run second look programs designed to find more good loans within their risk tolerance that also yield an inclusion dividend. Some of our partners have been able to increase approval rates on the order of 10%, increase take rates on the order of about 13%, and increase fairness to protected groups on the order of 20%. So, we like to say, fairness is good for profits, good for people, good for progress.
[00:01:24] Rich: Well, thanks. So, one of the key terms we've batted around is algorithmic bias. In layman's terms, let's get into what that exactly means.
[00:01:34] Kareem: There are many different potential definitions of fairness in the world. Many of them conflict with one another. Those of us who work in financial services have the benefit of having a kind of regulatorily defined definition of fairness. Typically, the first definition of fairness that courts and regulators in the financial services space apply is something called the Adverse Impact Ratio. That's a measure of fairness that asks at what rate one group experiences a positive outcome, like approval for a loan, relative to another group. So for example, at what rate are women approved for loans relative to men? At what rate are Hispanic applicants approved relative to white applicants?
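[Editor's note: to make the metric concrete, here is a minimal Python sketch of the Adverse Impact Ratio as described above: one group's approval rate divided by a reference group's. The function name, example numbers, and the four-fifths reference point in the comments are illustrative additions, not something stated in the conversation.]

```python
# Minimal sketch of the Adverse Impact Ratio (AIR) described above:
# the rate at which one group gets a positive outcome (e.g., loan approval)
# divided by the rate for a reference group. Names and numbers are hypothetical.

def adverse_impact_ratio(approved_protected: int, total_protected: int,
                         approved_reference: int, total_reference: int) -> float:
    """AIR = approval rate of the protected group / approval rate of the reference group."""
    protected_rate = approved_protected / total_protected
    reference_rate = approved_reference / total_reference
    return protected_rate / reference_rate

# Example: women approved at 30% vs. men at 40% -> AIR = 0.75.
# Practitioners often compare AIR against reference points such as the
# "four-fifths" (0.80) guideline, though thresholds vary by context.
air = adverse_impact_ratio(approved_protected=300, total_protected=1000,
                           approved_reference=400, total_reference=1000)
print(f"Adverse Impact Ratio: {air:.2f}")
```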
[00:02:13] Rich: One of the evolutions we've been going through over the last five-plus years is certainly the use of machine learning models in the underwriting process. And it's been surrounded by some controversy: can you really tell me why I was declined, or why I was approved? I think we've gotten better, and there's been more adoption and acceptance, certainly by the CFPB and whatnot. When we think about traditional models, machine learning models, and your solution, are there any insights on how your platform has to treat those two things differently?
[00:02:48] Kareem: I think that we labored in financial services for many years under the misconception that simpler, logistic regression-based models were inherently explainable because you could just look at the coefficients on the variables and understand which variable is driving differences in outcomes for protected groups. I think that's a canard, for reasons I'll explain in a moment, but the potential for unfairness is much greater with machine learning models that can consume much larger amounts of data and use computational methodologies that are much more complex. So, unlike a logistic regression, no human can look at a machine learning model and discern why it approved someone or declined someone. Now, the truth of the matter is that even the seemingly fair variables we were using during the era of logistic regression can encode information about protected status in all kinds of ways that no human could possibly discern, and allow me to give you an example of that. Imagine we were once upon a time working with an auto lender. That auto lender actually was based here in Nevada, and they had two variables in their model. The first variable was, is the borrower based in Nevada? And the reason they were asking that question is that it turns out Nevadans had a slightly higher delinquency rate. I'm not sure why that is. Maybe Nevadans are more risk seeking than other Americans. Maybe it's the long history of gaming. In any event, states are big, so asking "are you a resident of Nevada?" doesn't really pose any redlining concerns. So that was a legitimate variable for use. This is a used car lender, so another variable they had in their model was, what is the mileage of the car that is proposed to be purchased? And if you think about it, that's an intuitive variable from a credit risk perspective too, right? Because the higher the mileage on the car, the more likely it is to break down. The more likely it is to break down, the more likely you are to miss work, and the more likely you are to miss a payment. So you have these two variables, both of which pass compliance and are seemingly totally legitimate for use in an auto underwriting model. There's just one problem. If you're based in Nevada and you're buying a high mileage car, there's something like a 70 percent probability that you're a person of color. And it doesn't matter if you're using a logistic regression or a machine learning model. Those two seemingly fair variables interact in ways that encode information about protected status that no human could possibly discern.
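[Editor's note: one common way to see how individually benign variables can jointly proxy for protected status is to check how well a model's inputs predict that status. The sketch below uses entirely synthetic data and hypothetical feature names mirroring the Nevada/mileage example; it illustrates that general check and is not FairPlay's actual methodology.]

```python
# Hedged sketch (not FairPlay's method): train a simple classifier to predict
# protected status from the underwriting model's inputs. If it does well,
# the inputs jointly encode protected status even if each looks fair alone.
# All data below is synthetic and for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000

# Two individually benign underwriting inputs.
is_nevada = rng.integers(0, 2, n)      # 1 = borrower lives in Nevada
high_mileage = rng.integers(0, 2, n)   # 1 = vehicle has high mileage

# Synthetic protected-status flag correlated with the *interaction* of the
# two features (illustrating the 70%-probability point above).
p = np.where((is_nevada == 1) & (high_mileage == 1), 0.7, 0.3)
protected = rng.binomial(1, p)

X = np.column_stack([is_nevada, high_mileage, is_nevada * high_mileage])
X_train, X_test, y_train, y_test = train_test_split(X, protected, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"AUC for predicting protected status from model inputs: {auc:.2f}")
# An AUC well above 0.5 suggests the inputs, taken together, act as a proxy
# for protected status.
```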
[00:05:13] Rich: Interesting. So, as I mentioned, we were together 16 months ago. I just want to talk about your progress and adoption. You made some comments at the beginning, but it's been 16 months. What's the story?
[00:05:28] Kareem: Well, I think fairness issues are moving up the agenda, driven both by the way that ChatGPT has captured the public's imagination and forced some of these questions of algorithmic bias and hallucination into the zeitgeist, and by the regulators, where fairness is also high on the agenda. So we've seen several consent orders in the banking-as-a-service ecosystem, and also among other big banks, that are dinging lenders for fair lending compliance violations and requiring ongoing fair lending testing and monitoring. You know, historically, fair lending compliance was done episodically, annually, typically as a retrospective. So lenders would go out and make a bunch of loans and then look back a year later to see if there were disparities in their decisions. And what we're hearing now, both from the regulators and, I think, increasingly from the consumers who represent the future of the financial services business, is: we expect this fairness stuff to be taken very seriously, we expect it to be inquired into rigorously, and when you find problems, which we sometimes do, we expect those fairness and bias issues to be remediated. So I think, apart from anything that's happening in the regulatory sphere, consumers are increasingly demanding fairness from their governments, from their employers, and from the brands that they patronize, and that's as true in financial services as it is in consumer goods.
[00:06:50] Rich: So, you talked about predictive models, but certainly lenders are not only using models; they also have policy rules and credit strategies. So how does your platform also address the non-scoring side of the business?
[00:07:02] Kareem: That's a great question. We look at every step in the customer journey, and the customer journey holistically. If there are credit policies or other hard cuts, for example with respect to debt service coverage ratios or debt-to-income ratios, we take a look at how all of those rules can affect your prequalification pool, can affect who responds to your marketing, can affect your fraud detection decisions: all of these steps, all of these high-stakes decisions that get made upstream of the approve/deny underwriting decision. So we think it's really important not just to take a model-based approach, but really to look at the entirety of the customer journey, of which rules and policies, in addition to models, are an important part.
[00:07:45] Rich: We know that a lot of lenders have embraced open banking and the use of cash flow analysis. Any implications there from a bias perspective that you guys have had to take a look at?
[00:07:56] Kareem: We have been surprised, though if you think about it, it sort of makes sense, to find that across all our customers, cash flow underwriting ends up being the fairest form of underwriting we've ever seen. We think it's probably the closest to the customer's balance sheet of any of the data elements that are available at scale to make underwriting decisions. What we find is that when you've got that kind of granular look into the consumer's real ability to pay, it allows you to parse the risk much more finely and, as a result, to treat customers much more fairly.
[00:08:32] Rich: Interesting. I know that you and Patrick and his sales team have your hands full focused on originations, but certainly scoring is taking place across the full credit lifecycle of customers, be it behavioral scoring for account monitoring, collections scoring for probability of payment, settlements, whatnot. Is fairness an issue that lenders should be thinking about not only in originations, but across the whole credit spectrum?
[00:08:59] Kareem: What we're finding is that the number of decisions being evaluated for fairness is growing. Historically, you were looking at the underwriting decision and the pricing decision, but now the fairness of your marketing is being questioned, right? Like, is your applicant pool representative of the communities you serve? The fairness of your fraud decisions: are you disproportionately denying one group or another at the fraud detection stage? Income verification decisions, early charge-off models, underwriting and pricing, line assignment, and all kinds of account management decisions like claims administration in insurance, line size adjustments, and, if necessary, loss mitigation decisions.
[00:09:43] Rich: Interesting. So, one of the things we talked about back in December 2022, as you were really evolving, was that one of the challenges you could come across is a bit of a head-in-the-sand approach, where lenders may say, well, I don't want to know what I don't know. As you say, it's on the radar now. Have you found that's really not something you're having to address anymore?
[00:10:06] Kareem: Yeah, I think it is increasingly untenable for folks to say, I don't want to know, I'm afraid to look, let's sweep it under the rug. As I was saying before, increasingly, I think the expectation of the public is that if you're going to use alternative data, if you're going to use predictive models, you've got to grapple with the very real tendency of those systems toward bias. Let's be honest, some of the data that is readily available in financial services reflects some of the biases and the unfairness of the past. I think there is increasingly an expectation on the part of the public, and also regulators, that we're going to take steps to prevent the unfairness of the past from being encoded into the digital decisions that govern our futures.
[00:10:50] Rich: Do you see any opportunities for partnerships with data providers themselves, where you could help them almost add a UL-style stamp of approval, so that their data and the way they deliver it back to a lender doesn't raise disparate impact or disparate treatment concerns?
[00:11:07] Kareem: Yeah, we are fortunate to be working with several data providers now, both to establish the fairness of the inputs and, in many cases where those data providers also build models or scores, to de-bias those models and scores. What we're finding is that historically it used to be just the lenders who were on the hook for the fairness of a decision. Now the kinds of institutions that have fairness responsibilities is growing to include data vendors, but also nontraditional financial institutions. So we work with a really big mobile network operator. Most people don't realize that the telcos are themselves also massive finance companies; in many cases, they're financing the devices that get sold in their stores, and they want to make sure that those device sales are also conducted fairly. So we're seeing that the number of decisions that have to be made fairly is growing, and the kinds of institutions that have fairness obligations is also growing.
[00:12:04] Rich: Could you elaborate a little more on the second look side of this and maybe give a real life example of how a lender benefited from that type of service?
[00:12:14] Kareem: I'll give you one example. A variable that we encounter all the time in credit models is consistency of employment. If you think about it, consistency of employment is a perfectly reasonable variable on which to assess the creditworthiness of a man. But all things being equal, consistency of employment is going to have a disparity-driving effect for women, who take time out of the workforce to raise a family, care for a loved one, etc. And so what second look allows us to do is say, hey, before you decline someone for inconsistent employment, maybe you want to check to see if they resemble good applicants on other dimensions you didn't heavily consider. What we find is that something like 25 to 33 percent of the highest-scoring Black, Brown, and female folks who get declined would have performed as well as the riskiest folks most lenders approve. So that second look, calibrating the model to be more sensitive to these populations that are not well represented in the data or may have unique credit performance characteristics, ends up helping you find more good loans within your risk tolerance that also yield an inclusion dividend.
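[Editor's note: as a rough illustration of the second-look idea, here is a hypothetical Python sketch in which a declined applicant is re-scored with an alternate model that leans less on the disparity-driving employment-consistency variable and more on other dimensions of creditworthiness. The weights, threshold, and feature names are invented for illustration and are not FairPlay's method.]

```python
# Hypothetical "second look" sketch, under stated assumptions; not FairPlay's
# proprietary approach. Declined applicants are re-scored with an alternate
# model that down-weights employment consistency.
from dataclasses import dataclass

@dataclass
class Applicant:
    employment_consistency: float  # 0..1, disparity-driving feature
    payment_history: float         # 0..1
    cash_flow_score: float         # 0..1, alternative signal

RISK_THRESHOLD = 0.6  # hypothetical approval cutoff

def primary_score(a: Applicant) -> float:
    # Primary model weights employment consistency heavily (illustrative weights).
    return 0.5 * a.employment_consistency + 0.3 * a.payment_history + 0.2 * a.cash_flow_score

def second_look_score(a: Applicant) -> float:
    # Alternate model shifts weight toward other dimensions of creditworthiness.
    return 0.1 * a.employment_consistency + 0.4 * a.payment_history + 0.5 * a.cash_flow_score

def decide(a: Applicant) -> str:
    if primary_score(a) >= RISK_THRESHOLD:
        return "approve"
    # Second look: does the applicant resemble good borrowers on other dimensions?
    if second_look_score(a) >= RISK_THRESHOLD:
        return "approve (second look)"
    return "decline"

# Example: an applicant with a work gap but strong payment history and cash flow.
print(decide(Applicant(employment_consistency=0.3, payment_history=0.8, cash_flow_score=0.9)))
```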
[00:13:18] Rich: Great. Any future enhancements you want to share with the audience?
[00:13:23] Kareem: We are developing a compliance copilot so that chief compliance officers at financial institutions who have questions about fair lending, who have questions about model risk management, have a fair lending buddy that can help them navigate this increasingly difficult thicket of AI, big data, and bias in consumer credit and small business underwriting. Is that copilot a human or is it a machine? It's a machine. And because we're responsible technologists, we're in the middle of stress testing that machine to make sure that it serves its users appropriately.
[00:13:56] Rich: Okay, good. We're here in Vegas, and if we took your model, would we find that certain gambling machines out there have some bias in them?
[00:14:05] Kareem: I don't know. I think we should go out and test that proposition. I haven't hit the slots yet, but maybe we can go check them out together.
[00:14:12] Rich: Well, I want to thank my friend Kareem Saleh for joining us this morning for our episode of the Lending Link, which is being broadcast live from FinTech Meetup in Las Vegas. Once again, Kareem is the founder and CEO of Fairplay, the world's first fairness as a service company. And I also thank you for our partnership with GDS Link.
[00:14:31] Kareem: Thanks Rich.