web3sean.eth

Posted on Dec 07, 2021

Fintechs > Incumbents? (Part 2)

In part I, I discussed broadly why fintech startups were so disruptive for existing financial services companies. If you’ve somehow stumbled onto part II without having read part I, I would encourage you to take a few minutes to check that out (tldr: in the financial services industry, more customers do not equal more profits).

So, with this understanding, the question becomes: how do startups identify & poach the best customers from existing financial services companies?

I would argue there are two primary methods to do this effectively:

1) Develop new data sources

2) Appeal to human psychology and behavior

Develop new data sources

Generating new data sources allows startups to price, attract and retain customers in a better or more cost-effective manner. How do you develop new data sources? The short answer is machine learning. Fintechs collect every kind of attribute and data point for every user, continually building data sets that allow their algorithms to decipher patterns or correlations that would otherwise be unrecognizable to humans. Is the fact that you own a hamster and took more than 25 minutes to fill out the application an indicator that you’re likely to default? Maybe. That’s a simplistic interpretation of what’s really going on, but the idea is that correlations with default exist within all of this data that a human (e.g., a loan officer) wouldn’t be able to determine on their own. There may be correlations based on how long certain fields take to fill out, whether users type only in lowercase letters, or the order in which specific fields are completed. If those algorithms can more effectively classify borrowers as likely to default vs not, then these companies’ underwriting models allow them to better price customers.
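As a toy illustration of the kind of behavioral signal described above (the data, feature names, and the "25 minutes" cutoff here are all made up for this sketch, not any lender's real model), consider splitting applicants on a single logged attribute and comparing default rates:

```python
# Hypothetical behavioral attributes logged during a loan application,
# alongside whether the loan later defaulted. Toy data, not real.
applications = [
    {"minutes_to_complete": 31, "all_lowercase": True,  "defaulted": True},
    {"minutes_to_complete": 8,  "all_lowercase": False, "defaulted": False},
    {"minutes_to_complete": 27, "all_lowercase": True,  "defaulted": True},
    {"minutes_to_complete": 12, "all_lowercase": False, "defaulted": False},
    {"minutes_to_complete": 29, "all_lowercase": False, "defaulted": True},
    {"minutes_to_complete": 6,  "all_lowercase": True,  "defaulted": False},
]

def default_rate(apps):
    """Fraction of a group of applications that ended in default."""
    return sum(a["defaulted"] for a in apps) / len(apps)

# Split applicants on a behavioral signal no loan officer would look at:
# did the application take more than 25 minutes to complete?
slow = [a for a in applications if a["minutes_to_complete"] > 25]
fast = [a for a in applications if a["minutes_to_complete"] <= 25]

print(f"default rate, slow applicants: {default_rate(slow):.0%}")  # 100%
print(f"default rate, fast applicants: {default_rate(fast):.0%}")  # 0%
```

In practice a model would weigh thousands of such features at once rather than one threshold, but the mechanic is the same: surface correlations that are invisible to a human reviewer.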

The glaring problem with this is the issue of adverse impact. If the underwriting models determine that owning a hamster is a great leading indicator of defaults, but the company’s rejected-applicant pool has a disproportionate racial/gender/etc profile, then fair lending laws say that the underwriting model is having an adverse impact. That’s to say that regardless of what is driving the approve/reject decision, the law assumes you’re discriminating against some protected class based on the outputs of your decisioning. That matters because it’s illegal in the United States. Clearly a big deal. There are deficiencies, in my opinion, with how United States law looks at short-term lending – it’s mostly vilified as payday lending preying on unsuspecting consumers – but that’s a separate, albeit related discussion. Suffice it to say that developing new data sources to improve pricing & retention of customers is the bare minimum for fintech startups – without generating innovative insights into the collected data that improve underwriting models, these companies won’t be sustainable long-term.
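One standard screen for the adverse-impact problem described above is the "four-fifths rule", a rough heuristic borrowed from employment-discrimination guidelines and commonly applied in lending outcome analysis: if a group's approval rate falls below 80% of the most-favored group's rate, the model's outcomes warrant fair-lending review regardless of which inputs drove the decisions. A minimal sketch, with hypothetical group names and counts:

```python
# Approval outcomes by applicant group (hypothetical counts).
approvals = {
    # group: (approved, total applicants)
    "group_a": (400, 500),   # 80% approval rate
    "group_b": (270, 450),   # 60% approval rate
}

rates = {g: approved / total for g, (approved, total) in approvals.items()}
best = max(rates.values())

# Four-fifths rule: flag any group whose approval rate is below
# 80% of the most-favored group's rate. Note this looks only at
# OUTPUTS -- it doesn't matter whether the model used hamsters,
# typing speed, or anything else to get there.
for group, rate in rates.items():
    ratio = rate / best
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"{group}: approval {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")
```

Here group_b's impact ratio is 0.75, below the 0.8 threshold, so this hypothetical model would be flagged even if every individual decision were defensible on the data.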

Appeal to human psychology & behavior

It’s much easier to talk about the benefits of new data sources and draw sweeping conclusions that improvements in underwriting are directly attributable to data alone. But that line of thinking has gotten technology companies into trouble in the past, and it fails to account for human behavior throughout the process. Despite what some may have you believe, every loan is not just a collection of data points – there’s a person behind it who likely thinks about that loan quite often. How borrowers enter these startups’ origination funnels, and the actions they take along the way, can provide important insight into an overlooked piece of the puzzle: willingness to pay.

Let’s come back to the SoFi example from part I. If I’m a HENRY borrower with an outstanding student loan, SoFi was essentially telling me – “hey, you’re getting charged the same rate as everyone else even though you have much higher earning potential and a credit score 150 points higher than the average borrower’s”. In simpler terms, SoFi’s marketing message was – “you’re getting screwed; come to us and we’ll price you fairly”. Now, obviously SoFi isn’t able to price these borrowers better (over the long term) without a sophisticated underwriting model that leverages the data sources noted above, but data alone isn’t enough to attract & retain customers.

What else from this SoFi example is important as it relates to psychology and human behavior? The fact that the initial product was a refinance product. Right off the bat you’re incorporating positive selection into the customer segment; no customer ready to default is going to be interested in refinancing their student loan, that’s just not practical. So, you’re already starting from a customer base that has high *willingness to pay* and that is financially fluent enough to understand the benefits of refinancing this debt to a lower rate.

If we think about the counter to this, it becomes even more evident. Let’s take a related pocket of financial services: insurance. Say a new startup comes out with the following messaging – “life insurance plans are too expensive and cumbersome to get; we’ll underwrite new policyholders in 10 minutes or less without requiring blood tests”. Who does that messaging attract & appeal to? Not the healthy applicants, who would likely prefer much more stringent testing and data collection to prove their above-average health & thus receive less expensive insurance policies. Contrast that with those on their deathbed, who would love to receive an insurance policy that doesn’t require blood tests. The point is, how these fintech startups appeal to human psychology matters: to better price, attract & retain customers, they must build products that appeal to the good tail of an incumbent’s normally distributed customer set.
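The adverse selection in that hypothetical pitch can be made concrete with back-of-the-envelope arithmetic (every number below is invented for this sketch): a policy priced for the general population loses money once the marketing disproportionately attracts high-risk applicants.

```python
# Hypothetical no-medical-exam life insurance policy.
POLICY_PAYOUT = 500_000   # death benefit
ANNUAL_PREMIUM = 1_200    # priced against the general population

def expected_profit(mix):
    """Expected annual profit per policy.

    mix: list of (share_of_applicants, annual_death_probability) pairs.
    """
    expected_payout = sum(share * p * POLICY_PAYOUT for share, p in mix)
    return ANNUAL_PREMIUM - expected_payout

# General population: mostly healthy applicants.
general_pool = [(0.95, 0.001), (0.05, 0.02)]
# The pool a "no blood tests" pitch actually attracts: skewed high-risk.
self_selected_pool = [(0.60, 0.001), (0.40, 0.02)]

print(expected_profit(general_pool))        # modest profit per policy
print(expected_profit(self_selected_pool))  # deeply negative
```

The premium didn't change; only who showed up did, which is exactly why appealing to the wrong tail of the distribution is fatal.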

The next order of human behavior development for fintech companies is: how can we either make this good customer segment even better, or pull some of the fat body of the customer distribution (assuming a normal distribution) into the good tail? One area you’ve likely seen this is auto insurance providers giving discounts or cash back for safe driving. A customer has gone 12 months without any incident, so the insurance provider sends a check to reward her for good driving. The idea is that by receiving this check, she now feels incentivized to drive carefully in anticipation of a future check, and so her behavior actually changes/improves. The problem is that the check isn’t necessarily a reflection of her driving skills or how safely/dangerously she drives; she could have driven like a maniac and just lucked out by not having any incidents.

What upstart auto insurance companies are introducing is continuous underwriting – going forward, her speed is monitored, along with where she’s driving, what time of day/night, highway vs city, etc. This dynamic model not only benefits safe drivers (i.e. the good customers) by rewarding their driving, but it also deters bad actors. Let’s say I drive overly cautiously because I know it will get me a good rate, but once I have that great rate I decide to drive like a maniac. Continuous underwriting mitigates this because it immediately re-underwrites me based on my new driving habits. The especially adept fintech companies that can leverage data sources to encourage good behavior, improve existing behavior & mitigate bad behavior have real advantages that existing financial services companies lack.

It’s not that the existing companies don’t have the technology or skills to improve; it’s just far more difficult to re-segment the massive customer bases they serve. Upstart companies can make consumers feel like they’re treated better and actually incentivize them to act better, thus creating even better customers.
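The continuous-underwriting loop described above can be sketched in a few lines (the scoring rules, loadings, and rates here are all hypothetical, not any insurer's actual model): each month the premium is re-derived from the latest telematics data, so a driver who turns reckless after locking in a good rate is re-priced almost immediately.

```python
# Hypothetical base rate and risk loadings for the sketch.
BASE_PREMIUM = 100.0

def monthly_premium(recent_trips):
    """Re-underwrite a driver from the last month of telemetry.

    Each trip is (miles, hard_brakes, share_of_miles_at_night).
    """
    miles = sum(t[0] for t in recent_trips)
    brakes_per_100mi = 100 * sum(t[1] for t in recent_trips) / max(miles, 1)
    night_share = sum(t[0] * t[2] for t in recent_trips) / max(miles, 1)
    # Invented loadings: harsh braking and night driving raise the rate.
    risk_multiplier = 1.0 + 0.05 * brakes_per_100mi + 0.5 * night_share
    return round(BASE_PREMIUM * risk_multiplier, 2)

# Same driver, two consecutive months of behavior.
careful_month = [(100, 1, 0.0), (150, 0, 0.1)]
reckless_month = [(100, 12, 0.6), (150, 9, 0.5)]

print(monthly_premium(careful_month))   # 105.0
print(monthly_premium(reckless_month))  # 169.0
```

Unlike a once-a-year renewal, there is no window in which the maniac keeps the cautious driver's rate: good behavior is rewarded continuously, and bad behavior is re-priced the moment it shows up in the data.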