AI at Money2020

I had the pleasure of hosting a Money2020 panel with Snigdha Kumar, CEO of Brico (a Restive portfolio company), Kate Flocken, a Principal at FS Vector, and Brody Mulderig, an Executive Director at JPMorgan. We discussed the implications of agents (also known as action models or large action models) on the financial industry. These agents use LLM-based systems to take action on a user’s behalf. For example, a user could instruct an agent to pay their credit card bill, and the agent would use LLM-based reasoning to log in to the credit card site and process the payment from the consumer’s bank account.
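
To make the mechanics concrete, here is a minimal sketch of such an agent loop in Python. Everything in it is hypothetical: call_llm() stands in for whatever hosted model an agent might use, and the bank-side “tools” are stubs rather than a real issuer integration.

```python
# Hypothetical sketch of an LLM-based payment agent loop.
from dataclasses import dataclass


@dataclass
class Action:
    tool: str   # which tool the model wants to invoke
    args: dict  # arguments the model supplied


def call_llm(instruction: str, context: list) -> Action:
    """Stand-in for a real model call; a production agent would send the
    instruction plus tool results so far and parse the model's reply."""
    if not context:
        return Action("get_statement_balance", {"account": "card-1234"})
    return Action("pay_from_checking", {"amount": context[-1], "to": "card-1234"})


def get_statement_balance(account: str) -> float:
    return 512.40  # stub: would scrape or call the card issuer


def pay_from_checking(amount: float, to: str) -> str:
    return f"paid ${amount:.2f} to {to}"  # stub: would initiate an ACH


TOOLS = {
    "get_statement_balance": get_statement_balance,
    "pay_from_checking": pay_from_checking,
}


def run_agent(instruction: str, max_steps: int = 5) -> list:
    """The core loop: ask the model what to do, execute it, feed back the result."""
    context = []
    for _ in range(max_steps):
        action = call_llm(instruction, context)
        context.append(TOOLS[action.tool](**action.args))
        if action.tool == "pay_from_checking":
            break  # terminal action for this toy task
    return context


print(run_agent("Pay my credit card bill"))
```

The unpredictability the panel discussed lives in that loop: the model, not the developer, decides which tool to call next, which is precisely what makes these systems harder to reason about than hard-coded screen scrapers.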

Similar technologies have been deployed in the past based on screen scraping, but these new LLM-based agents are much more sophisticated, far easier to deploy and use, and also far less predictable than older systems. We discussed many topics, including the use cases for consumers, liability implications, privacy and transparency, and how these technologies could both enable and defend against scams and fraud.

The panel was unanimous in the view that these systems have much to offer consumers, from saving them time interacting with the financial system to improving customer service. We spent time discussing what the producers of these technologies would need to do to earn consumers’ trust, including how to make these “black box” models more transparent, which is one of the biggest challenges in deploying such tools. Snigdha pointed out that even “open source” models like LLaMA do not publicize their training data, making them more “open weights” than true open source. Companies have been rushing to develop and deploy models, and explaining and documenting them has lagged.

Liability becomes incredibly challenging because, unlike traditional screen-scraping and API-based solutions, there are multiple intermediaries, none of which may have specifically programmed in the ultimate behavior. If consumer harm were to occur, a startup leveraging the LLM could point to the LLM’s having made an error, while the company that trained the LLM could disclaim responsibility for its model’s being directed in a way it did not explicitly endorse. Meanwhile, financial institutions tend to be held liable any time consumer harm occurs, even if they didn’t approve of the activity in the first place.

Brody provided a framework for analyzing banks’ expected willingness to allow interaction with these systems based on the size and reversibility of the transactions. Automating the payment of a credit card bill or an ACH to a known biller is much lower risk than initiating a wire transfer, for example. Ultimately, banks will likely allow or (attempt to) block these agents based on a complex risk calculus that is still developing.
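
As a rough illustration of that calculus, here is a hypothetical sketch in Python. The transaction types, reversibility ratings, and dollar caps are invented for illustration and do not reflect any bank’s actual policy.

```python
# Hypothetical size-and-reversibility risk calculus for agent transactions.
REVERSIBILITY = {            # rough ease of unwinding each payment type
    "card_bill_payment": "high",
    "ach_known_biller": "high",
    "ach_new_payee": "medium",
    "wire_transfer": "low",  # wires are effectively final
}


def allow_agent_transaction(tx_type: str, amount: float) -> bool:
    """Allow an agent-initiated transaction only when reversibility and
    size together keep the worst-case consumer harm bounded."""
    limits = {"high": 10_000, "medium": 1_000, "low": 0}  # illustrative caps
    return amount <= limits[REVERSIBILITY[tx_type]]


print(allow_agent_transaction("ach_known_biller", 500))  # True
print(allow_agent_transaction("wire_transfer", 500))     # False
```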

The regulatory environment is even more unclear. Kate brought up the overarching concern around privacy and consumer knowledge, which is even more difficult to regulate given the opaque nature of these products. There is a strong bias towards information transparency, but protecting against consumer harm (and determining the accountable party) will not be solved quickly.

Finally, we closed by discussing the reality that bad actors are already using these technologies. Brody agreed that banks would likely fast-track some of these LLM-based systems, in contrast to their approach to other technological breakthroughs, because the only way to fight back against the scammers will be to use the same tools against them.

It was a great session that in some ways raised more questions than answers, but it confirmed that the next few years will be quite interesting in financial technology.

Tyler Griffin
Co-Founder & Managing Partner