How AI Will Revolutionize Finance

May 25, 2018

ETF.com: Does that mean in 10 or 20 years, AI will be putting us all out of work, freeing us up to become the new version of "web designer" or "yoga teacher"?

Kelly: Maybe we don't need as many financial consultants or whatever as we used to. But we do need some, and they may be paid better than they were before. It's like, every little town needs a Chinese restaurant, but you don't need 100 of them.

But will they be gone completely? Hmm. Take typists. The job is gone because we're all typists now; there are more typists than ever, they just stopped being paid for it. And we all pump our own gas. With the advent of ATMs, we all became bank tellers. And so on. So maybe the work will shift, and the average user will take on more and more of it.

Some tasks are just going away. Some tasks will move to the consumer. And some tasks will be done by a machine. But I don't see any end to people's need to be guided and advised about money, or assisted and coached in grappling with it.

ETF.com: Does AI simplify the financial industry for the humans who are left?

Kelly: There aren't too many things that I'm certain of, but I am very certain that finance and money will become more complicated than they are now. That's the nature of technology: It increases in complexity. It's like taxation: Tax law will never get simpler; it’ll always become more complicated.

There’s a model for working with AIs called "the centaur." The best chess player in the world today is not an AI; it's a human and AI team—a centaur. The best diagnostician isn't an AI, and it's not a human doctor; it's the team of AI and human doctors. So it'll likely be the same for the financial world: The best advisors will be the ones working with AI to advise their clients.

Finance is such a large sector, though, that I think there's going to be plenty of room for people to find a different position to navigate others through this complexity.

ETF.com: One of the pitfalls of artificial intelligence is that it has the potential to bake in bias, whether it's bias from historical data or from the AI's programmers. Are these human/machine "centaurs" how we overcome that bias?

Kelly: You're absolutely right. Every AI—and, by the way, every human—is biased. Every system is biased. You can't make unbiased systems, just as we can't make unbiased humans. The only remedy is to counter the bias; part of that is to identify what the bias is.

This challenge of identifying biases will become a thing unto itself. It's also part of the ethics of AI: AI arriving at decisions where it can't explain its reasoning. Sometimes these are life-and-death kinds of decisions: parole, say, or whether or not a person gets a mortgage.

There's a field now called "explainable AI," which is basically a second AI that examines the first AI and extracts how it came to its conclusions.

ETF.com: Like a nested system of computers, each checking each other's work?

Kelly: Basically. There will be systems whose job is to identify biases. Then there will be people—and maybe these are some of the new occupations—whose jobs are to balance those biases. I think it's both a problem and an opportunity for lots of businesses, similar to, say, fulfilling compliance requirements.

ETF.com: If everything has AI in it, will the concept of individual privacy as we know it become extinct?

Kelly: I think what might become extinct is our term "privacy," which is a crude term for something that, even at this moment, is not present in the way we think it is. It's also relatively recent, in terms of the evolution of the species. For most of human history, we were all fully aware of what everybody else in our clan was doing all the time.

In my book “The Inevitable,” I talk about "co-veillance," this idea of us being comfortable with mutual surveillance, where I watch you and you watch me. If we get some benefit from it, that feels OK. Part of what we're uncomfortable with right now, though, is that it doesn't seem symmetrical. Like with the Cambridge Analytica/Facebook scandal, we feel that they're watching us, but we don't know what they're doing with it, and we have no way to hold them accountable. We get no benefit.

I don't see any retreat from the fact that more and more of our lives will be tracked somehow. Where I differ from most critics, though, is that I believe the solution to problems created by technology is not less technology, but more.

Privacy issues aren't going to be solved by giving up Facebook. They're going to be remedied by new technologies: AI from bots or companies we hire, or new regulations and software we install. Those things will have their own new problems, which we’ll then have to solve. But we're just not going to retreat from it altogether.

