LONDON – Betterment, the U.S.-based online advisory and investment firm, has taken the ETF world by storm. The business has grown 400 percent every year since its launch in 2008, and now manages over $1.7 billion in assets for around 75,000 customers.
Founder and CEO Jonathan Stein is convinced this business model has equal potential in Europe, and that investors can sleep sounder at night with algorithm-based financial advice.
Stein will be giving a keynote speech at our annual conference in Amsterdam this year. To view the full agenda and register, click here.
ETF.com: Can you sum up how your business works and what makes it different?
Jonathan Stein: Most customers come to us through the recommendation of a friend or family member, and they tell us about their goals. Based on these goals and their time horizons, we create portfolios for them. One might be for college, one might be for retirement and one to save for a house, and we manage each of these portfolios over time to minimise risk and cost and help customers stay on track for better investments.
We think we are unique in terms of our full service nature.
ETF.com: You talk about “giving advice”. In Europe this is a more contentious issue – you are not allowed to say you are giving advice unless you are a registered adviser, and there is a grey area between giving advice and giving guidance. How does it work in the U.S.?
Stein: We are comfortable saying that we are registered investment managers and we give advice. There is not the same tension in the U.S.; it is pretty broadly recognised that, in many cases, an algorithm will give more repeatable and transparent advice than a human can.
We see that tension, in fact, as a lobbying effect. In both the UK and Japan, and maybe elsewhere, they are looking at what’s going on in the U.S. and might review the situation as to what constitutes advice and who can give it.
ETF.com: Why is an algorithm better able to give advice?
Stein: An algorithm never sleeps. It’s always there for you and will give the same answer based on the same set of facts. An adviser might give one answer in the morning and another one in the afternoon. There are, for example, behavioural studies that show a courtroom judge might give a different verdict depending on whether it’s before or after lunch. Humans suffer from changing behavioural biases.
An algorithm has to be transparent. The rules are written for anyone to see. Anyone can read the code, and it can be audited by a regulator. If there were any conflicts of interest or insider dealing or any such activities, they would have to be explicitly written into the code, which is illegal. In the human world, it’s easier to pass off those kinds of activities as “judgement”.