Financial experts say the capabilities of artificial intelligence platforms are improving to the point that they could one day replace human financial advisors.
However, they said AI has a major drawback compared with human advisors: a lack of fiduciary responsibility. And that legal gray area isn’t likely to be resolved anytime soon, they said.
Fiduciary duty is a legal duty that many financial advisors, as well as professionals in other fields such as lawyers and doctors, owe to their clients. It essentially means that they put their customers’ best interests ahead of their own.
“The question we have to solve is not whether there is enough expertise in AI,” said Andrew Lo, professor of finance at the MIT Sloan School of Management and director of the MIT Laboratory for Financial Engineering. “The answer for now is clearly that AI has the [financial] expertise.”
“What they don’t have is a fiduciary duty,” Lo said. “They don’t have the ability to make mistakes and suffer the consequences the same way human advisors do.”
Lo said advisors who violate their fiduciary duty can face fairly serious consequences, including regulatory penalties, civil liability and criminal charges.
AI, by contrast, can mimic the idea of putting a customer’s interests ahead of its own while carrying no responsibility or liability for doing so, he said.
“Unresolved” legal issues
Many people are turning to large language models (such as OpenAI’s ChatGPT, Anthropic’s Claude and Google’s Gemini) for financial advice.
An Intuit Credit Karma poll released in September found that two-thirds, or 66%, of Americans who have used generative AI say they have used it for financial advice. Among millennials and Gen Z, the share rises to 82%.
The survey of 1,019 adults found that approximately 85% of respondents who had used GenAI for financial advice said they acted on the recommendations provided.
“People are turning to these services for all kinds of advice, and they’re getting it, and this seems like a big unresolved regulatory issue,” said Sebastian Benthall, a senior fellow at the Information Law Institute at New York University School of Law.
“Who is actually responsible? And can people really rely on a product to do something like this without the backing of a company that has a fiduciary responsibility?” Benthall said. “It’s really unresolved.”
Why you shouldn’t blindly trust AI (or humans)
That said, there are some good use cases for AI in financial planning, Lo said.
AI is “very good” at providing resources online about various financial concepts that the average person may not understand, Lo said. For example, if someone wants answers to basic questions about Medicare, AI can usually provide a reliable summary, he said.
Although AI output is sophisticated in many areas of finance, Lo said consumers generally should not blindly trust its answers to questions about their household finances.
“You need to be very careful when making very specific calculations about your own personal circumstances,” he said. “One of my particular concerns about LLMs is that no matter what you ask, you always get an answer that sounds authoritative, even if it isn’t.”
For that reason, he said, there is “a real need” to double- and triple-check the AI’s answers.
Perhaps surprisingly, AI is not good at doing financial calculations, Lo said. So numbers-based financial planning questions related to taxes, for example, are usually best avoided.
They don’t have the ability to make mistakes and suffer the consequences the same way human advisors do.
Andrew Lo
Professor of finance and director of the Laboratory for Financial Engineering, MIT Sloan School of Management
In a March social media post, James Burnham, head of legal and government affairs at Elon Musk’s xAI, said the company’s AI platform Grok “doesn’t provide tax advice, so always check for yourself.”
Of course, many human financial advisors provide advice to their clients, and it is up to the client to decide whether to act on it or not.
“That’s the way I think about LLMs. They are very helpful in laying out different options and explaining how those options work. But you always have to remember that the advice they give you can be wrong,” Lo said.
“But I would argue that also applies to human financial advisors,” he said.
Not all human advisors are fiduciaries.
Not all human financial advisors are fiduciaries either.
The financial advice field is a minefield of varying legal relationships. The obligations owed to a consumer may differ depending on whether they are speaking with a stockbroker, registered investment advisor, insurance agent or other intermediary.
For example, a U.S. Department of Labor rule issued during the Biden administration would have imposed fiduciary duties on intermediaries who recommend rolling funds from 401(k) plans into individual retirement accounts, transactions that can involve hundreds of thousands of dollars.
But that rule was recently scrapped after the Trump administration stopped defending it in court. That means many financial intermediaries owe no fiduciary duty when advising on rollovers, and legal experts recommend that consumers approach such recommendations with caution, given the potential for conflicts of interest.

New York University’s Benthall raised a similar legal predicament regarding AI advice. Today’s big AI companies are largely based in the U.S., so if an AI model steers investors’ retirement savings toward U.S. stocks, that advice could arguably amount to self-dealing or a financial conflict of interest.
However, Jiaying Jiang, an associate professor at the University of Florida Levin College of Law who studies AI and fiduciary duty, noted that companies providing AI services are not being paid to advise retail investors and are therefore not fiduciaries.
Who is actually responsible? And can people really rely on a product to do something like this without the backing of a company that has a fiduciary responsibility? It’s really unresolved.
Sebastian Benthall
Senior fellow, Information Law Institute, New York University School of Law
But financial advisors who owe a fiduciary duty to their clients could breach that duty by using AI, Jiang said.
For example, if an advisor uses AI to give a specific recommendation to a client, and that recommendation is not in the client’s best interest, the advisor, not the company backing the AI platform, is responsible, Jiang said.
Ultimately, Lo said he believes government policy needs to change to provide fiduciary protection for consumers who receive financial advice from AI.
Until then, “we won’t get to the point where we can fully delegate these [financial] decisions,” Lo said.
“But I believe it will happen eventually,” he said.
