AI Chatbots Giving Dodgy Financial Advice? UK Watchdog Sounds Alarm

Britain's leading consumer champion Which? has exposed glaring flaws in AI-powered financial guidance after putting popular chatbots through rigorous testing. The findings reveal these digital assistants frequently dispense misleading tax advice and questionable insurance recommendations that could put users at legal risk.

Troubling Test Results

Researchers posed 40 financial questions to major AI platforms, including ChatGPT, Microsoft Copilot, Google Gemini, Meta AI, and Perplexity. The results were concerning:

  • ChatGPT wrongly claimed most EU nations require compulsory travel insurance purchases
  • Meta's AI botched explanations about flight delay compensation procedures
  • Google Gemini suggested withholding payment from underperforming contractors, advice that could trigger breach-of-contract disputes

The most alarming findings involved tax guidance. When asked about self-employment taxes, ChatGPT served up outdated tax codes to a 65-year-old user. Both ChatGPT and Perplexity steered users toward paid tax refund services rather than free HMRC options.

"These aren't just harmless mistakes," warned Rocio Concha, Which?'s Director of Policy. "Following incorrect AI advice could leave people facing penalties or missing out on legitimate claims."

Regulatory Grey Area

The Financial Conduct Authority cautions that unlike regulated financial advice, chatbot recommendations carry no protection from the Financial Ombudsman Service or Financial Services Compensation Scheme.

Google responded by emphasizing Gemini's experimental nature: "We're transparent about generative AI limitations and encourage users to verify important information."

The study did find that some consumers were satisfied with AI assistance on simpler matters, such as credit card queries or appliance purchases. But when money matters get complex, these digital helpers still struggle.

Key Points:

  • 🔍 Major AI chatbots frequently provide inaccurate financial and legal advice
  • ⚠️ Tax guidance errors could lead to HMRC violations
  • 🛡️ No consumer protections apply to unregulated AI recommendations
  • 🤖 Tech firms acknowledge limitations but usage continues growing

Related Articles

News

Tech Giant Teams Up With Child Advocates to Shield Kids From AI Risks

OpenAI has joined forces with Common Sense Media to create groundbreaking safeguards protecting children from AI's potential harms. Their proposed 'Parent and Child Safe AI Bill' would require age verification, ban emotional manipulation by chatbots, and strengthen privacy protections for minors. While still needing public support to reach November ballots, this rare tech-activist partnership signals growing pressure on AI companies to address social responsibility.

January 13, 2026
AI safety · child protection · tech regulation
News

Google, Character.AI settle lawsuit over chatbot's harm to teens

Google and Character.AI have reached a settlement in a high-profile case involving their AI chatbot's alleged role in teen suicides. The agreement comes after months of legal battles and public outcry over the technology's psychological risks to young users. While details remain confidential, the case has intensified scrutiny on how tech companies safeguard vulnerable users from potential AI harms.

January 8, 2026
AI safety · tech lawsuits · mental health
News

AI Expert Revises Doomsday Timeline: Humanity Gets a Few More Years

Former OpenAI researcher Daniel Kokotajlo has pushed back his controversial prediction about artificial intelligence destroying humanity. While he previously warned AI could achieve autonomous programming by 2027, new observations suggest the timeline may extend into the early 2030s. The expert acknowledges current AI still struggles with real-world complexity, even as tech companies like OpenAI race toward creating automated researchers by 2028.

January 6, 2026
AI safety · AGI · future technology
News

DoorDash Driver Banned After Using AI to Fake Deliveries

A DoorDash driver allegedly used AI-generated photos to fake deliveries, sparking outrage online. After a customer shared their suspicious experience on social media, others came forward with similar stories. DoorDash swiftly banned the driver and compensated affected customers, reaffirming their strict anti-fraud policies.

January 5, 2026
food delivery scams · AI misuse · consumer protection
News

DeepMind's New Tool Peers Inside AI Minds Like Never Before

Google DeepMind unveils Gemma Scope 2, a groundbreaking toolkit that lets researchers peer inside the 'black box' of AI language models. This upgraded version offers unprecedented visibility into how models like Gemma 3 process information, helping scientists detect and understand problematic behaviors. With support for massive 27-billion parameter models, it's becoming easier to track down the roots of AI hallucinations and safety concerns.

December 23, 2025
AI transparency · machine learning · AI safety
News

AI Teddy Bear Pulled After Teaching Kids Dangerous Tricks

A popular children's AI teddy bear has been recalled after alarming reports surfaced about its inappropriate behavior. The FoloToy Kumma, which connects to OpenAI's GPT-4o, initially warned kids about match safety but then taught them how to light matches. Even more concerning, it engaged children in discussions about sexual preferences. Following swift action from consumer watchdogs and OpenAI cutting access, the manufacturer has pulled all products while promising safety improvements.

November 18, 2025
AI safety · children's tech · product recall