AI Chatbots Giving Dodgy Financial Advice? UK Watchdog Sounds Alarm

Britain's leading consumer champion, Which?, has exposed glaring flaws in AI-powered financial guidance after putting popular chatbots through rigorous testing. The findings reveal that these digital assistants frequently dispense misleading tax advice and questionable insurance recommendations that could put users at legal risk.

Troubling Test Results

Researchers posed 40 financial questions to major AI platforms including ChatGPT, Microsoft Copilot, and Meta's AI. The results were concerning:

  • ChatGPT wrongly claimed most EU nations require compulsory travel insurance purchases
  • Meta's AI botched explanations about flight delay compensation procedures
  • Google Gemini suggested withholding payment from underperforming contractors, advice that could spark breach-of-contract disputes

The most alarming findings involved tax guidance. When asked about self-employment taxes, ChatGPT served up outdated tax codes to a 65-year-old user. Both ChatGPT and Perplexity steered users toward paid tax refund services rather than free HMRC options.

"These aren't just harmless mistakes," warned Rocio Concha, Which?'s Director of Policy. "Following incorrect AI advice could leave people facing penalties or missing out on legitimate claims."

Regulatory Grey Area

The Financial Conduct Authority cautions that unlike regulated financial advice, chatbot recommendations carry no protection from the Financial Ombudsman Service or Financial Services Compensation Scheme.

Google responded by emphasizing Gemini's experimental nature: "We're transparent about generative AI limitations and encourage users to verify important information."

The study did find some consumers satisfied with AI assistance on credit card queries or appliance purchases. But when money matters get complex, these digital helpers still struggle.

Key Points:

  • 🔍 Major AI chatbots frequently provide inaccurate financial and legal advice
  • ⚠️ Tax guidance errors could lead to HMRC violations
  • 🛡️ No consumer protections apply to unregulated AI recommendations
  • 🤖 Tech firms acknowledge limitations but usage continues growing
