Chrome's Secret AI Download Sparks Outrage Among Users

A growing number of Windows users have reported unexpected storage shortages on their C drives, tracing the issue back to an uninvited guest: Google Chrome. The popular browser has been downloading a substantial 4GB AI model file in the background, catching users completely off guard.

The Hidden AI Passenger

Security researcher Zephyrianna uncovered Chrome's secret payload: a file named weight.bin quietly residing in the user data directory (C:\Users\AppData\Local\Google\Chrome\User Data\OptGuide\OnDeviceModel). Technical analysis confirms this file contains Gemini Nano, the AI model powering Chrome's built-in smart features.

What's particularly galling for users? The download happens without any notification or consent. "It feels like someone moved into your apartment while you were out," remarked one frustrated developer on tech forums.

Why This Matters

The unauthorized installation raises multiple red flags:

  • Privacy concerns: No opt-in means users have no say about what runs on their machines
  • Performance impact: That 4GB file can choke systems with limited SSD space or older hardware
  • Persistence issues: Even if users manually delete the file, Chrome re-downloads it on restart
  • Transparency failure: Google hasn't explained why this background download was necessary

"For a company that talks about user choice, this is remarkably tone-deaf," noted software engineer Mark Chen. "Many users carefully manage their SSD space, especially on budget laptops where every gigabyte counts."

Taking Back Control

While waiting for Google's response, tech-savvy users have found ways to evict the unwanted AI tenant:

  1. Type chrome://flags/ in your address bar
  2. Disable both "Enables Optimization Guide On Device" and "Prompt API" options
  3. Restart Chrome and manually delete the weight.bin file from its hiding place
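Before deleting anything, users can check how much space the model is actually consuming. The sketch below is a minimal, hypothetical helper based on the directory location reported above; the exact path may vary by Chrome version and Windows profile, and the helper name `dir_size_bytes` is our own.

```python
from pathlib import Path

def dir_size_bytes(path: Path) -> int:
    """Return the total size in bytes of all files under `path` (0 if it doesn't exist)."""
    if not path.exists():
        return 0
    return sum(f.stat().st_size for f in path.rglob("*") if f.is_file())

# Location reported by users; assumed here, and may vary by Chrome
# version and Windows user profile.
model_dir = (Path.home() / "AppData" / "Local" / "Google" / "Chrome"
             / "User Data" / "OptGuide" / "OnDeviceModel")

size_gb = dir_size_bytes(model_dir) / 1024 ** 3
print(f"{model_dir}: {size_gb:.2f} GB")
```

If the directory reports several gigabytes, the model has been downloaded; after disabling the flags and restarting, rerunning the script confirms whether the deletion actually stuck.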

The tech community remains divided on whether these features should be opt-in rather than forced installations. As AI becomes more integrated into everyday software, this incident highlights growing tensions between convenience and user autonomy.

Key Points:

  • Chrome automatically downloads a 4GB AI model without user permission
  • The Gemini Nano file reappears even after manual deletion
  • Disabling experimental features can stop the automatic reinstallation
  • No official explanation yet from Google about this silent installation

