
New AI Vulnerability: Image Resampling Used for Attacks


Cybersecurity researchers at Trail of Bits have discovered a critical vulnerability affecting multiple AI systems that process images. The attack exploits standard image resampling to hide malicious commands that become visible only after the system's own preprocessing downscales the image.

How the Attack Works

The technique, dubbed the "image resampling attack," takes advantage of the way AI systems routinely reduce image resolution for efficiency before passing images to a model. Attackers craft pixel patterns that look like ordinary image content at full resolution but resolve into readable text after being processed by common downscaling algorithms such as bicubic interpolation.
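The underlying principle is aliasing: a downscaler keeps only a fraction of the original pixel information, so content placed exactly where the sampler looks survives while the surrounding cover image is discarded. The toy sketch below illustrates this for the simplest case, a hypothetical pipeline that downscales 8x with nearest-neighbor sampling keeping every eighth pixel; real attacks against bilinear and bicubic filters require more elaborate pixel crafting.

```python
import numpy as np

SCALE = 8  # assumed downscaling factor of the hypothetical pipeline

# Bright random "cover" that looks like harmless noise at full resolution.
cover = np.random.randint(200, 256, size=(512, 512), dtype=np.uint8)

# Dark 64x64 payload, e.g. pixels spelling out an injected instruction.
payload = np.zeros((64, 64), dtype=np.uint8)

# Place the payload only on the pixels the sampler will keep.
attack = cover.copy()
attack[::SCALE, ::SCALE] = payload

# What the model sees after nearest-neighbor downscaling: the payload, intact.
downscaled = attack[::SCALE, ::SCALE]
assert np.array_equal(downscaled, payload)
```

Filters that average pixel neighborhoods, like bicubic, do not copy single pixels, so a real exploit instead tunes pixel values so the weighted average in each output cell lands on the desired text color; the researchers' Anamorpher tool (described below) automates this crafting for testing purposes.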


Widespread Impact on Major Platforms

In controlled tests, the researchers successfully compromised several prominent AI systems, including:

  • Google Gemini CLI
  • Vertex AI Studio
  • Google Assistant
  • Genspark

In one alarming demonstration, the attack used Gemini CLI to exfiltrate Google Calendar data to an external email address without user consent.

Defensive Measures Proposed

The research team has released Anamorpher, an open-source tool that generates attack images so security professionals can test their own systems for this vulnerability. They recommend three key defensive strategies:

  1. Strict size limits on uploaded images
  2. Preview functionality showing post-resampling results (sketched below)
  3. Explicit user confirmation for sensitive operations like data exports
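The second measure is straightforward to implement with a standard imaging library. Here is a minimal sketch using Pillow; the target resolution and filter set are assumptions to be replaced with whatever the deployed pipeline actually uses:

```python
from PIL import Image

# Assumed values: substitute the real target resolution and the
# resampling filter(s) of your model's preprocessing pipeline.
TARGET_SIZE = (768, 768)
FILTERS = {
    "nearest": Image.Resampling.NEAREST,
    "bilinear": Image.Resampling.BILINEAR,
    "bicubic": Image.Resampling.BICUBIC,
}

def post_resampling_previews(path: str) -> dict:
    """Render what the model will actually see after downscaling, so any
    hidden text becomes visible to the user before the image is submitted."""
    img = Image.open(path).convert("RGB")
    return {name: img.resize(TARGET_SIZE, resample=f) for name, f in FILTERS.items()}

# Usage: surface these previews in the upload UI for user review.
for name, preview in post_resampling_previews("upload.png").items():
    preview.save(f"preview_{name}.png")
```

Rendering a preview per filter matters because an attack image is typically tuned to one specific downscaling algorithm, and the hidden text only emerges under that filter.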


The researchers emphasize that while these measures help, the ultimate solution requires fundamental redesigns of system architectures to prevent such prompt injection attacks entirely.
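Until such redesigns land, the third measure above can be enforced as a thin gate in an agent's tool-dispatch loop, so injected instructions cannot trigger side effects silently. A minimal sketch follows; the tool names are hypothetical:

```python
# Hypothetical set of tools whose effects leave the local machine.
SENSITIVE_TOOLS = {"send_email", "export_calendar", "write_file"}

def dispatch_tool_call(name: str, args: dict, execute) -> str:
    """Require explicit user confirmation before any sensitive tool runs,
    regardless of where the model's instruction originated."""
    if name in SENSITIVE_TOOLS:
        answer = input(f"Model wants to call {name}({args}). Allow? [y/N] ")
        if answer.strip().lower() != "y":
            return "call denied by user"
    return execute(name, args)
```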

Key Points:

  • New attack vector exploits standard image processing in AI systems
  • Malicious commands hidden in images emerge after resampling
  • Affects major platforms including Google's AI services
  • Researchers provide detection tool and mitigation recommendations
  • Highlights need for more secure system design patterns

