
Deepfake Phone Attacks Surge, Threatening Enterprise Security

AI-Powered Deepfake Attacks Challenge Corporate Defenses

Recent cybersecurity data paints a concerning picture: 62% of organizations reported artificial intelligence-driven attacks targeting their employees over the past year. Among these threats, deepfake audio phone scams dominate, affecting 44% of companies, with 6% experiencing significant financial or operational damage.

[Image: AI-generated illustration via Midjourney's licensing service]

The Rising Tide of Audio Forgery

The Sophos Global Threat Report highlights both the growing sophistication of attackers' tools and the value of basic defenses:

  • Real-time voice synthesis enables convincing impersonations of colleagues
  • Basic audio screening reduces losses to just 2% among protected organizations
  • Video deepfakes impact 36% of firms, with 5% suffering severe consequences

Chester Wisniewski, Sophos' Global CISO, warns: "The barrier to entry for audio manipulation has collapsed. While spouses might detect anomalies, casual workplace contacts prove far more vulnerable to real-time impersonation."
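The "basic audio screening" the report credits with cutting losses typically amounts to out-of-band verification: hang up and call the person back on a number from the internal directory, never one supplied during the call. A minimal policy sketch of that idea follows; the directory entries, action names, and function names are illustrative assumptions, not part of any vendor's product.

```python
# Hypothetical sketch of an out-of-band callback-verification policy.
# Directory numbers come from an internal source of truth, never from
# the inbound call itself -- that is the entire point of the control.

DIRECTORY = {
    "cfo@example.com": "+1-555-0100",
    "it-helpdesk@example.com": "+1-555-0101",
}

# Requests in this set are never actioned on a live inbound call alone.
SENSITIVE_ACTIONS = {"wire_transfer", "password_reset", "gift_card_purchase"}

def requires_callback(action: str, claimed_identity: str) -> bool:
    """Return True when the request must be re-verified via a directory number."""
    return action in SENSITIVE_ACTIONS and claimed_identity in DIRECTORY

def callback_number(claimed_identity: str) -> str:
    """Look up the trusted number for the claimed identity (KeyError if unknown)."""
    return DIRECTORY[claimed_identity]
```

The design choice is deliberate: the policy does not try to detect a synthetic voice (which real-time forgery makes unreliable); it simply routes sensitive requests through a channel the attacker does not control.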

Emerging Attack Vectors Gain Traction

The report identifies two concerning trends:

  1. Hybrid video/text scams: Attackers briefly display deepfake executives during calls before switching to text-based social engineering
  2. Identity masking: Nation-state actors like North Korea employ AI-generated personas to infiltrate Western businesses

Prompt Injection: The Silent Threat to AI Systems

The survey reveals:

  • 32% of enterprise applications experienced prompt injection attacks
  • Malicious instructions embedded in processed content bypass traditional defenses
  • Integrated systems face particular risk of code execution vulnerabilities

The Gartner team notes these attacks often exploit legitimate AI workflows to exfiltrate sensitive data or manipulate connected tools.
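To illustrate why instructions embedded in processed content slip past traditional defenses, consider the naive countermeasure: scanning untrusted text for instruction-like phrasing before it reaches the model. The pattern list below is an illustrative assumption, not a recommended blocklist; real defenses layer input isolation, output filtering, and least-privilege access for connected tools.

```python
import re

# Hypothetical sketch: flag instruction-like phrases in untrusted content
# before handing it to an LLM. Pattern-matching alone is easy to evade,
# which is why prompt injection remains the "silent threat" described above.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"you are now",
    r"system prompt",
    r"disregard .{0,40}(rules|guidelines)",
]

def looks_like_injection(untrusted_text: str) -> bool:
    """Return True if the text matches a known injection phrasing."""
    lowered = untrusted_text.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)
```

A filter like this catches only known phrasings; paraphrased or encoded instructions sail through, which is why the report stresses architectural controls over string matching.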

Key Points:

🔹 44% of enterprises report deepfake phone call incidents
🔹 Real-time voice forgery costs under $100 per attack
🔹 Video deepfakes remain expensive ($1M+) but see tactical use
🔹 Prompt injection affects nearly 1 in 3 AI-integrated systems


Related Articles

AI-Powered Ransomware 'PromptLock' Threatens Multiple Platforms

Cybersecurity firm ESET has uncovered PromptLock, the world's first AI-driven ransomware. Utilizing OpenAI's gpt-oss:20b model, it generates malicious Lua code locally on infected devices, targeting Windows, Linux, and macOS systems. While currently lacking file-deletion capabilities, experts warn of its potential evolution and the urgent need for defensive measures against this emerging AI-powered threat vector.

August 27, 2025 · cybersecurity, AI-threats, ransomware

AI-Powered Malware Rewrites Its Own Code, Outsmarting Security Systems

Security researchers have uncovered PROMPTFLUX, a new breed of malware that uses Google's Gemini AI to rewrite its code in real time. This shape-shifting tactic allows it to evade detection by traditional security software. While still experimental, the malware's ability to dynamically generate malicious scripts represents a worrying evolution in cyber threats. Experts warn this could signal tougher challenges ahead for cybersecurity defenses.

November 10, 2025 · cybersecurity, AI, threats, malware

New AI Vulnerability: Image Resampling Used for Attacks

Researchers have uncovered a novel attack vector exploiting image resampling in AI systems. Malicious instructions hidden in images become visible after processing, allowing data theft from large language models like Google Gemini. The team has released a tool to help detect such vulnerabilities.

August 26, 2025 · AI security, image resampling, LLM vulnerabilities

Minimal Fake Data Can Skew AI Outputs by 11.2%

A new warning from China's Ministry of State Security reveals that just 0.01% of false text in training data can increase harmful AI outputs by 11.2%. The alert highlights risks across finance, public safety, and healthcare sectors, calling for stronger data governance and regulatory measures to combat AI data poisoning threats.

August 5, 2025 · AI security, data integrity, machine learning

Alibaba Research Exposes macOS/iOS Email Crash Vulnerability

A new security threat discovered by Alibaba Security and Indiana University reveals that malicious emails containing malformed X.509 certificates can instantly crash macOS and iOS systems. The vulnerability affects cryptographic libraries, potentially causing widespread system failures. Researchers developed tools to detect and mitigate these risks.

July 31, 2025 · cybersecurity, Apple, vulnerability

First AI-Powered Malware 'LameHug' Targets Windows Devices

A new AI-based malware, LameHug, has been discovered stealing data from Windows 10 and 11 devices using Alibaba's Qwen LLM. Spread via malicious emails, it dynamically generates attack instructions. Security experts urge users to update antivirus software and exercise caution with unknown attachments.

July 18, 2025 · cybersecurity, AI-malware, Windows-security