
AI-Powered Ransomware 'PromptLock' Threatens Multiple Platforms

First AI-Generated Ransomware Emerges as Cross-Platform Threat

Cybersecurity researchers have identified PromptLock, the world's first confirmed AI-powered ransomware, marking a dangerous evolution in cyberattack methodologies. Discovered by ESET's threat intelligence team, this malicious software represents a significant leap in offensive cybersecurity capabilities by leveraging artificial intelligence.

How PromptLock Operates

The ransomware uses OpenAI's open-weight gpt-oss:20b language model to generate malicious Lua code directly on compromised devices. Unlike traditional ransomware, which ships pre-written attack scripts, PromptLock generates its payload dynamically at runtime from AI-produced code.


Key operational characteristics include:

  • Cross-platform functionality (Windows, Linux, macOS)
  • Local code generation avoiding cloud-based detection
  • File search, theft, and encryption capabilities
  • High adaptability through prompt engineering

Technical Implementation Challenges

The gpt-oss:20b model poses practical deployment hurdles: roughly 13GB of weights and substantial VRAM requirements. Attackers circumvent these limitations through:

  1. Internal proxy networks
  2. External server tunneling
  3. Ollama API integration for remote model access
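The Ollama integration mentioned above works because Ollama exposes a simple HTTP API for local models. The sketch below shows, in benign form, how a program could ask a locally hosted gpt-oss:20b for generated code via Ollama's documented `/api/generate` endpoint on its default port 11434; the prompt and the harmless Lua task are illustrative assumptions, not the actual prompts used by PromptLock.

```python
import json
import urllib.request

# Ollama's documented default endpoint for one-shot text generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generation_request(model: str, prompt: str) -> dict:
    """Build the JSON payload Ollama's /api/generate expects.

    With "stream": False the server returns a single JSON object
    whose "response" field holds the generated text.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def generate_code(model: str, prompt: str, url: str = OLLAMA_URL) -> str:
    """Send the prompt to a locally running Ollama server.

    Requires an Ollama instance with the model pulled; raises
    URLError if no server is listening.
    """
    payload = json.dumps(build_generation_request(model, prompt)).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Illustrative, benign request: the point is that code arrives as model
# output at runtime rather than as a script shipped inside the binary.
payload = build_generation_request(
    "gpt-oss:20b",
    "Write a Lua function that lists the files in a directory.",
)
print(payload["model"])  # gpt-oss:20b
```

Because the model runs locally, this traffic never touches a cloud API, which is exactly why cloud-side abuse monitoring cannot see it.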

"This represents a paradigm shift in malware development," explained an ESET spokesperson. "The AI component allows for unprecedented adaptability across systems and environments."

Security Community Response

While currently classified as a proof-of-concept, security experts express grave concerns:

"Current defense systems aren't prepared for AI-generated malware that can modify its behavior in real-time," warned John Scott-Railton of Citizen Lab.

OpenAI has acknowledged the report, stating they've implemented safeguards to prevent model misuse while continuing to enhance protective mechanisms.

Implications for Cybersecurity Defense

The emergence of PromptLock signals several critical developments:

  1. Local AI model exploitation as a new attack vector
  2. Increased difficulty in signature-based detection
  3. Need for behavioral analysis-focused security solutions
  4. Potential for rapid malware evolution through prompt iteration
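Point 3 above suggests one concrete behavioral signal: malware of this class must talk to a local model server, so unexpected processes connecting to an LLM endpoint (Ollama listens on TCP 11434 by default) are worth flagging. The sketch below is a hypothetical heuristic over observed network connections; the allow-list and data shapes are assumptions for illustration, not a production detector.

```python
from dataclasses import dataclass

# Ollama's default listening port; other local LLM servers would be
# added here with their own well-known ports.
LLM_PORTS = {11434}

# Hypothetical allow-list of binaries expected to talk to a local model.
EXPECTED_CLIENTS = {"ollama", "open-webui"}

@dataclass
class Connection:
    process: str      # name of the binary that opened the connection
    remote_port: int  # destination port

def flag_suspicious(connections: list[Connection]) -> list[str]:
    """Return names of processes that contact an LLM port but are not
    on the allow-list -- a coarse behavioral indicator, not proof."""
    return [
        c.process
        for c in connections
        if c.remote_port in LLM_PORTS and c.process not in EXPECTED_CLIENTS
    ]

observed = [
    Connection("open-webui", 11434),   # expected chat front-end
    Connection("invoice.exe", 11434),  # unknown binary querying the model
    Connection("firefox", 443),        # unrelated HTTPS traffic
]
print(flag_suspicious(observed))  # ['invoice.exe']
```

A signature on the ransomware binary would miss AI-generated payloads entirely; watching *who* talks to the model is one of the behavior-based approaches the article calls for.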

The cybersecurity community faces urgent challenges in developing countermeasures against this new class of AI-powered threats.

Key Points:

  • First confirmed case of AI-generated ransomware in active development
  • Uses OpenAI's gpt-oss:20b model for local malicious code generation
  • Targets Windows, Linux, and macOS systems
  • Demonstrates potential for rapid adaptation through prompt engineering
  • Highlights critical need for next-generation cybersecurity defenses

