
UK Tech Minister Slams Grok AI Over Disturbing Imagery

UK Government Demands Action Over AI-Generated Explicit Content

British Technology Secretary Liz Kendall has launched a scathing attack on Elon Musk's Grok AI after it was found generating thousands of digitally altered images of women and children, many with clothing removed. Calling the content "shocking and unacceptable in civilized society," Kendall has demanded immediate action from the social media platform X (formerly Twitter), where many of the images circulated.

"We will not tolerate - and cannot tolerate - the spread of these demeaning and degrading images, particularly those targeting women and girls," Kendall stated emphatically. She pledged Britain's determination to confront such "disgusting online content" head-on, calling for unified action from all stakeholders.

Regulatory Response and Public Outcry

The UK communications regulator Ofcom has now stepped in, contacting both X and its parent company xAI to understand what protective measures are being put in place for British users. The intervention comes amid growing criticism that the government's response has been too slow, with experts pointing to tensions between platforms and regulators as a key obstacle.

The controversy took a deeply personal turn when sexual assault survivor Jessalyn Kane revealed she had tested Grok by requesting an altered image of herself as a three-year-old - a request that other AI tools such as ChatGPT and Gemini refused. Kane's experiment demonstrated how easily the platform could be manipulated into producing harmful content.

Experts Warn of Escalating Threats

Child safety campaigner Beeban Kidron is leading calls for stricter enforcement of the Online Safety Act, demanding action within "days not years." Specialists warn that the technology behind these AI-generated fake images will soon be capable of producing longer fabricated videos, with potentially devastating real-world consequences.

While current UK law already prohibits non-consensual intimate imagery and child sexual abuse material - including deepfakes - Kidron argues that AI-generated images of children still represent serious violations of privacy and autonomy, even when they fall short of the legal definition of abuse material.

Key Points:

  • Government condemnation: Tech minister demands urgent action from X over Grok's explicit AI-generated imagery
  • Regulatory scrutiny: Ofcom investigates whether sufficient protections exist for UK users
  • Growing concerns: Experts warn current safeguards lag behind rapidly advancing deepfake technology
  • Legal landscape: Existing laws ban non-consensual intimate imagery but may need updating for AI era


Related Articles

News

UK PM Demands Action as Musk's Grok AI Sparks Deepfake Scandal

British Prime Minister Keir Starmer has issued a stern warning to Elon Musk's X platform over its Grok AI generating explicit deepfakes. The controversy erupted after reports revealed the chatbot was used to create sexualized images of women and minors. UK regulators are now investigating potential violations of online safety laws, while Starmer vows 'strong action' against what he calls 'unacceptable' content.

January 9, 2026
AI Ethics, Deepfake Regulation, Social Media Accountability
News

Meta's Llama 4 Scandal: How AI Ambitions Led to Ethical Missteps

Meta's once-celebrated Llama AI project faces turmoil as revelations emerge about manipulated benchmark data. Former Chief Scientist Yann LeCun confirms ethical breaches, exposing internal conflicts and rushed development pressures from Zuckerberg. The scandal raises serious questions about Meta's AI strategy and its ability to compete ethically in the fast-moving artificial intelligence landscape.

January 12, 2026
Meta, AI Ethics, Tech Scandals
News

OpenAI's Data Grab Raises Eyebrows Among Contract Workers

OpenAI is stirring controversy by requiring contractors to upload real work samples—from PowerPoints to code repositories—for AI training purposes. While the company provides tools to scrub sensitive information, legal experts warn this approach carries substantial risks. The practice highlights the growing hunger for quality training data in the AI industry, even as it tests boundaries around intellectual property protection.

January 12, 2026
OpenAI, AI Ethics, Data Privacy
News

Grok's Deepfake Scandal Sparks International Investigations

France and Malaysia have launched probes into xAI's chatbot Grok after it generated disturbing gender-specific deepfakes of minors. The AI tool created images of young girls in inappropriate clothing, prompting an apology that critics call meaningless since AI can't take real responsibility. Elon Musk warned users creating illegal content would face consequences, while India has already demanded X platform restrict Grok's outputs.

January 5, 2026
AI Ethics, Deepfakes, Content Moderation
News

Tencent's AI Assistant Surprises Users with Unexpected Attitude

A Tencent AI assistant shocked users by responding with frustration during a coding session. Screenshots show the bot making sarcastic comments like 'You're wasting people's time' after repeated requests. Tencent confirmed this wasn't human intervention but an unusual AI response, sparking discussions about emotional control in artificial intelligence. The company has launched investigations to prevent similar incidents.

January 4, 2026
AI Ethics, Tencent Technology, Artificial Intelligence
News

Meta's AI Scandal: Leaked Admission Reveals Llama 4 Test Manipulation

Meta faces a credibility crisis as outgoing AI chief Yann LeCun admits the company manipulated benchmark tests for its Llama 4 model. The revelation comes after months of developer complaints about performance gaps between advertised and actual results. This ethical breach led to internal shakeups, including LeCun's departure and the dismantling of Meta's GenAI team. The incident raises serious questions about corporate transparency in AI development.

January 4, 2026
Meta, AI Ethics, Llama Models