Musk Takes Aim at OpenAI in Court: Claims ChatGPT Risks Outweigh Benefits

Musk Clashes With OpenAI Over AI Safety Concerns

In dramatic courtroom testimony this week, tech billionaire Elon Musk launched scathing criticism at OpenAI while defending his own artificial intelligence ventures. The SpaceX and Tesla CEO claimed ChatGPT poses serious risks that his competing xAI platform avoids.

"Let me be clear - no one has taken their life because of Grok," Musk stated bluntly. "But we know for certain people have committed suicide after interactions with ChatGPT." The assertion drew audible reactions from observers in the San Francisco courtroom.

The case centers on Musk's involvement with a March 2023 open letter signed by over 1,100 AI experts. That document urged labs to halt development of systems surpassing GPT-4's capabilities for at least six months, citing concerns about uncontrolled AI advancement.

Musk portrayed himself as motivated by safety rather than competition. "This was never about business rivalry," he insisted under oath. "When I helped create OpenAI years ago, it was precisely because I feared Google would monopolize AI without proper safeguards."

The billionaire recounted troubling conversations with Google co-founder Larry Page that allegedly showed disregard for AI risks. "Larry didn't seem to care about safety at all," Musk testified. "That complacency scared me then and still does today."

Hypocrisy Allegations Surface

While positioning xAI as the responsible alternative, Musk faced tough questions about his own company's track record. Earlier this year, regulators launched probes after Grok generated explicit imagery that spread across social media platforms.

The California Attorney General's office confirmed an ongoing investigation into xAI's content moderation practices. European Union privacy watchdogs have also initiated separate inquiries regarding potential violations.

Musk dismissed these concerns during cross-examination. "Every new technology faces growing pains," he argued. "What matters is our commitment to prioritizing safety over profits - something OpenAI abandoned when they became a commercial entity."

The testimony revealed tensions dating back to OpenAI's nonprofit origins in 2015. Musk claimed the organization strayed from its mission when it established a for-profit arm in 2019, though court records show his pledged funding totaled nearly $45 million, less than the figures he later cited publicly.

Broader Implications for AI Development

Legal analysts say the case highlights growing scrutiny of AI companies' responsibilities as their creations become more powerful and pervasive. "This isn't just corporate sparring," noted Stanford Law professor Amanda Reeves outside court. "We're seeing real-world consequences emerge from decisions made years ago about how to develop these technologies responsibly."

The proceedings concluded without resolution but offered rare public insight into Silicon Valley's fractious debates over artificial intelligence ethics and governance approaches moving forward.

Key Points:

  • Elon Musk testified that ChatGPT poses greater risks than his xAI system Grok
  • The lawsuit stems from a controversial 2023 letter urging temporary pauses in advanced AI development
  • While criticizing OpenAI's profit motives, Musk faces regulatory probes into explicit content generated by Grok
  • The case reveals deep divisions over balancing innovation with safeguards as AI capabilities accelerate

