OpenAI Shakes Up Safety Team Again, Creates Futurist Role

OpenAI Restructures Safety Efforts Amid Leadership Changes

In what's becoming a familiar pattern, OpenAI has disbanded another core safety team, this time the "Mission Alignment" group formed just 18 months ago. The move comes alongside the creation of an intriguing new executive role: Chief Futurist.

Safety Goes Distributed

The dissolved Mission Alignment team had been tasked with ensuring artificial general intelligence (AGI) development benefits humanity. Its members weren't laid off but reassigned throughout OpenAI's growing structure.

"This reflects our evolution," explained a company spokesperson. "Rather than siloed oversight, we're integrating safety thinking across all teams." Industry analysts see this as part of OpenAI's maturation: moving from theoretical concerns to practical implementation as products like ChatGPT become mainstream.

Meet the Chief Futurist

The reorganization brings a silver lining for Josh Achiam, the Mission Alignment team's former leader. His new position as Chief Futurist will focus on:

  • Long-term AGI impact research
  • Collaborating with technical teams on future scenarios
  • Improving public understanding of AI's trajectory

"This isn't about crystal balls," Achiam clarified in his announcement. "It's rigorous work preparing for transformations we can anticipate and those we can't."

A History of Flux

The changes continue OpenAI's pattern of frequent safety structure overhauls:

  • 2023: Superalignment team dissolved
  • 2024: Mission Alignment team formed
  • 2026: Current distributed model adopted

The company maintains these aren't retreats from safety but evolutions in approach. Still, critics question whether embedded oversight can match dedicated teams' focus.

Key Points:

  • OpenAI dissolves second safety team in three years
  • Safety functions now distributed across departments
  • Former safety lead becomes inaugural Chief Futurist
  • Move reflects company's shift from theory to product focus

