NPR Host Sues Google Over AI Voice That Sounds 'Eerily Like Me'

Radio Veteran Claims AI Stole His Vocal Identity

David Greene, whose voice greeted millions of NPR listeners during his years hosting "Morning Edition," is taking Google to court over what he calls "digital identity theft." The veteran broadcaster alleges that Google's NotebookLM AI note-taking tool features a synthetic male voice that replicates his distinctive speech patterns with unsettling accuracy.

"They Captured My Vocal Fingerprint"

Greene, who now hosts KCRW's "Left, Right & Center," began receiving puzzled messages shortly after NotebookLM launched its podcast feature. "Friends kept asking why I was recording for Google," he told reporters. "When I listened for myself, I got chills. It wasn't just similar; it replicated my speech quirks perfectly."

The broadcaster specifically points to subtle vocal trademarks: his characteristic pauses, the way he stresses certain syllables, even his habitual use of filler words like "um." "My voice isn't just sound waves," Greene insists. "It's how audiences have known me for twenty years."

Google Denies Voice Theft Allegations

Google representatives swiftly countered Greene's claims in a statement to The Washington Post. "The NotebookLM voice was recorded by professional actors under standard industry contracts," a spokesperson explained. "Any resemblance to Mr. Greene is coincidental."

The tech giant maintains that its audio samples come from original recordings made specifically for the project. However, the company declined to identify the actor behind the contested voice or to provide documentation of the recording sessions.

Hollywood Déjà Vu

This legal skirmish echoes recent tensions between AI developers and entertainment professionals. Just months ago, OpenAI found itself in hot water when users noticed ChatGPT's "Sky" assistant bore striking similarities to Scarlett Johansson's sultry tones from Her. The actress publicly criticized what she called "digital impersonation," prompting OpenAI to remove the controversial voice option.

Media law experts see these cases as early skirmishes in what promises to be a prolonged battle over vocal rights in the AI era. "We're entering uncharted legal territory," notes UCLA media professor Daniel Stein. "Current laws weren't written with synthetic voices in mind."

Key Points:

  • Voice as identity: Greene argues his distinctive speech patterns constitute intellectual property
  • Industry precedent: The case follows Scarlett Johansson's dispute with OpenAI over voice similarity
  • Legal gray area: Existing laws offer limited protection against AI voice replication
  • Tech response: Google maintains it used properly licensed actor recordings
