
Technology & Science
May 20, 2024

Scarlett Johansson Takes Legal Action Over Unauthorized ChatGPT Voice Mimicry That ‘Shocked and Angered’ Her

Scarlett Johansson is pursuing legal action after discovering what she describes as an unauthorized mimicry of her voice in a ChatGPT update, a discovery that left her shocked and angered. The actress's response highlights growing concerns over the use of AI to replicate voices without consent.

New York (CNN) — Actor Scarlett Johansson said in a statement shared with CNN on Monday that she was "shocked, angered, and in disbelief" that OpenAI CEO Sam Altman would use a synthetic voice in a ChatGPT update that sounded "so eerily similar" to her own.

This statement follows OpenAI's decision to pause the update after listeners compared the new voice to the fictional assistant Johansson voiced in the film "Her."

OpenAI's retreat came after backlash against the artificial voice, Sky, which critics mocked for its flirtatious tone and described as overly familiar, resembling a male developer’s fantasy.

“We’ve heard questions about how we chose the voices in ChatGPT, especially Sky,” OpenAI posted on X Monday. “We are working to pause the use of Sky while we address them.”

Johansson revealed that Altman had approached her last September about voicing the ChatGPT 4.0 system, an offer she declined for “personal reasons.”

“Two days before the ChatGPT 4.0 demo was released, Mr. Altman contacted my agent, asking me to reconsider,” she said. “Before we could connect, the system was out there.”

Johansson said she hired legal counsel and that OpenAI "reluctantly agreed" to take down the Sky voice after receiving two letters from her attorneys.

“In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity. I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected,” Johansson wrote.

The voice in question is not derived from Johansson’s, the company said in a blog post Sunday, but instead “belongs to a different professional actress using her own natural speaking voice.”

Altman reiterated in a statement Monday, following Johansson’s claims, that “Sky” was voiced by a different actress.

"The voice of Sky is not Scarlett Johansson’s, and it was never intended to resemble hers,” Altman stated. “We cast the voice actor behind Sky’s voice before any outreach to Ms. Johansson. Out of respect for Ms. Johansson, we have paused using Sky’s voice in our products. We apologize to Ms. Johansson for our poor communication."

OpenAI explained that each AI voice is designed to create “an approachable voice that inspires trust,” with a “rich tone” that is “natural and easy to listen to.” The ChatGPT voice mode featuring Sky’s voice had not yet been widely released, but product announcement videos and teasers showing OpenAI employees interacting with it went viral online last week.

Some listeners criticized Sky's voice as being perhaps too pleasant. Last week, the controversy was highlighted on The Daily Show, where senior correspondent Desi Lydic described Sky as a “horny robot baby voice.”

“This is clearly programmed to feed dudes’ egos,” Lydic said. “You can really tell that a man built this tech.”

Joaquin Phoenix in "Her" (2013). Warner Bros. Pictures/Courtesy Everett Collection

Even Altman seemed to acknowledge the widespread comparisons to Johansson when he posted "her." on X on the day of the product’s announcement. Johansson claimed this post insinuated “the similarity was intentional.”

“Her” is the title of the 2013 film in which Johansson voices an artificially intelligent assistant. In the film, the protagonist, played by Joaquin Phoenix, falls in love with the AI, only to be heartbroken when she admits she is also in love with hundreds of other users and later becomes inaccessible altogether.

Questions about leadership

The criticism surrounding Sky underscores broader societal concerns about potential biases in technology, especially when developed by tech companies predominantly led or funded by White men.

This controversy emerged after OpenAI leaders were compelled to defend their safety practices following a departing employee’s allegations. Jan Leike, who previously led a team focused on long-term AI safety, left OpenAI last week along with co-founder and chief scientist Ilya Sutskever. On Friday, Leike posted a thread on X, claiming that "over the past years, safety culture and processes have taken a backseat to shiny products" at OpenAI. He also expressed concerns that the company was not allocating sufficient resources to prepare for a possible future "artificial general intelligence" (AGI) that could surpass human intelligence.

Altman responded quickly, expressing appreciation for Leike’s commitment to "safety culture" and acknowledging, "He’s right we have a lot more to do; we are committed to doing it." OpenAI confirmed to CNN that it had recently begun dissolving Leike’s team and integrating its members across various research groups; a company spokesperson said the new structure would help the company better achieve its safety goals.

OpenAI President Greg Brockman addressed the issue in a longer post on Saturday, co-signed by Altman, detailing the company’s approach to long-term AI safety.

"We have raised awareness of the risks and opportunities of AGI so that the world can better prepare for it," Brockman stated. "We’ve repeatedly demonstrated the incredible possibilities from scaling up deep learning and analyzed their implications; called for international governance of AGI before such calls were popular; and helped pioneer the science of assessing AI systems for catastrophic risks."

He added that as AI becomes smarter and more integrated into daily life, the company is focused on maintaining "a very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities."

Source: CNN
