OpenAI shares preview of new AI voice technology amid rising deepfake concerns

BY JULIA SHAPERO - 04/01/24 


The OpenAI logo is seen on a mobile phone in front of a computer screen displaying output from ChatGPT, March 21, 2023, in Boston. A barrage of high-profile lawsuits in a New York federal court, including one by the New York Times, will test the future of ChatGPT and other artificial intelligence products. (AP Photo/Michael Dwyer, File)


ChatGPT maker OpenAI shared a preview of a new artificial intelligence (AI) tool Friday that can generate “natural-sounding speech” and mimic human voices.

The tool, called Voice Engine, requires only “a single 15-second audio sample to generate natural-sounding speech that closely resembles the original speaker,” OpenAI said in a blog post.

The AI startup highlighted that Voice Engine can provide reading assistance, translate content and offer a voice to those who are nonverbal or suffer from a speech condition. However, OpenAI acknowledged that the tool could bring “serious risks, which are especially top of mind in an election year.”

The company first developed Voice Engine in late 2022 and began privately testing it with a “small group of trusted partners” late last year.

OpenAI emphasized that these partners have agreed to its usage policies, which require explicit and informed consent from the original speaker and prohibit the impersonation of individuals without their consent.

The partners also must disclose that the voices are AI-generated, and any audio generated by Voice Engine features watermarking to help trace its origin, the company noted.

OpenAI said it believes the widespread deployment of any such tool should feature voice authentication to “verify that the original speaker is knowingly adding their voice to the service,” as well as a “no-go voice list” to prevent the creation of voices similar to prominent figures.

The company also recommended that institutions phase out the use of voice-based authentication to access bank accounts and other sensitive information.

OpenAI still appeared somewhat uncertain about whether it would ultimately release the tool more widely.

“We hope to start a dialogue on the responsible deployment of synthetic voices, and how society can adapt to these new capabilities,” OpenAI said in the blog post. “Based on these conversations and the results of these small scale tests, we will make a more informed decision about whether and how to deploy this technology at scale.”

The new voice technology comes amid growing concerns about the potential for AI-generated deepfakes to spread election-related misinformation.

Earlier this year, a robocall imitating President Biden's voice went out to voters in New Hampshire ahead of the January primary election, urging them not to head to the polls.

Steve Kramer, a veteran Democratic operative, later admitted to creating the fake robocalls and said he did so to draw attention to the dangers of AI in politics.

A local Arizona newsletter similarly released an AI-generated deepfake video of Republican Senate candidate Kari Lake last month in order to warn readers “just how good this technology is getting.”
