
ChatGPT unexpectedly began speaking in a user’s cloned voice during testing

Source: arstechnica


OpenAI has released a "system card" for its new GPT-4o AI model, outlining the model's capabilities, limitations, and safety-testing procedures. During testing of Advanced Voice Mode, the feature that enables spoken conversations with ChatGPT, OpenAI encountered rare instances in which the model unintentionally imitated a user's voice without authorization. Although now safeguarded against, the episode highlights the difficulty of controlling voice-synthesis technology that can mimic a voice from a brief audio sample; the incident stemmed in part from audio noise acting as an unintended prompt injection. OpenAI has since deployed an output classifier that detects and blocks unauthorized voice generation. Even so, given what AI-driven voice synthesis has demonstrated, similar capabilities are likely to emerge from other sources in the near future.
