
OpenAI Warns Users Could Become Emotionally Hooked on Its Voice Mode

Source: Wired

In late July, OpenAI began rolling out a humanlike voice interface for ChatGPT, sparking discussion about AI safety. The newly published "system card" for GPT-4o highlights concerns about users forming emotional attachments to the voice mode and the risks that may follow. The document outlines a range of potential issues, including amplification of societal biases, spread of misinformation, and misuse in developing harmful substances. It also details testing intended to prevent the model from acting independently or deceptively. While experts acknowledge OpenAI's transparency, they urge fuller disclosure about training data and stress the need for ongoing risk evaluation as real-world usage expands.
