“We’ll continue to add more people on a rolling basis and plan for everyone on Plus to have access in the fall,” OpenAI said.
What does Advanced Voice Mode do?
Effectively, it’s a more powerful chatbot that holds more natural, real-time conversations with a degree of contextual awareness, meaning it can understand and respond to emotion and non-verbal cues. It also processes prompts more swiftly, which significantly reduces latency in conversations, and it lets you interrupt it mid-response at any time to change what it says.
OpenAI first demonstrated the new mode in April, showing how the tool can recognize different languages and translate them in real time. During that demo, employees were able to interrupt ChatGPT, get it to tell stories in different ways, and more. One thing the bot can no longer do is sound like Scarlett Johansson — it now supports only four preset voices, a restriction meant to prevent it from being used for impersonation. OpenAI has also put filters in place to block requests to generate music or other copyrighted audio, reflecting legal challenges raised against song-generating AI firms such as Suno.