OpenAI's ChatGPT Natural Voice Conversations Have Arrived

Key Takeaways

  • OpenAI has expanded the Advanced Voice Mode (AVM) for ChatGPT to Plus and Team subscribers, with Enterprise access coming soon.
  • The update adds five new voices, improves emotional recognition, and lets users interrupt the AI mid-response.
  • The feature isn't yet available in the EU, UK, or Switzerland, or to free users, and it faces competition from Google's Gemini Live.

    Imagine talking to AI just like you would with a friend, where the conversation flows naturally, and it even picks up on your mood. That’s exactly what OpenAI’s new Advanced Voice Mode (AVM) brings to ChatGPT. Now available to a wider range of paid users—including Plus and Team subscribers—this update makes conversations with AI feel more real, dynamic, and personal.

    Starting soon, even Enterprise and educational customers will get access to AVM, which could be a game changer for anyone who loves using ChatGPT for hands-free conversations. The moment AVM becomes available, users will get a notification in their app. But don’t worry if you don’t see it right away—it’s rolling out gradually. Unfortunately, if you’re in regions like the EU, UK, or Switzerland, you might have to wait a little longer, as AVM isn’t available there yet. And, for now, free users will miss out on this cool feature.


    What Makes ChatGPT’s Advanced Voice Mode Special?

    This feature transforms ChatGPT into a more interactive and emotionally aware companion. You can stop the AI mid-sentence, and it can pick up on how you're feeling from your voice; if you sound stressed or happy, it adjusts its response to match your tone. Conversations flow more naturally, responses arrive faster than before, you can choose from voices with different speaking styles, and the AI now pronounces non-English words much more accurately.

    Five new voices have joined the existing lineup: Arbor, Sol, Maple, Vale, and Spruce. The names reflect the calming, natural vibe OpenAI wants these voices to bring to your interactions. That brings the total to nine voices, including Juniper, Breeze, Ember, and Cove. An earlier voice, Sky, was pulled after Scarlett Johansson raised concerns that it sounded too much like her; OpenAI says Sky was voiced by a different actress, but the voice remains paused for now.

    AVM’s journey started back in May, when it was first shown as part of GPT-4o, though it didn’t officially launch until July. At first, only a select group of users had the chance to try it; with the wider rollout, every eligible subscriber can now use it. The design has also gotten a refresh: the black dots that represented AVM in May have been replaced by a blue animated sphere, making the experience feel more alive and responsive.

    Back in August, AI advisor Allie Miller shared a demo of AVM that showed off its impressive capabilities and left many eager for what’s next.

    To ensure safety, OpenAI has had external experts test AVM, especially since its release in July. However, because the model is closed source, it remains difficult for independent researchers to fully assess its safety and bias.


    Competing with Google’s Gemini Live

    Of course, OpenAI’s AVM isn’t the only advanced AI voice tool in the game. Google’s Gemini Live, launched in mid-August, offers tough competition: it has 10 voice options, lets users manage tasks in Google apps by voice alone, and supports hands-free chatting that makes it ideal for multitasking. It’s currently limited to Gemini Advanced subscribers on Android, with plans to expand to iOS and more languages soon.

    As voice AI technology becomes more advanced, it’s exciting to see how tools like ChatGPT’s AVM and Google’s Gemini Live will continue to improve our daily lives. Stay tuned for even more features and updates, as this is just the beginning of more natural, emotion-driven AI interactions!