Ever since the demo, we’ve all been dying to use ChatGPT’s Advanced Voice Mode. But after a few legal hurdles and launch delays, it has arrived held back by usage limits, missing features, and a few misguided options that keep it from being the movie-grade assistant we imagined.
ChatGPT Advanced Voice Mode review – Everything you need to know
Even in the short window OpenAI allows you to converse with the new model each day, you can make a fair assessment of its possibilities, problems, and potential. With that in mind, here are my brutally honest thoughts on ChatGPT’s Advanced Voice Mode: what’s great, what’s not, and why the dream of an assistant with a sexy voice is still a few iterations away.
With Advanced Voice Mode now rolled out to everyone with a ChatGPT account on the mobile app, OpenAI lets anyone hold conversations with its supposedly groundbreaking voice-to-voice model. Free users get no more than 15 minutes of conversation per month, while Plus users get roughly an hour per day, subject to a shifting daily cap based on server capacity. Once that time runs out, you’re bumped back to the much slower, far less inspiring Standard voice mode.
But before you start chatting, temper your expectations. Many features shown off during the demo are still unavailable to both Free and Plus users. Advanced Voice Mode currently isn’t multimodal and can’t analyze sounds, images, or video, so it can’t read your paperback or tell you which finger you’re holding up. I also couldn’t get it to sing or identify the instrument I was playing (a guitar). Several promised features are clearly yet to come.