OpenAI Unveils Its Most Expensive AI Model Yet: o1-pro
The company claims that o1-pro is designed to think harder and deliver more accurate answers than the standard o1 by spending more compute at inference time. But there's a catch: it's expensive. Very expensive.
For developers eager to test o1-pro, there's a price to pay, literally. OpenAI is charging $150 per million input tokens (roughly 750,000 words) and $600 per million tokens generated by the model. That's twice what OpenAI's GPT-4.5 charges for inputs and a staggering 10 times the original o1's price for outputs.
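At those rates, per-request costs add up quickly. A minimal sketch of the arithmetic (the token counts in the example are hypothetical, only the per-million rates come from OpenAI's announced pricing):

```python
# Announced o1-pro API rates, expressed per token.
INPUT_RATE = 150.00 / 1_000_000   # $150 per 1M input tokens
OUTPUT_RATE = 600.00 / 1_000_000  # $600 per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of a single o1-pro API call."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Hypothetical call: a 10,000-token prompt producing 5,000 tokens of output.
cost = request_cost(10_000, 5_000)
print(f"${cost:.2f}")  # -> $4.50 for one request
```

At these prices, a single moderately sized request costs dollars rather than cents, which is why the value question below matters so much to developers.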
Is o1-pro Worth the Price?
OpenAI believes the improved reasoning capabilities of o1-pro will justify the hefty price tag. In a statement to TechCrunch, a company spokesperson said:
“o1-pro in the API is a version of o1 that uses more computing to think harder and provide even better answers to the hardest problems. After getting many requests from our developer community, we’re excited to bring it to the API to offer even more reliable responses.”
But is it really that much better? Early users aren't entirely convinced. o1-pro has been available to ChatGPT Pro subscribers since December, and feedback has been mixed: some users found that it struggled with basic Sudoku puzzles and was tripped up by simple optical-illusion jokes.
Even OpenAI's own internal benchmarks from late last year didn't paint an impressive picture. o1-pro performed only slightly better than o1 on coding and math tasks, and the improvements weren't groundbreaking. The one clear benefit? It delivered answers more consistently.
So the big question remains: will developers see enough value in o1-pro to justify its sky-high cost, or will they stick with more affordable alternatives? Only time will tell.