OpenAI’s o1-pro is the most expensive AI model in the industry
No other model by a major AI company comes even close.
OpenAI just released the pricing for its o1-pro reasoning model, which draws on an insane amount of computing power, "reasoning" through problems in multiple steps to produce better responses to prompts. This computing power doesn't come cheap: the new pricing is the highest for any major model in the industry today, and by a lot.
As a regular human user, you can use a lot of AI tools for free, and if you use them heavily, maybe you pay $20 per month for OpenAI's ChatGPT Plus or Google's Gemini Advanced. But that's not where the money is.
When companies are hooking their services up to AI platforms behind the scenes via an API (application programming interface), the costs can really add up. So what is the standard unit of measure for AI costs?
API pricing for AI models is measured by how much data (words, images, video, audio) you put into a model and how much data gets spit back out to you. The output costs more than the input.
The common measure for this is 1 million "tokens." In AI parlance, a "token" is like an atomic unit of data: when text is input into a model, the words and sentences get broken down into these tokens (often just a few characters each) for processing. For OpenAI's models, one token is roughly four characters in English, so a paragraph is about 100 tokens, give or take.
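That rule of thumb makes for easy back-of-the-envelope math. Here's a minimal Python sketch, assuming the ~4-characters-per-token ratio above (real tokenizers vary by language and content, so treat this as a rough estimate, not an exact count):

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate using the ~4 characters/token rule of thumb for English."""
    return round(len(text) / chars_per_token)

# A ~400-character paragraph works out to about 100 tokens.
paragraph = "x" * 400
print(estimate_tokens(paragraph))  # 100
```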
For a million tokens, think Robert Caro’s epic biography of Robert Moses, “The Power Broker” — which I’m currently halfway through — a 2.3-pound, 1,300-page beast of a book. A rough estimate of this tome comes out to about 850,000 tokens.
If you put 1 million tokens into some of the leading models today, you could probably pay for it with just a few coins. For OpenAI's GPT-4o Mini, the input would cost you only $0.15 and the output $0.60. Google's Gemini 2.0 Flash would cost you a single penny for the input and $0.04 for the output.
OpenAI's o1-pro, by contrast, charges $150 for 1 million tokens of input and $600 for 1 million tokens of output.
In a tweet announcing the pricing, OpenAI wrote, “It uses more compute than o1 to provide consistently better responses.”
It’s worth pointing out that there are huge differences in the capabilities of these models — some are very small and built for specific use cases like running on a mobile device, and others are massive for advanced tasks, so differences in prices are to be expected. But as you can see from the chart, OpenAI’s pricing stands apart from the crowd.
Pricing is a key issue for OpenAI as it struggles to find a viable business model to cover the enormous costs of running these services. The company's recent pivot to release only "reasoning" models like o1-pro going forward means much higher computing costs, as evidenced by reports that solving individual ARC-AGI puzzles cost around $3,400 apiece.
Recently, The Information reported that OpenAI was considering charging $20,000 per month for “PhD-level agents.”
CEO Sam Altman said in January that OpenAI is losing money on its ChatGPT Pro product.
insane thing: we are currently losing money on openai pro subscriptions! people use it much more than we expected.
— Sam Altman (@sama) January 6, 2025
The company is reportedly raising money at a valuation of $340 billion, and in 2024 it was reported to have lost about $5 billion, after bringing in only $3.7 billion in revenue.