OpenAI’s Altman: Good news: GPT-4.5 has been released! Bad news: It’s huge and expensive
A few weeks ago, CEO Sam Altman announced a “roadmap update” for OpenAI’s upcoming AI model releases, to address how confusing the company’s model and product offerings have become.
Today, the company announced that it’s released its next model, GPT-4.5, as a “research preview” for developers and Pro users.
But you get the sense from the announcement that it is kind of a weird release for the company. In a tweet announcing the release, Altman wrote:
“good news: it is the first model that feels like talking to a thoughtful person to me... bad news: it is a giant, expensive model.”
Altman even apologized for running out of GPUs, which will delay the model’s wider release. Altman said there are “hundreds of thousands coming soon, and i’m pretty sure y’all will use every one we can rack up.”
In a video announcing its release, research lead Mia Glaese said that it’s OpenAI’s “largest and most knowledgeable model yet.”
And don’t call it a “frontier model.”
Glaese said GPT-4.5 is “generally useful and inherently smarter,” but Altman has already said this model is the last of its kind. GPT-4.5 doesn’t use multistep “reasoning” like OpenAI’s o3 models and DeepSeek’s R1, despite an emerging consensus in the industry that this technique is the way forward.
This model might be the last one to be created with the method that powered the first wave of generative AI: shoveling larger and larger amounts of data into larger and larger computing clusters (the kind powered by hundreds of thousands of Nvidia GPUs).
This scaling of “unsupervised learning” leads to fewer hallucinations and a more natural tone, the company said. The press release for the model said:
“By scaling unsupervised learning, GPT‑4.5 improves its ability to recognize patterns, draw connections, and generate creative insights without reasoning.”
Next up is GPT-5, which is expected sometime this year.
But don’t expect GPT-4.5 to blow away the competition. Altman said, “this isn’t a reasoning model and won’t crush benchmarks.”