OpenAI releases its long-awaited flagship model, GPT-5
Researchers said the new AI model is “significantly less deceptive” than prior models as the company tries to shift expectations from the giant leaps in performance seen earlier in the AI boom.
Looking back to November 2022, when ChatGPT was released to the world as a technical preview, it’s dizzying to think of the incredible progress made, and the hundreds of billions of dollars spent, as AI startups and Big Tech companies rushed to train and improve their ever-larger language models in a furious race to the top of benchmark leaderboards.
Today, OpenAI released its latest flagship large language model, GPT-5, and it might mark the end of the first explosive wave of generative AI and the start of a new era where gains are measured in different ways.
Details leaked overnight, but today in a webcast, OpenAI cofounder and CEO Sam Altman debuted the new model, which comes in four flavors:
GPT-5: “Designed for logic and multi-step tasks.”
GPT-5-mini: “A lightweight version for cost-sensitive applications.”
GPT-5-nano: “Optimized for speed and ideal for applications requiring low latency.”
GPT-5-chat: “Designed for advanced, natural, multimodal, and context-aware conversations for enterprise applications.”
The new models will be available to free, Plus, Pro, and Team users today. ChatGPT Enterprise and Edu users will gain access on August 14, according to OpenAI’s website. Executives highlighted GPT-5’s strengths in code generation, math, physics, and healthcare, as well as its ability to dive deeper into a problem when needed without the user having to choose that mode ahead of time. OpenAI said that “improving factuality” was a priority for GPT-5 and that hallucinations have been reduced.
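For developers, the four variants map onto the familiar cost and latency tradeoffs. Below is a minimal sketch of choosing between them, assuming the new names land as model IDs on OpenAI’s existing Chat Completions endpoint; that endpoint detail and the helper function are assumptions, not something the announcement spells out.

```python
# Minimal sketch: picking a GPT-5 variant by latency/cost needs.
# Assumes the variant names are valid model IDs on the existing
# Chat Completions endpoint (an assumption, not confirmed here).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str, latency_sensitive: bool = False) -> str:
    # "gpt-5-nano" is pitched for low latency; "gpt-5" for
    # logic and multi-step tasks.
    model = "gpt-5-nano" if latency_sensitive else "gpt-5"
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(ask("Summarize the GPT-5 launch in one sentence."))
```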
For safety, a new technique called “safe completions” is meant to ensure GPT-5 can decline potentially harmful requests while still being helpful. Researchers also highlighted work to make ChatGPT less able to deceive its users, saying that the new model is “significantly less deceptive” than prior models.
On the livestream, an OpenAI researcher announced that with the release of GPT-5, the company will be deprecating all of its previous GPT models. Taya Christianson, a spokesperson for OpenAI, told Sherwood News that GPT-5 will become the default so that users no longer have to pick the right model themselves. “Models will remain available for Pro, Team, Enterprise, and Edu tiers for the next 60 days, after which GPT-5 will be the default,” Christianson said.
New features for ChatGPT include the ability to customize the colors of the chat interface and an early “research preview” of chat personalities that can be tailored to a user’s needs.
OpenAI’s website described the major release:
“GPT-5 is OpenAI’s most advanced model, offering major improvements in reasoning, code quality, and user experience. It handles complex coding tasks with minimal prompting, provides clear explanations, and introduces enhanced agentic capabilities, making it a powerful coding collaborator and intelligent assistant for all users.”
End of the old playbook?
The original ChatGPT release in 2022 (powered by GPT-3.5) pushed other tech companies to adopt the same playbook for competing in a brand-new industry with few rules: to win, you needed a bigger, smarter, more capable model that could notch gains on AI leaderboards and achieve record scores on benchmark tests.
The formula seemed deceptively simple: more training data, plus more Nvidia GPUs, equals a smarter model. And it worked, at least for a while.
The horse race that resulted saw OpenAI go up against Anthropic’s Claude, xAI’s Grok, Google’s Gemini, Meta’s Llama, and dozens of other models. Novel generative-AI products, like Midjourney’s text-to-image generation, conversational voice modes, and code-writing assistants such as Microsoft’s GitHub Copilot, were dropping seemingly daily, and the sky appeared to be the limit.
But last year, researchers started seeing smaller and smaller gains when training their giant models, and talk of an AI “plateau” started to emerge. Tech companies had already pledged hundreds of billions in capital expenditures to build jumbo and mega-super-jumbo data centers, in anticipation of training bigger and bigger models.
If generative AI was hitting a plateau, investors might have some questions about the rush to build out all of this extremely expensive infrastructure without a profitable business model to pay for it.
DeepSeek disrupts
In January 2025, Chinese AI startup DeepSeek released its R1 “reasoning” model, which matched or beat the top state-of-the-art AI models in some areas. What caught everyone’s attention was that DeepSeek researchers had used older, slower chips (a consequence of US export controls) and reported training costs of about $5.6 million for the underlying base model, a fraction of what Western tech companies were shoveling into their gargantuan data centers at breakneck speed.
News of the open-source model throttled tech stocks as investors reconsidered Nvidia’s place in the white-hot center of the AI universe.
DeepSeek’s entrance into the field did appear to mark a big shift in strategy for the industry. Meta dedicated a team to analyze DeepSeek’s R1 model, and later offered similar reasoning capabilities with its Llama 4 models.
OpenAI’s o1, released last September, was the first major model to tout “chain of thought” reasoning, but DeepSeek ran with the technique. DeepSeek also used a “mixture of experts” architecture, in which each input is routed to a handful of smaller, specialized expert subnetworks rather than activating one huge, monolithic network, making it far more efficient than the giant models the industry was racing to build.
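To make the efficiency argument concrete, here is a minimal NumPy sketch of top-k mixture-of-experts routing. It illustrates the general technique only; the toy experts, gating matrix, and top-k value are assumptions for demonstration, not details of DeepSeek’s actual architecture.

```python
import numpy as np

def moe_forward(x, experts, gate_W, top_k=2):
    """Route input x to the top_k highest-scoring experts and
    return the probability-weighted mix of their outputs."""
    logits = gate_W @ x                   # one score per expert
    chosen = np.argsort(logits)[-top_k:]  # indices of the best experts
    w = np.exp(logits[chosen] - logits[chosen].max())
    w /= w.sum()                          # softmax over the chosen few
    # Only the selected experts execute; the rest stay idle,
    # which is where the efficiency saving comes from.
    return sum(wi * experts[i](x) for wi, i in zip(w, chosen))

# Toy demo: four "experts" that are just random linear maps.
rng = np.random.default_rng(0)
dim, n_experts = 8, 4
experts = [(lambda W: (lambda v: W @ v))(rng.normal(size=(dim, dim)))
           for _ in range(n_experts)]
gate_W = rng.normal(size=(n_experts, dim))
print(moe_forward(rng.normal(size=dim), experts, gate_W))
```

In production systems the router runs per token inside every MoE layer of the network, but the principle is the same: most parameters sit out of any given forward pass.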
OpenAI’s odd GPT-4.5 release in February came with lowered expectations and the announcement that it would be the last “non-reasoning” model the company would make. At the time, Altman posted about GPT-4.5: “This isn’t a reasoning model and won’t crush benchmarks.” Earlier this week, OpenAI released its new open-weight “gpt-oss” models in a large and a small size. Publishing the model weights lets anyone run them locally for free and fine-tune them for specialized use cases.
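For anyone who wants to kick the tires locally, a sketch of loading an open-weight checkpoint with Hugging Face’s transformers library is below. The repository ID is a placeholder assumption, since the article doesn’t name the exact checkpoints; hardware permitting, the same pattern applies to either size.

```python
# Sketch of running an open-weight model locally with Hugging Face
# transformers. The model ID is a placeholder assumption; swap in
# the actual gpt-oss repository name.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",  # placeholder repo ID
    device_map="auto",           # spread weights across available devices
)

result = generator(
    "Explain chain-of-thought reasoning in one sentence.",
    max_new_tokens=64,
)
print(result[0]["generated_text"])
```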
High school, college, Ph.D.... What’s next?
OpenAI stressed in its GPT-5 demonstrations that while benchmarks have been helpful to measure specific improvements, they might be reaching their limits. OpenAI President Greg Brockman said of GPT-5’s high benchmark scores:
“They’re exciting numbers, but we’re starting to saturate them. When you’re moving between 98% and 99% of some benchmark, it means you need something else to really capture how great the model is. And one thing we’ve done very differently with this model is really focus on not just these numbers, but really on real-world application, it being really useful to you in your daily workflow.”
On the livestream, Altman positioned GPT-5 as a significant evolution from the company’s previous GPT models, and said it was far along on its academic journey:
“GPT-3 was sort of like talking to a high school student. There were flashes of brilliance, lots of annoyance, but people started to use it and get some value out of it. With GPT-4, maybe it was like talking to a college student — real intelligence, real utility. But with GPT-5, now it’s like talking to an expert, a legitimate Ph.D.-level expert in anything, any area you need, on demand that can help you with whatever your goals are.”
Updated at 2:45 p.m. ET on August 7 to include comment from OpenAI.