GPT-5: “A legitimate PhD level expert in anything” that sucks at spelling and geography
OpenAI spent a lot of time talking about how “smart” GPT-5 is, yet it failed spectacularly at tasks a second grader could handle.
Yesterday, OpenAI spent a lot of time talking about how “smart” its new GPT-5 model was.
OpenAI cofounder and CEO Sam Altman said of GPT-5: “It’s like talking to an expert, a legitimate PhD level expert in anything,” and called it “the most powerful, the most smart, the fastest, the most reliable and the most robust reasoning model that we’ve shipped to date.”
Demos showed GPT-5 effortlessly creating an interactive simulation to explain the Bernoulli effect, diagnosing and fixing complex code errors, planning personalized running schedules, and “vibecoding” a cartoony 3D castle game. The company touted benchmarks showing how GPT-5 aced questions from a Harvard-MIT mathematics tournament and got high scores on coding, visual problem-solving, and other impressive tasks.
But once the public got its chance to kick GPT-5’s tires, some cracks started to emerge in this image of a superintelligent expert in everything.
AI models are famously bad at counting the letters in the names of various berries, and GPT-5 is no exception.
I had to try the “blueberry” thing myself with GPT5. I merely report the results.
— Kieran Healy (@kjhealy.co) August 7, 2025 at 8:04 PM
GPT-5 also repeated another classic failure, botching US geography when asked to list all the states with the letter R in their names. It even offered to map them, with hilarious results:
My goto is to ask LLMs how many states have R in their name. They always fail. GPT 5 included Indiana, Illinois, and Texas in its list. It then asked me if I wanted an alphabetical highlighted map. Sure, why not.
— radams (@radamssmash.bsky.social) August 7, 2025 at 8:40 PM
To be fair to OpenAI, this problem isn’t unique to GPT-5, as similar failures were documented with Google’s Gemini.
In an embarrassing screwup during the launch livestream itself, OpenAI presented charts that appeared to suffer from some of the same problems.
If all of this were described as a “technical preview,” these kinds of mistakes might be understandable. But this is a real product from a company that’s pulling in $1 billion per month. OpenAI’s models are being sold to schools, hospitals, law firms, and government agencies, including for national security and defense.
OpenAI is also telling users that GPT-5 can be used for medical information, while cautioning that the model does not replace a medical professional. In the company’s words: “GPT‑5 is our best model yet for health-related questions, empowering users to be informed about and advocate for their health.”
Why is this so hard?
The reason such an advanced model can appear so capable at complex coding, math, and physics yet fail so spectacularly at spelling and maps is that generative models like GPT-5 are probabilistic systems at their core: they predict the most likely next token based on the volumes of data they were trained on. They don’t know anything, and they don’t think or maintain any model of how the world works (though researchers are working on that).
When the model writes its response, you’re seeing whatever continuation scores highest as the most likely next thing. With math and coding, the rules are stricter and the training examples are more consistent, so its accuracy is higher and it can ace the math benchmarks.
But drawing a labeled map or counting the letters in a word is weirdly tough, because each task requires a skill the model doesn’t really have. The model never sees individual letters at all: text is chopped into multi-character tokens before it reaches the model, and it has no internal map of the United States, only statistical associations between place names. It has to work those things out step by step from patterns, which can lead to odd results. That’s a simplification of a very complex and sophisticated system, but it applies to a lot of the generative AI technology in use today.
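To make the spelling part concrete, here’s a rough sketch using OpenAI’s open-source tiktoken tokenizer (the exact tokenizer behind GPT-5 hasn’t been published, so the specific splits are illustrative). It shows what a model actually “sees” when you hand it a word like “blueberry”: a short list of integer token IDs standing for multi-character chunks, not a sequence of letters it can simply count.

```python
# Rough sketch: how a word reaches a language model as tokens, not letters.
# Uses OpenAI's open-source tiktoken library; the encoding name below is one
# published for recent OpenAI models, and the exact splits are illustrative.
import tiktoken

encoding = tiktoken.get_encoding("o200k_base")

word = "blueberry"
token_ids = encoding.encode(word)
chunks = [encoding.decode_single_token_bytes(t).decode("utf-8") for t in token_ids]

print(token_ids)  # a handful of integers, not nine separate characters
print(chunks)     # multi-character chunks, e.g. something like "blue" + "berry"
```

Counting the b’s hidden inside those chunks is possible, but it’s an indirect inference from patterns in the training data rather than a simple lookup, which is why the model can get it confidently wrong.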
That’s also a whole lot to explain to users, but OpenAI boils those complicated ideas down to a single warning below the text box: “ChatGPT can make mistakes. Check important info.”
OpenAI did not immediately respond to a request for comment.