DeepSeek’s $6 million AI model just blew a $1 trillion hole in the market. Here’s the only explainer you’ll need on this “Sputnik moment”
A fast-moving story is shaking up the AI industry in many different ways.
Over the weekend, the DeepSeek AI story really exploded. It has many threads, and several of them strike right at the heart of the AI frenzy gripping the world’s biggest tech companies. Let’s break this complicated but fascinating story down.
To catch you up, Chinese startup DeepSeek released a family of new “DeepSeek-R1” AI models, which have burst onto the scene and caused the entire AI industry (and the investors giving it billions to spend freely) to freak out. These models are free, mostly open-source, and appear to beat the latest state-of-the-art models from OpenAI and Meta.
Faster, cheaper, better
What makes these models so noteworthy? Unlike OpenAI’s and Anthropic’s AI models, they are free for anyone to download, refine, and use for any purpose. Meta did something similar with its Llama 3 model, making it free for anyone to download, modify, and use. (Some of DeepSeek’s smaller, distilled R1 models were in fact built on top of Llama.) But there are lots of free models you can use today that are all pretty good.
The big thing that makes DeepSeek’s latest R1 models special is that they use multistep “reasoning,” just like OpenAI’s o1 models, which up until last week were considered best in class. The reasoning process is a bit slower, but it leads to better responses and reveals a “chain of thought” that shows the steps it takes.
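To make “reasoning model” a little more concrete, here is a minimal sketch of querying a hosted reasoning model through an OpenAI-compatible Python client. The endpoint, the model name, and the field carrying the chain of thought are assumptions to verify against the provider’s documentation, not an official recipe:

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepseek.com",  # assumed endpoint; check the docs
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="deepseek-reasoner",  # assumed name for the hosted R1 reasoning model
    messages=[{"role": "user", "content": "How many prime numbers are below 50?"}],
)

message = response.choices[0].message
# Reasoning models typically return their intermediate "chain of thought"
# separately from the final answer; this field name is an assumption.
print(getattr(message, "reasoning_content", None))
print(message.content)
```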
DeepSeek is offering up models with the same secret sauce that OpenAI charges a significant amount for. And OpenAI offers its models only through its own hosted platform, meaning companies can’t download the model, run it on their own servers, and control the data that flows through it. With DeepSeek, you can host the model on your own hardware and control your own stack, which obviously appeals to a lot of industries with sensitive data.
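For a sense of what “host it yourself” looks like in practice, here is a minimal sketch using the Hugging Face transformers library. The checkpoint name is an assumption, and the smaller distilled R1 models are the ones most people could realistically run on their own hardware:

```python
from transformers import pipeline

# The checkpoint name below is an assumption; pick one sized for your hardware.
generator = pipeline(
    "text-generation",
    model="deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
    device_map="auto",  # spread the weights across whatever GPUs/CPU you have
)

# Prompts and outputs never leave your own machine.
result = generator("Explain, step by step, why the sky is blue.", max_new_tokens=256)
print(result[0]["generated_text"])
```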
DeepSeek does offer hosted access to its models, too, but at a fraction of OpenAI’s prices. For example, OpenAI charges $15 per 1 million input “tokens” (the chunks of text, roughly a word or piece of a word, that a model reads and writes). DeepSeek’s hosted model charges just $0.14 per 1 million input tokens. That’s a jaw-dropping difference if you’re running any kind of volume of AI queries.
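Some back-of-the-envelope math shows how fast that gap compounds. The monthly volume below is an illustrative assumption, not real usage data; the prices are the ones quoted above:

```python
# Per-million-token input prices quoted above, in USD.
PRICE_PER_MILLION_INPUT_TOKENS = {
    "openai_o1": 15.00,
    "deepseek_hosted": 0.14,
}

monthly_input_tokens = 500_000_000  # assume a product that sends 500M tokens/month

for name, price in PRICE_PER_MILLION_INPUT_TOKENS.items():
    cost = monthly_input_tokens / 1_000_000 * price
    print(f"{name}: ${cost:,.2f}/month")
# openai_o1: $7,500.00/month
# deepseek_hosted: $70.00/month
```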
Another crazy part of this story — and the one that’s likely moving the market today — is how this Chinese startup built this model. DeepSeek’s researchers said it cost only $5.6 million to train their foundational DeepSeek-V3 model, using just 2,048 Nvidia H800 GPUs (which were apparently acquired before the US slapped export restrictions on them).
For comparison, Meta has been hoarding more than 600,000 of the more powerful Nvidia H100 GPUs and plans to end the year with more than 1.3 million GPUs in total. DeepSeek’s V3 model was trained using 2.78 million GPU hours (the total computing time consumed during training), while Meta’s Llama 3 took 30.8 million GPU hours.
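Those GPU-hour figures and the roughly $5.6 million claim line up if you assume a plausible rental price for GPU time. A quick sanity check, assuming about $2 per GPU-hour (an assumed rate; real cloud prices vary by chip and contract):

```python
# Rough sanity check on the ~$5.6M training-cost claim.
deepseek_v3_gpu_hours = 2_780_000
llama_3_gpu_hours = 30_800_000
assumed_price_per_gpu_hour = 2.00  # USD; an assumption, not a quoted rate

print(f"DeepSeek-V3: ${deepseek_v3_gpu_hours * assumed_price_per_gpu_hour:,.0f}")
# DeepSeek-V3: $5,560,000  -> roughly the $5.6M figure DeepSeek reported
print(f"Llama 3:     ${llama_3_gpu_hours * assumed_price_per_gpu_hour:,.0f}")
# Llama 3:     $61,600,000 at the same assumed rate
```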
And this faster, cheaper approach didn’t just produce a model that matches the leaders; in some cases, it beats them. DeepSeek’s R1 models outscore OpenAI’s o1 on some math and coding benchmarks.
Did we bet on the wrong horse?
So a better, faster, cheaper Chinese AI model just dropped, and it could upend the industry’s big plans for the next generation of AI models. The biggest tech companies (Meta, Microsoft, Amazon, and Google) have been bracing their investors for years of massive capital expenditures, on the consensus view that more GPUs and more data lead to exponential leaps in AI model capabilities. Recently, there have been signs that this “AI scaling law” may have hit a plateau, and if so, Nvidia’s place at the top of the AI food chain may be in peril.
Much of DeepSeek’s success came from using other AI models to generate “synthetic data” to train its own, rather than hunting for new stores of human-written text.
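As a toy illustration of the synthetic-data idea, here is a hedged sketch that uses an existing “teacher” model to generate training examples for a smaller model. The endpoint and teacher-model name are placeholders for illustration, not DeepSeek’s actual pipeline:

```python
import json
from openai import OpenAI

# Assumed endpoint and teacher-model name; both are placeholders.
client = OpenAI(base_url="https://api.deepseek.com", api_key="YOUR_API_KEY")

seed_prompts = [
    "Prove that the square root of 2 is irrational.",
    "Write a Python function that merges two sorted lists.",
]

with open("synthetic_train.jsonl", "w") as f:
    for prompt in seed_prompts:
        answer = client.chat.completions.create(
            model="deepseek-chat",  # assumed teacher model
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content
        # Each line becomes one training example for a smaller "student" model.
        f.write(json.dumps({"prompt": prompt, "completion": answer}) + "\n")
```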
If that bet on zillions of GPUs, Manhattan-size data centers, and hundreds of billions in AI infrastructure investment is wrong, what are we doing here? Cue the massive freak-out in the market today.
Top of the App Store
As if this story couldn’t get any crazier, this weekend the DeepSeek chatbot app soared to the top of the iOS App Store “Free Apps” list. Observers are calling this a “Sputnik moment” in the global race for AI dominance, but there are a lot of things we don’t know.
One thing we do know is that for all of Washington’s freak-out over TikTok leaking Americans’ personal data to China, this AI chatbot is absolutely sending your data to China, and is even subject to Chinese censorship policies. So don’t go asking DeepSeek about Tiananmen Square, the plight of Uyghurs in China, Taiwan’s pro-democracy movement, or who knows what else.
Fallout
This weekend, The Information reported that Meta is indeed freaking out internally, setting up war rooms and rethinking its AI strategy.
The new Trump administration is not going to like this either, as it has been touting a vision of American dominance in AI, with plans to expedite approvals for new power plants and infrastructure to build massive data centers.
It’s unclear how the administration and lawmakers will react to these developments, but events are moving much faster than any branch of government can keep up with.