Supercomputers
Why countries are seeking to build “sovereign AI”
Nvidia’s CEO says nations can’t afford to miss out on this technology, but an AI created and controlled by a government could be dangerous.
The King of Denmark just “plugged in” his country’s very own supercomputer, with Nvidia CEO Jensen Huang by his side.
The country’s new AI system is named “Gefion,” after a goddess from Danish mythology, and it’s powered by 1,528 of Nvidia’s popular H100 GPUs.
Denmark’s new supercomputer is an example of what Nvidia calls “sovereign AI,” which the company defines as “a nation’s capabilities to produce artificial intelligence using its own infrastructure, data, workforce and business networks.” But for countries seeking to rewrite history and control the information their citizens access, the movement toward sovereign AI comes with serious concerns.
Huang said at the announcement:
“What country can afford not to have this infrastructure, just as every country realizes you have communications, transportation, healthcare, fundamental infrastructures — the fundamental infrastructure of any country surely must be the manufacturer of intelligence.”
Selling its powerful AI GPUs and computing infrastructure to governments is a lucrative new business for the company. Nvidia and its partners have already sold AI systems to India, Japan, France, Italy, New Zealand, and Switzerland, as well as countries with histories of human rights abuses like Singapore and the UAE. In Nvidia’s Q2 2025 earnings press release, Huang cited sovereign AI as one of multiple future “multibillion-dollar vertical markets.”
Nvidia’s pitch to governments argues that building their own AI systems is a strategic move, helping them secure a supply of advanced-computing resources for their scientists, researchers, and domestic industries.
That reasoning echoes the scramble among technology companies to secure enough AI hardware to build their own computing clusters. The competitive race to build increasingly powerful AI models has stoked demand for the kinds of specialized graphics processors, made by Nvidia and others, that power modern large language models.
Seeking to dominate the field and deny its adversaries access to the technology, the United States currently restricts the export of some of Nvidia’s most powerful products, including the H100 GPU, to China and Russia. The US has signed an agreement with OpenAI and Anthropic to grant the National Institute of Standards and Technology early access to new AI models for testing and evaluation, and just announced a National Security Memorandum on AI to protect domestic AI advances as national assets.
But Nvidia is also telling the leaders of foreign governments that building and training their own AI systems can have… other benefits.
At an event in Dubai earlier this year, Huang told Omar Al Olama, the UAE’s Minister of AI, “It codifies your culture, your society’s intelligence, your common sense, your history — you own your own data.”
Today’s advanced “frontier” models are trained by ingesting a massive corpus of human creative output sourced largely from the internet. Of course, not all countries allow their citizens to see the same internet. When a country controls its own AI tools, it can decide which truths they’re trained on.
A group of researchers from the Atlantic Council, a think tank, warned of such dangers related to AI sovereignty in a recent essay.
By invoking the term “sovereignty,” the company is “weighing into a complex existing geopolitical context,” the authors wrote.
As a cautionary example, the authors referred back to China’s 2010 declaration, which stated that Chinese control of the internet was “an issue that concerns national economic prosperity and development, state security and social harmony, state sovereignty and dignity, and the basic interests of the people.”
Just as Chinese internet users won’t find information about the Tiananmen Square massacre due to extensive internet controls, an official Chinese state-owned AI model would likely be trained on propaganda and falsehoods claiming the event never happened, effectively baking censorship into AI applications.
Sherwood News spoke with Konstantinos Komaitis, senior resident fellow at the Digital Forensic Research Lab at the Atlantic Council. Komaitis, one of the authors of the paper, told us that “by using those terms, without clearly thinking about those things,” Nvidia is “inadvertently participating, and perhaps even legitimizing some of those things that we see coming from authoritarian governments.”
Komaitis said that when countries turn away from the international collaboration that made the internet successful, they risk isolation, which can mean fewer benefits for society.
“The openness facilitates innovation; it facilitates democracy; it facilitates participation; it facilitates all those things that democratic countries want and authoritarian countries fear,” Komaitis said.
Nvidia did not respond to a request for comment.