Power

White House unveils “America’s AI Action Plan”

A sweeping plan for government-backed AI may sweep aside state regulations.

The Trump administration wants to usher in “a new golden age of human flourishing” powered by AI, a technology the government intends to see folded into every part of our lives.

“AI will enable Americans to discover new materials, synthesize new chemicals, manufacture new drugs, and develop new methods to harness energy — an industrial revolution. It will enable radically new forms of education, media, and communication — an information revolution. And it will enable altogether new intellectual achievements: unraveling ancient scrolls once thought unreadable, making breakthroughs in scientific and mathematical theory, and creating new kinds of digital and physical art — a renaissance.”

The White House’s grand plan for the US to fully embrace AI, announced on Wednesday, calls for the nascent technology to be deeply integrated across all corners of government and society.

Based on three conceptual pillars — accelerating innovation, building AI infrastructure, and leading in international diplomacy and security — the 23-page document is a blindingly bright green light for America’s tech sector to create a “try-first” culture for AI across US industry.

The ambitious plan identifies many ways that federal agencies can partner with US tech companies to guarantee the country’s global dominance in AI (and ensure healthy streams of federal revenue along the way).

It even calls for government “priority access to computing resources” during national emergencies or significant conflicts. 

But while the initiative signals full speed ahead for using AI in pretty much every facet of the US government, it also awkwardly acknowledges how dimly understood AI’s inner workings and potential risks remain.

“Today, the inner workings of frontier AI systems are poorly understood. Technologists know how LLMs work at a high level, but often cannot explain why a model produced a specific output. This can make it hard to predict the behavior of any specific AI system. This lack of predictability, in turn, can make it challenging to use advanced AI in defense, national security, or other applications where lives are at stake.” 

Plans for thwarting state regulation

While an audacious proposal to impose a 10-year ban on state regulation of AI was killed in Congress via amendments to President Trump’s massive tax bill, the federal government is sketching out a strategy for how to deal with any pesky sub-federal regulations that may slow the roll of AI. (They’re looking at you, California.)

The document says federal AI dollars should be withheld from “states with burdensome AI regulations,” while insisting the federal government should not interfere with states’ “prudent laws” that don’t restrict innovation.

The plan directs the Office of Management and Budget to “consider a state’s AI regulatory climate when making funding decisions and limit funding if the state’s AI regulatory regimes may hinder the effectiveness of that funding or award.”

The Federal Communications Commission is directed to “evaluate whether state AI regulations interfere with the agency’s ability to carry out its obligations and authorities under the Communications Act of 1934.”

Don’t let our enemies get our AI tech, unless we sell it to them

“America currently is the global leader on data center construction, computing hardware performance, and models. It is imperative that the United States leverage this advantage into an enduring global alliance, while preventing our adversaries from free-riding on our innovation and investment.”

The plan confronts a tricky balancing act that has captured the attention of investors backing AI-adjacent companies. 

The government considers America’s leading AI tech to be essentially a national asset that should be kept from our enemies and shared with our friends. But the American companies making this tech (like market leader Nvidia) don’t want to miss out on a massive market like China.

The document equivocates on this point and, with words you could imagine Nvidia CEO Jensen Huang whispering to Trump at Mar-a-Lago, encourages “creative approaches to export control enforcement.”  

Build, baby, build

Despite the hundreds of billions of Big Tech dollars flowing into ever-larger data center projects under construction around the US at breakneck speed, it’s not fast enough for the AI advocates in the Trump administration.

Declaring that America’s regulatory permitting processes make it “almost impossible to build this infrastructure in the United States with the speed that is required,” the plan suggests removing any and all guardrails for these data centers, despite environmental concerns.

The plan notes the massive amounts of electricity that the aging US grid must accommodate for AI data centers, and while it calls for “new sources of energy to power it all,” there is not a single reference to “renewable energy,” such as solar or wind, in the document (though it does reference geothermal power).

Many of the biggest data center projects — such as those being built by Meta — include the creation of new renewable energy plants for the communities where they are built. Trump’s distaste for solar and wind power is reflected in the document.

The initiative also prioritizes domestic chip manufacturing, which was a key accomplishment of the Biden administration. The text mentions the “revamped” CHIPS Program Office, which it says should “continue focusing on delivering a strong return on investment for the American taxpayer.”

Rapid retraining

One of the biggest fears about widespread AI adoption is the potential for massive disruption to the labor force as jobs are lost to automation.

The plan tasks the Department of Labor with addressing this serious challenge. It outlines a “worker-first” AI agenda, which incentivizes AI literacy and skills development, as well as a mandate to “fund rapid retraining for individuals impacted by AI-related job displacement.” The DOL will also have to pilot new approaches to “shifting skill requirements for entry-level roles.”

But while a robot may take your job, it won’t likely face much regulation. The text encourages clearing obstacles for America to lead in the manufacturing of “autonomous drones, self-driving cars, robotics, and other inventions for which terminology does not yet exist.”

NIST’s many roles

There are many calls for the National Institute of Standards and Technology (NIST) to do much of the work in this plan. Among its responsibilities:

  • “Publish evaluations of frontier models from the People’s Republic of China for alignment with Chinese Communist Party talking points and censorship.”

  • “Revise the NIST AI Risk Management Framework to eliminate references to misinformation, Diversity, Equity, and Inclusion [DEI], and climate change.”

  • Measure AI productivity at “realistic tasks” in various industries.

  • Create “automated cloud-enabled” labs for AI-powered scientific testing.

  • Build an AI evaluations ecosystem, and “support development of the science of measuring and evaluating AI models.”

  • Assess deepfake evaluation systems to protect the courts and law enforcement from AI-generated evidence.

“A missed opportunity”

Samir Jain, the VP of policy at the nonprofit Center for Democracy and Technology, took issue with many of the initiatives described in the plan, such as the removal of references to climate change and DEI from NIST’s AI Risk Management Framework. Jain told Sherwood News in an emailed statement:

“The government should not be acting as a Ministry of AI Truth or insisting that AI models hew to its preferred interpretation of reality.”

Jain praised the plan’s calls for improved AI evaluation systems, open-source and open-weight AI models, and focus on AI security, but on the whole described the proposal as a missed opportunity.

“Ultimately, the Plan is highly unbalanced, focusing too much on promoting the technology while largely failing to address the ways in which it could potentially harm people.”

More Power

Anthropic sues the US government

In response to the Pentagon’s unprecedented, punitive determination that Anthropic is a national security supply chain risk, the AI startup has sued the US government.

OpenAI is reportedly working with the Pentagon to hash out guardrails amid Anthropic standoff over AI safety

OpenAI CEO Sam Altman said the company is working with the Pentagon to negotiate safety guardrails for AI models used on the battlefield, even as one of its top competitors, Anthropic, is locked in a standoff with the government.

According to a memo obtained by several media outlets, Altman told staff OpenAI believes “that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines.”

Anthropic, the company behind the AI chatbot Claude, was one of several firms that received a $200 million contract from the Department of Defense for “agentic workflows.”

Since then, tensions between Anthropic and the Pentagon have reportedly risen as the startup insists on surveillance restrictions. The government’s attack on Venezuela last month that led to the capture of President Nicolás Maduro reportedly involved the use of Anthropic’s Claude AI models for planning, which caused the startup to push back on the alleged violation of its terms of use.

Anthropic has until 5:01 p.m. ET on Friday to reach a deal with the Pentagon, which has threatened consequences against the company if it doesn’t allow the government unrestricted use.

Altman’s comments come as the Financial Times reports that executives at Amazon, Google, and Microsoft are being pushed by workers to support Anthropic in its dispute with the Pentagon and adopt similar guardrails as the Claude company in any work they undertake with the US military.

Jon Keegan

Report: Anthropic CEO Amodei meeting with Hegseth at the Pentagon as tensions mount

Anthropic CEO Dario Amodei has been summoned to meet with Defense Secretary Pete Hegseth at the Pentagon on Tuesday, according to a report from Axios. Tensions are running high between the Trump administration and Anthropic, as the startup’s surveillance restrictions on the use of its AI are reportedly causing outrage within the Pentagon.

Last month’s attack on Venezuela that led to the capture of Maduro reportedly involved the use of Anthropic’s Claude AI models for planning, which caused the startup to push back on the alleged violation of its terms of use.

Per the report, the Pentagon is considering effectively blacklisting Anthropic’s AI from government work if it doesn’t capitulate to the administration’s terms.

Antagonizing the Trump administration could expose Anthropic to regulatory hurdles as it races toward an IPO this year. The company recently added former Microsoft CFO Chris Liddell, who served as deputy White House chief of staff in the first Trump administration, to its board.

Sherwood Media, LLC produces fresh and unique perspectives on topical financial news and is a fully owned subsidiary of Robinhood Markets, Inc., and any views expressed here do not necessarily reflect the views of any other Robinhood affiliate, including Robinhood Markets, Inc., Robinhood Financial LLC, Robinhood Securities, LLC, Robinhood Crypto, LLC, Robinhood Derivatives, LLC, or Robinhood Money, LLC. Futures and event contracts are offered through Robinhood Derivatives, LLC.