OpenAI is reportedly working with the Pentagon to hash out guardrails amid Anthropic standoff over AI safety
OpenAI CEO Sam Altman said the company is working with the Pentagon to negotiate safety guardrails for AI models used on the battlefield. The news comes as one of its top competitors, Anthropic, is at a standoff with the government.
According to a memo obtained by several media outlets, Altman told staff OpenAI believes “that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines.”
Anthropic, the company behind the AI chatbot Claude, was one of several firms that received a $200 million contract from the Department of Defense for “agentic workflows.”
Since then, tensions between Anthropic and the Pentagon have reportedly risen as the startup insists on surveillance restrictions. The government’s attack on Venezuela last month that led to the capture of President Nicolás Maduro reportedly involved the use of Anthropic’s Claude AI models for planning, which caused the startup to push back on the alleged violation of its terms of use.
Anthropic has until 5:01 p.m. ET on Friday to reach a deal with the Pentagon, which has threatened consequences against the company if it doesn’t allow the government unrestricted use.
Altman’s comments come as the Financial Times reports that executives at Amazon, Google, and Microsoft are being pushed by workers to support Anthropic in its dispute with the Pentagon and adopt similar guardrails as the Claude company in any work they undertake with the US military.