OpenAI has secured a significant agreement with the Pentagon, allowing its artificial intelligence technologies to be integrated into the military’s classified systems. The announcement came just hours after the Trump administration directed federal agencies to stop using AI products developed by rival company Anthropic, marking a notable shift in the AI competition landscape.
The deal was confirmed on Friday by OpenAI CEO Sam Altman, who stated that the agreement includes safety measures similar to those previously sought by Anthropic during its negotiations with the government. Those safeguards reportedly focus on limiting autonomous weapon deployment and preventing the use of AI for mass surveillance activities.
Altman highlighted two key principles underpinning the partnership: a strict ban on domestic mass surveillance and the requirement for human oversight in any military use of force, including systems involving autonomous weapons. He stated that these principles align with existing laws and policies and are formally incorporated into the agreement. OpenAI also plans to assign engineers to work directly with Pentagon teams to ensure secure and responsible implementation of its AI models.
Meanwhile, tensions between OpenAI and Anthropic have intensified. Anthropic has reportedly challenged the government’s classification of it under a “supply chain risk” designation, arguing that such labeling is typically applied to companies linked to foreign security threats. The company intends to contest the decision legally.
Although both companies appear to have sought similar safety-related conditions in their discussions with authorities, the exact distinctions between their agreements remain unclear, as neither the Pentagon nor OpenAI has disclosed full details. Pentagon officials have expressed support for the collaboration, describing reliable AI partnerships as crucial to advancing military technology in the AI era.