OpenAI’s Sam Altman said that the company is working with the U.S. Department of War (DoW) “to make some additions in our agreement to make our principles very clear.” This comes after OpenAI signed a deal with the Department of War to deploy its technology in the department’s classified network, shortly after the Pentagon’s fallout with Anthropic.
The Pentagon wanted Anthropic to agree that its AI model, Claude, could be used for all lawful purposes by the U.S. military. Anthropic refused to drop explicit safeguards that prevent its technology from being used in mass domestic surveillance of Americans or in fully autonomous weapons without human oversight — two specific red lines the company has made part of its safety policy.
In a post on X (formerly Twitter), Altman said that OpenAI is going to amend the deal as they realised they “rushed” into it. “One thing I think I did wrong: we shouldn't have rushed to get this out on Friday. The issues are super complex and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy. Good learning experience for me as we face higher-stakes decisions in the future,” Altman wrote.
The OpenAI CEO said that there would be additions to the deal, including a clause stating that the AI system shall not be intentionally used for the domestic surveillance of U.S. persons and nationals. “For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.” The statement is in response to the backlash he and OpenAI faced from American and global users alike right after signing the deal with the Pentagon.
OpenAI also said that its services and AI models will not be used by the Department of War’s intelligence agencies, such as the NSA. “Any services to those agencies would require a follow-on modification to our contract.”
Altman also said that the ChatGPT maker wants to work through democratic processes and that it should be the government making the key decisions about society. “We want to have a voice, and a seat at the table where we can share our expertise and to fight for principles of liberty. But we are clear on how the system works (because a lot of people have asked, if I received what I believed was an unconstitutional order, of course I would rather go to jail than follow it).”
He cautioned that there are many things the technology just isn’t ready for, and many areas where there is little understanding of the trade-offs required for safety. “We will work through these, slowly, with the DoW, with technical safeguards and other methods,” Altman said.
“In my conversations over the weekend, I reiterated that Anthropic should not be designated as a supply chain risk (SCR), and that we hope the DoW offers them the same terms we’ve agreed to,” Altman said in the same post. He had previously supported Anthropic in an “Ask Me Anything” session, where a user asked him about his stance after Defense Secretary Pete Hegseth deemed the Claude maker a “supply chain risk.”
In a statement on February 27, Anthropic called the move “legally unsound” and “a dangerous precedent.” The company also laid the groundwork for a potential legal fight over the use of its software. “No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons,” it wrote. “We will challenge any supply chain risk designation in court.”