Why is Altman revising OpenAI’s deal with the Department of War?

Summary

A rushed announcement, elastic surveillance laws, and a battle over red lines — inside the AI industry’s most consequential national security standoff yet

Sam Altman, CEO, OpenAI | Credits: Getty Images

When Sam Altman admitted that OpenAI had “rushed” the announcement of its deal with the U.S. Department of War (DoW), made just hours after the Pentagon moved to blacklist Anthropic, the confession did more than acknowledge a public relations misstep.


Altman said the company is working with the DoW “to make some additions in our agreement to make our principles very clear,” after OpenAI signed a deal to deploy its technology in the department’s classified network.

“One thing I think I did wrong: we shouldn't have rushed to get this out on Friday. The issues are super complex and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy. Good learning experience for me as we face higher-stakes decisions in the future,” Altman wrote on X.

The additions to the agreement include that the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals. “For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information."

OpenAI also said that its services and AI models will not be used by Department of War intelligence agencies such as the NSA. “Any services to those agencies would require a follow-on modification to our contract.”

Altman added that there are many things the technology just isn’t ready for, and many areas where there is little understanding of the trade-offs required for safety. “We will work through these, slowly, with the DoW, with technical safeguards and other methods,” he said.

Yet the controversy is not about whether OpenAI wrote the words “no mass domestic surveillance.” It is about what those words mean in U.S. law — and whether they mean anything at all.


"The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols," the OpenAI statement said.

A legal phrase with a history

At the heart of the dispute lies a three-word phrase: “all lawful purposes.”

The Pentagon reportedly insisted that AI contractors allow the department to deploy their models for all lawful purposes. Anthropic agreed — except for two carve-outs: mass domestic surveillance and fully autonomous lethal weapons.

OpenAI’s contract, by contrast, appears to hinge on compliance with existing U.S. law, including:

  • The Fourth Amendment

  • The National Security Act of 1947

  • The Foreign Intelligence Surveillance Act (FISA) of 1978

  • Executive Order 12333

  • Applicable DoD directives

On paper, that sounds reassuring. In practice, history suggests caution.

    After the September 11 attacks, U.S. intelligence agencies dramatically expanded surveillance programs under precisely these legal authorities. The most prominent revelations came in 2013 when former NSA contractor Edward Snowden disclosed classified documents detailing bulk data collection programs.

Among them:

  • The NSA’s collection of Verizon customers’ telephone metadata on an “ongoing, daily” basis.

  • The National Security Agency’s PRISM program, which gathered user data from major technology companies.

  • Widespread surveillance conducted under Executive Order 12333, which permits intelligence collection outside U.S. territory — even if it incidentally captures communications of Americans.

These programs were justified internally as lawful. Courts later questioned aspects of them. Congress passed modest reforms, but large-scale intelligence authorities remain intact.

    Critics argue that OpenAI’s reliance on “existing law” therefore offers limited protection. Nearly every major surveillance controversy of the past three decades operated under legal memos affirming compliance with FISA, the Fourth Amendment, or EO 12333. In other words, legality has historically proven elastic.

    The metadata loophole and AI amplification

    One particularly contentious area is the government’s ability to purchase commercially available data. Under current U.S. doctrine, agencies can often buy bulk location records, browsing histories, and association data from private data brokers without obtaining a warrant.

    What changes with AI is scale.

    As Anthropic CEO Dario Amodei argued in his public statement, powerful AI systems can assemble fragmented, individually innocuous data into comprehensive behavioural profiles — automatically and at scale.

    According to Amodei, the department has said it will contract only with AI companies that agree to “any lawful use” and remove such safeguards. However, he said that certain applications fall outside the scope of Anthropic’s agreements. Use cases such as mass domestic surveillance and fully autonomous weapons have never been included in Anthropic’s contracts with the department, and “we believe they should not be included now,” Amodei said.

    Under existing law, buying such data may be permissible. Using AI to synthesise it into predictive surveillance systems may also be technically lawful. But whether that aligns with democratic norms is another matter.

    This is the core of Anthropic’s refusal: the law has not yet caught up with AI’s analytical power.

    Sovereign immunity and the limits of contract law

    Even if OpenAI’s contract contains strong language, another legal doctrine complicates enforcement: sovereign immunity.

    Under U.S. law, the federal government cannot be sued without its consent. While contractors can negotiate terms, once a system is deployed within classified networks, challenging government use becomes extremely difficult.

    Courts traditionally defer to the executive branch in matters of national security. When the government invokes statutory powers or emergency authorities, private contractors have limited leverage.

The Pentagon reportedly threatened Anthropic with:

  • Offboarding from classified systems

  • Designation as a “supply chain risk”

  • Invocation of the Defense Production Act, which allows the government to compel companies to prioritise national security requirements


    The Defense Production Act has historically been used to mandate production of critical materials in wartime or crisis scenarios. If applied to AI safeguards, it could theoretically compel modification of model restrictions — though such a move would be unprecedented and legally contested.

    “Regardless, these threats do not change our position: we cannot in good conscience accede to their request,” Amodei said.

    Autonomous weapons: semantics matter

    The dispute over lethal autonomous weapons is equally nuanced.

    OpenAI’s published language states its technology will not independently direct autonomous weapons “in any case where law, regulation, or Department policy requires human control.”

    That is materially different from Anthropic’s demand to prohibit fully autonomous lethal systems outright, at least until technological reliability improves.

    The distinction hinges on “human responsibility” versus “human oversight.”

    Human responsibility can mean accountability after the fact. Human oversight typically requires a human-in-the-loop before lethal force is deployed.

    Current U.S. Department of Defense directives — including its 2023 autonomy policy update — require “appropriate levels of human judgment.” But they do not categorically ban fully autonomous systems in perpetuity.

    If those policies change, OpenAI’s contract appears to track whatever is legally permitted at the time.

    Again, legality is the guardrail.

    Cloud deployment and technical safeguards

    OpenAI has said that its models will be deployed only via cloud infrastructure and will retain its safety stack, including classifiers to monitor outputs.

Technically, this allows OpenAI to:

  • Update safeguards

  • Monitor certain uses

  • Prevent explicitly disallowed behaviours

However, experts note limitations.

Classifiers (automated tools that screen model outputs for disallowed content) cannot determine whether a single query about a U.S. citizen is part of a broader bulk surveillance operation. Nor can they verify whether a human meaningfully reviewed a lethal targeting decision before execution.

    If the government deems an application lawful, contractual classifiers may not override that determination.

    Cloud deployment also does not preclude involvement in the “autonomous kill chain” — the upstream analytical processes that inform battlefield decisions before a weapon is triggered locally.

    The precedent problem

    This standoff sets a precedent beyond OpenAI and Anthropic.

    Historically, there is no clear example of a technology provider successfully blocking the federal government from using a system for security purposes once contractual access was granted.

    Telecommunications firms that resisted warrantless wiretapping after 9/11 ultimately faced legal and legislative pressure. Many were granted retroactive immunity.

    That history looms large.

    If the government interprets a use as lawful, and courts defer, a contractor’s ethical objection may hold little practical force.

    The public backlash has been swift.

On February 28, U.S. uninstalls of the ChatGPT app jumped 295% day-over-day, far above the typical monthly average of around 9%. Just a day earlier, downloads had still been growing 14%. OpenAI, meanwhile, said its user base had grown to 900 million weekly users, including 50 million consumer subscribers.

By Saturday, ChatGPT downloads had dropped 13%, followed by another 5% decline on Sunday. One-star reviews surged 775% in a single day, then climbed another 100%. Five-star ratings fell by roughly 50% over the same stretch.

    At the same time, Anthropic’s Claude saw downloads rise 37% on Friday and 51% on Saturday, eventually climbing to the number one free app position on Apple’s U.S. App Store. Separate data showed Claude’s daily U.S. downloads were 88% higher than ChatGPT’s at the peak of the shift, marking the first time it had overtaken its rival.

Recent Appfigures data shows Claude as the number one free iPhone app in several countries outside the U.S., including Belgium, Canada, Germany, Luxembourg, Norway and Switzerland, and second in the Netherlands and New Zealand. In India, ChatGPT remains ahead at second place overall, while Claude sits eighth.

    What was the Pentagon’s response to Anthropic’s refusal?

    Later, Pentagon spokesperson Sean Parnell said on X that the department has no interest in using AI to conduct mass surveillance of Americans, nor does it want to use AI to develop autonomous weapons that operate without human involvement. “Here’s what we’re asking: Allow the Pentagon to use Anthropic’s model for all lawful purposes,” Parnell said. The Pentagon did not immediately respond to a request for comment on Anthropic’s statement.

    “It is the Department’s prerogative to select contractors most aligned with their vision. But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider,” Amodei said. “Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider,” he added. An Anthropic spokesperson said the company remains “ready to continue talks and committed to operational continuity for the Department and America’s war fighters.” 
