WASHINGTON, Feb 25 — The US Defence Department has given AI company Anthropic until Friday to agree to unrestricted military use of its technology or face being forced to comply under emergency federal powers, a senior official said Tuesday.
Anthropic chief executive Dario Amodei met personally with Defence Secretary Pete Hegseth at the Pentagon on Tuesday, with the company saying he “expressed appreciation for the Department’s work and thanked the Secretary for his service.”
At the heart of the conflict is Anthropic’s refusal to let its Claude models be used for the mass surveillance of US citizens or in fully autonomous weapons systems.
“We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government’s national security mission in line with what our models can reliably and responsibly do,” the company said in a statement.
But after the meeting, the Pentagon delivered a stark ultimatum: accept unrestricted military use by 5.01pm Friday or be compelled to comply under the Defence Production Act.
The Cold War-era law, last used during the Covid pandemic, grants the federal government sweeping powers to compel private industry to prioritise national security needs.
The Pentagon also threatened to label Anthropic a supply chain risk — a designation usually reserved for firms from adversary countries — which could severely damage the company’s reputation and its ability to work with the US government.
The senior Pentagon official pushed back on the company’s concerns, insisting the Defence Department had always operated within the law.
“Legality is the Pentagon’s responsibility as the end user,” the official said, adding that the department “has only given out lawful orders.”
Officials also confirmed that an exchange regarding intercontinental ballistic missiles had taken place between Anthropic and the Pentagon, underscoring the sensitivity of the applications at the heart of the dispute.
The Pentagon confirmed that Elon Musk’s Grok system had been cleared for use in a classified setting, while other contracted companies, OpenAI and Google, were described as close to similar clearances — piling competitive pressure on Anthropic to fall in line.
Anthropic was contracted alongside those companies last year to supply AI models for a range of military applications under a US$200 million agreement.
Anthropic was founded by former OpenAI employees in 2021 on the premise that AI development should prioritise safety — a philosophy that now puts it on a collision course with the Pentagon and the White House. — AFP