The U.S. Defense Department has issued a high-stakes ultimatum to artificial intelligence firm Anthropic, demanding it remove restrictions on military applications of its Claude AI models by 5:01 p.m. EST on Friday, February 27, 2026. Failure to comply could trigger emergency federal powers under the Defense Production Act, according to Pentagon officials.
The confrontation escalated Tuesday when Anthropic CEO Dario Amodei met with Defense Secretary Pete Hegseth at the Pentagon. While both sides described the talks as professional, sources confirm the department rejected Anthropic's ethical safeguards prohibiting mass surveillance of U.S. citizens and fully autonomous weapons systems.
"Legality is the Pentagon's responsibility as the end user," a senior defense official stated, emphasizing the government's position that private companies should not impose operational constraints on national security projects. The department has threatened to designate Anthropic as a supply chain risk—a label typically applied to firms from geopolitical rivals—if no agreement is reached.
The standoff highlights growing tensions between AI safety advocates and defense priorities. Anthropic, founded in 2021 by former OpenAI researchers, secured a $200 million military contract last year alongside competitors like Google and OpenAI. However, its insistence on ethical guardrails now places it at odds with the Pentagon's push for unrestricted AI deployment in sensitive areas including nuclear command systems.
With Elon Musk's Grok AI already approved for classified use and other contractors nearing clearance, pressure mounts on Anthropic to align with defense requirements. Observers warn this precedent could reshape the AI industry's relationship with government security agencies worldwide.
Reference: "U.S. Defense Dept. gives Anthropic Friday deadline to drop AI curbs," cgtn.com