The U.S. Department of War (DOW) has formally designated artificial intelligence firm Anthropic as a supply-chain risk, marking the first time a domestic company has received such a classification. The decision, effective immediately, prohibits defense contractors from using Anthropic's Claude AI models in Pentagon-related projects, potentially reshaping military-tech partnerships.
The conflict escalated after Anthropic refused to allow its technology to be used for mass surveillance or fully autonomous weapons systems, putting it at odds with DOW leadership. A senior DOW official said the designation reflects concerns over "operational integrity and compliance," though Anthropic CEO Dario Amodei alleges the move is politically motivated, citing donations by rival OpenAI executives to former President Trump's campaigns.
Despite a government-wide ban ordered last week, multiple reports confirm the military employed Claude AI during recent operations against Iran. Anthropic, backed by investors including Amazon and Nvidia, has vowed to challenge the designation in court—a rare direct legal confrontation between Silicon Valley and Washington.
Negotiations mediated by tech investors reportedly stalled this week, with the White House maintaining that contractors cannot dictate usage terms for national security systems. The outcome could set precedents for AI governance as defense agencies increasingly rely on private-sector innovation.
Reference: cgtn.com