The Pentagon has officially designated artificial intelligence company Anthropic a “supply-chain risk,” a move that immediately restricts government contractors from using the company’s AI technology in projects tied to the U.S. military. The decision follows months of tension between the Defense Department and Anthropic over limits the AI developer placed on how its technology can be used in military operations.
According to sources familiar with the situation, Anthropic’s flagship AI model, Claude, has been used to support various defense-related activities, including intelligence analysis and operational planning. One source indicated the technology may have been used in connection with military operations involving Iran, though the full details of its deployment remain unclear.
The new designation effectively prevents contractors working with the Pentagon from integrating Anthropic’s AI tools into defense systems or military workflows. While the government has not disclosed the full scope of the restriction, the “supply-chain risk” label is significant because it has historically been applied to foreign technology companies considered potential security threats. Similar actions were previously taken against Chinese telecommunications giant Huawei, which was removed from U.S. defense supply chains.
Anthropic had previously been among the earliest American AI firms to collaborate with U.S. national security agencies. However, the relationship has become strained due to disagreements over the ethical safeguards the company enforces on its AI systems. Anthropic has maintained strict policies prohibiting the use of its Claude AI for autonomous weapons development or large-scale surveillance within the United States.
The Pentagon has reportedly pushed back against these restrictions, arguing that military use of artificial intelligence should be permitted when it complies with existing U.S. laws and defense guidelines. This policy clash ultimately contributed to the recent decision to designate Anthropic as a supply-chain risk.
Anthropic’s AI technology is also embedded in defense-related platforms such as Palantir’s Maven Smart System, software used by military organizations for intelligence processing and weapons targeting. The platform reportedly incorporates workflows and prompts built on Claude.
Neither Anthropic nor the Defense Department—referred to by the Trump administration as the Department of War—immediately responded to requests for comment regarding the designation. The move highlights growing tensions between AI companies and governments over how advanced artificial intelligence should be used in national security and warfare.

