Trump Administration Proposes Tough AI Contract Rules as Anthropic Blacklisted by Pentagon

The Trump administration has introduced sweeping new guidelines for civilian artificial intelligence contracts that could significantly reshape how AI companies work with the U.S. government. According to reports, the draft policy would require AI developers to permit "any lawful" government use of their models as a condition of securing federal contracts. The proposal follows a major dispute between the Pentagon and AI company Anthropic, which has now been designated a "supply-chain risk."

The announcement quickly affected market sentiment across the technology sector. Investors reacted cautiously as concerns grew about stricter regulations targeting AI safety restrictions that may conflict with U.S. national security priorities. By late trading, the Nasdaq 100 had dropped 1.51%, while Microsoft shares fell 0.42% and Alphabet declined 0.78% as analysts evaluated the impact of the General Services Administration’s new “irrevocable license” requirement for AI systems used by federal agencies.

The Pentagon’s decision to classify Anthropic as a supply-chain risk represents one of the most aggressive moves yet against a domestic AI firm. Such a label is typically used for foreign technology companies like Huawei and effectively prevents government contractors from using the company’s products. The designation reportedly stems from a months-long disagreement after Anthropic declined to remove safeguards that limit the use of its Claude AI models for mass domestic surveillance or lethal autonomous weapons.

Secretary of War Pete Hegseth defended the decision, arguing that the United States needs "patriotic" technology partners willing to support lawful government operations without imposing strict ethical limitations. Anthropic, however, contends the designation lacks legal justification and has indicated it plans to challenge the ruling in court. Federal agencies have been given six months to transition away from Anthropic technology.

The proposed GSA rules also introduce new requirements aimed at ensuring ideological neutrality in AI systems used by the government. Under the draft guidelines, companies must avoid embedding partisan or ideological bias into their models and disclose whether their systems have been modified to comply with foreign regulations such as the European Union’s AI Act.

Industry analysts believe the policy could accelerate the separation of U.S. AI infrastructure from international regulatory frameworks. Meanwhile, traders are closely watching OpenAI, which reportedly moved quickly to position itself as a replacement provider for Pentagon AI projects after Anthropic’s exclusion.
