The Trump administration has introduced sweeping new guidelines for civilian artificial intelligence contracts that could significantly reshape how AI companies work with the U.S. government. According to reports, the draft policy requires AI developers to allow “any lawful” government use of their models if they want to secure federal contracts. The proposal follows a major dispute between the Pentagon and AI company Anthropic, which has now been designated a “supply-chain risk.”
The announcement quickly affected market sentiment across the technology sector. Investors reacted cautiously as concerns grew about stricter regulations targeting AI safety restrictions that may conflict with U.S. national security priorities. By late trading, the Nasdaq 100 had dropped 1.51%, while Microsoft shares fell 0.42% and Alphabet declined 0.78% as analysts evaluated the impact of the General Services Administration’s new “irrevocable license” requirement for AI systems used by federal agencies.
The Pentagon’s decision to classify Anthropic as a supply-chain risk represents one of the most aggressive moves yet against a domestic AI firm. Such a label is typically used for foreign technology companies like Huawei and effectively prevents government contractors from using the company’s products. The designation reportedly stems from a months-long disagreement after Anthropic declined to remove safeguards that limit the use of its Claude AI models for mass domestic surveillance or lethal autonomous weapons.
Secretary of War Pete Hegseth defended the decision, arguing that the United States needs “patriotic” technology partners willing to support lawful government operations without imposing strict ethical limitations. Anthropic, however, claims the designation lacks legal justification and has indicated it plans to challenge the ruling in court. Federal agencies have been given a six-month window to transition away from Anthropic technology.
The proposed GSA rules also introduce new requirements aimed at ensuring ideological neutrality in AI systems used by the government. Under the draft guidelines, companies must avoid embedding partisan or ideological bias into their models and disclose whether their systems have been modified to comply with foreign regulations such as the European Union’s AI Act.
Industry analysts believe the policy could accelerate the separation of U.S. AI infrastructure from international regulatory frameworks. Meanwhile, traders are closely watching OpenAI, which reportedly moved quickly to position itself as a replacement provider for Pentagon AI projects after Anthropic’s exclusion.