OpenAI Pentagon AI Contract Adds Safeguards Amid Anthropic Dispute

Source: Focal Foto, CC BY-SA 4.0, via Wikimedia Commons

OpenAI has confirmed that its newly signed contract with the U.S. Department of Defense (DoD) includes expanded safeguards designed to tightly regulate how artificial intelligence is deployed on classified government networks. The announcement came shortly after President Donald Trump directed federal agencies to halt work with rival AI startup Anthropic, with the Pentagon signaling it may label the company a “supply-chain risk.” Anthropic has stated it will challenge any such designation in court.

The OpenAI Pentagon agreement, finalized one day after Anthropic’s setback, reflects growing scrutiny over AI use in national security. OpenAI emphasized that its defense contract contains stricter guardrails than previous classified AI deployments. According to the company, three firm “red lines” are embedded in the deal: its technology cannot be used for mass domestic surveillance, to operate autonomous weapons systems, or to make high-stakes automated decisions without human oversight.

To enforce these AI safety policies, OpenAI said it maintains full control over its proprietary safety stack and deploys its systems through secure cloud infrastructure. Cleared OpenAI personnel remain actively involved in oversight, and contractual provisions provide additional layers of protection. The company also warned that any violation of the agreement by the U.S. government could result in termination of the contract, although it does not anticipate such a scenario.

The Pentagon has recently awarded AI contracts worth up to $200 million each to major technology firms, including OpenAI, Anthropic, and Google, as it seeks to strengthen defense capabilities with advanced artificial intelligence tools. At the same time, defense officials want to retain operational flexibility rather than be constrained by developers' caution about deploying unreliable AI in weapons systems.

Despite the competitive tensions, OpenAI said Anthropic should not be classified as a supply-chain risk, noting it has communicated its position directly to government officials.
