Anthropic supply-chain risk: DoD designation triggers removals and lawsuits

Pentagon headquarters — Anthropic supply-chain risk prompting Claude removals from sensitive systems


By Agustin Giovagnoli / March 17, 2026

The Pentagon’s designation of Anthropic as a supply-chain risk signals an aggressive shift in how the U.S. government governs military AI through procurement. The move has triggered near-term removals from sensitive systems, a six-month wind‑down for many federal and contractor deployments, and a legal fight that could shape vendor guardrails across defense programs [5][6][3][2].

Summary: Anthropic supply-chain risk and immediate fallout

The Defense Department designated Anthropic a supply‑chain risk following the collapse of contract talks, initiating off‑ramping across key military and many civilian systems and advising a full transition off Claude for federal‑facing work within six months [5][6]. An internal Pentagon memo instructed commanders to remove Anthropic AI from sensitive and classified systems, where it had reportedly been the only deployed large model on some classified networks [3]. Anthropic has responded with a lawsuit challenging the designation [2].

How the conflict began: contract language and the Hegseth memo

The dispute started when the Pentagon sought to amend an existing contract to allow “any lawful use” of Claude. Anthropic resisted, maintaining high‑level prohibitions on applications such as fully autonomous weapons and mass domestic surveillance [1]. In January, Defense Secretary Pete Hegseth issued a memo directing that all Pentagon AI contracts include broad “any lawful use” language and that preferred models lack vendor usage constraints or technical guardrails that would block otherwise lawful military operations [1]. OpenAI later signed a separate defense deal more aligned with this policy, drawing a clear contrast with Anthropic’s stance [1].

What the supply‑chain designation means in practice

The supply‑chain risk designation is typically reserved for foreign or compromised vendors; here it has been used to force rapid off‑ramping of Claude across key military and many civilian systems [5]. Commanders have been ordered to remove Claude from sensitive and classified systems [3]. Government contractors have been advised to complete the transition off Claude for any federal‑facing work within a six‑month wind‑down period [6].

Near-term steps cited in legal advisories include inventorying deployments, freezing new integrations, and preparing substitute models to avoid service disruptions tied to the DoD ban on Anthropic products [6]. The operational reality is a staggered removal, starting with sensitive environments and moving through contractor-managed systems on federal programs [3][6].

Legal fight: Anthropic’s lawsuit and government defenses

Anthropic has sued the Defense Department, alleging unlawful retaliation for refusing to relax safeguards and arguing that supply‑chain authorities were misused in this case [2]. The complaint challenges the legal basis for treating the company as a supply-chain risk, a label that triggers broad exclusion from defense systems and government work [2][6].

Policy analysis warns that relying on procurement rules to set operational boundaries for AI can weaken oversight. A Lawfare review argues that “military AI policy by contract” lacks democratic checks and risks marginalizing vendor safety guardrails [4].

Broader implications for AI governance and procurement

The Hegseth memo’s “any lawful use” standard reflects a military AI procurement policy that favors operational flexibility over vendor-imposed limits [1]. Think tank analysis calls for Congress to step in, pointing to the DoD’s conflict with Anthropic, set against its deal with OpenAI, as a signal that legislative guardrails may be needed [1]. Lawfare’s critique adds that contract-centric governance can sideline transparent rulemaking and erode accountability [4]. For vendors, the episode is a warning that safety restrictions can collide with contracting preferences and trigger systemic exclusion [5][6].

Practical next steps for contractors and tech leaders

  • Inventory where Claude is embedded across programs, including data flows and access scopes [6].
  • Freeze new deployments and begin controlled replacement with compliant alternatives to avoid mission impact from orders to remove Claude from classified systems [3][6].
  • Update subcontractor terms, SLAs, and security plans to reflect the wind‑down and substitute models [6].
  • Map any export-control or data residency implications during migration [6].
  • Engage counsel to interpret the designation’s scope and adjust bid strategies and teaming agreements [6].
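The inventory step above can be partially automated. The sketch below is a minimal, illustrative script that scans a directory tree for likely Claude integrations; the search patterns (a `claude-` model-ID prefix, the Anthropic API hostname, a credential environment variable) are assumptions about common integration signatures, not an exhaustive or authoritative list, and real inventories would also need to cover data flows and access scopes.

```python
import re
from pathlib import Path

# Illustrative patterns that often indicate a Claude integration.
# These are assumptions -- extend them to match your own codebase's
# configuration and deployment conventions.
CLAUDE_PATTERNS = [
    re.compile(r"claude-[\w.\-]+"),      # model identifiers, e.g. claude-3-...
    re.compile(r"api\.anthropic\.com"),  # API endpoint references
    re.compile(r"ANTHROPIC_API_KEY"),    # credential environment variable
]

def inventory_claude_usage(root: str) -> dict[str, list[str]]:
    """Return {file_path: sorted unique matches} for files under `root`
    that reference Claude or Anthropic endpoints/credentials."""
    findings: dict[str, list[str]] = {}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(encoding="utf-8")
        except (UnicodeDecodeError, OSError):
            continue  # skip binary or unreadable files
        hits = [m.group(0) for pat in CLAUDE_PATTERNS for m in pat.finditer(text)]
        if hits:
            findings[str(path)] = sorted(set(hits))
    return findings
```

A scan like this only surfaces candidates; each hit still needs human review to confirm whether it represents a live deployment covered by the wind‑down.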


What to watch next

Key signals include milestones in Anthropic’s lawsuit against the Defense Department [2], any updates to procurement guidance following the Hegseth memo’s “any lawful use” directive [1], further clarity on contractor timelines [6], and whether additional vendors face a similar supply-chain risk designation [5]. Watch vendor negotiations, and how model guardrails are treated across solicitations relative to the OpenAI defense deal [1].

Appendix: timeline and source links

  • DoD pushes to amend Anthropic contract to allow “any lawful use”; Anthropic maintains prohibitions on autonomous weapons and mass domestic surveillance [1].
  • Hegseth memo directs inclusion of “any lawful use” in AI contracts and favors models without vendor guardrails [1].
  • OpenAI signs a defense deal more aligned with DoD’s policy [1].
  • DoD designates Anthropic a supply‑chain risk; off‑ramping begins across military and many civilian systems [5].
  • Internal memo orders removal of Claude from sensitive and classified systems [3].
  • Government contractors advised to transition off Claude within six months [6].
  • Anthropic sues over the designation, alleging retaliatory misuse of authority [2].

Sources

[1] The Department of Defense’s Conflict With Anthropic and Deal With OpenAI Are a Call for Congress to Act
https://www.americanprogress.org/article/the-department-of-defenses-conflict-with-anthropic-and-deal-with-openai-are-a-call-for-congress-to-act/

[2] Anthropic Sues Department of Defense Over Supply-Chain-Risk Designation
https://www.wired.com/story/anthropic-sues-department-of-defense-over-supply-chain-risk-designation/

[3] Internal Pentagon memo orders military commanders to remove Anthropic AI from key systems
https://www.cbsnews.com/news/pentagon-ai-anthropic-memo-remove-from-key-systems/

[4] Military AI Policy by Contract: The Limits of Procurement as Governance
https://www.lawfaremedia.org/article/military-ai-policy-by-contract–the-limits-of-procurement-as-governance

[5] Pentagon declares AI company who sought to restrict military usage supply-chain risk
https://www.syracuse.com/politics/2026/03/pentagon-declares-ai-company-who-sought-to-restrict-military-usage-supply-chain-risk.html

[6] U.S. Government Bans Use of Anthropic Products: What This Means for Government Contractors and AI Strategy
https://www.taftlaw.com/news-events/law-bulletins/us-government-bans-use-of-anthropic-products-what-this-means-for-government-contractors-and-ai-strategy/
