Executive Summary
On February 27, 2026, Secretary of War Pete Hegseth announced via social media that Anthropic — the domestic AI company whose model Claude is the only artificial intelligence system currently deployed on U.S. military classified networks — had been designated a "supply chain risk."[1] The designation followed Anthropic's refusal to remove contractual restrictions prohibiting the use of its technology for mass domestic surveillance and fully autonomous weapons systems.[2]
Hours later, the Pentagon signed a contract with OpenAI — whose deal contained the same two restrictions Anthropic had demanded.[3]
This research compilation analyzes the legal foundations of a challenge to the designation. It identifies five independent legal theories — each of which may independently establish a basis for judicial relief — and integrates them into the four-factor preliminary injunction framework under Winter v. Natural Resources Defense Council, Inc., 555 U.S. 7 (2008). The analysis draws on over fifty primary sources including Supreme Court opinions, federal statutes, regulatory filings, official statements, and reporting from major news organizations.
The designation bypassed every procedural safeguard that federal law requires for supply chain risk actions — no written determination, no risk assessment, no congressional notification — and was used to punish a domestic company for speech the government disliked while simultaneously rewarding a competitor for accepting identical terms.
I. The Factual Record
The events leading to the designation unfolded rapidly. In June 2024, Anthropic entered a contract with the Pentagon providing Claude for use in classified intelligence and planning systems.[4] By early 2026, the relationship deteriorated after reports surfaced that the Pentagon wanted Claude deployed for surveillance capabilities that Anthropic's contract explicitly prohibited.[5]
On February 24, 2026, Axios reported that the Pentagon had given Anthropic a 72-hour ultimatum to remove its restrictions on mass surveillance and autonomous weapons or face contract termination.[6] Bipartisan alarm followed: Senators Wicker, Reed, McConnell, and Coons wrote to the Pentagon urging resolution, and Senator Tillis called the situation "sophomoric."[7]
On February 27, rather than simply terminating the contract, the government escalated dramatically. Secretary Hegseth posted on X that Anthropic had been designated a "supply chain risk" — a national security designation previously reserved for entities like Kaspersky Lab (linked to Russian intelligence) and Huawei/ZTE (linked to the Chinese military).[8] The designation did not merely end Anthropic's Pentagon contract. It ordered that every military contractor, supplier, and partner sever "any commercial activity" with Anthropic within six months.[9]
That same day, President Trump posted on Truth Social calling Anthropic "Leftwing nut jobs" who made a "DISASTROUS MISTAKE trying to STRONG-ARM the Department of War" and warned of "the Full Power of the Presidency" with "major civil and criminal consequences."[10]
Anthropic responded that same evening, stating it had "not yet received direct communication from the Department of War or the White House" about the designation and vowing to challenge it in court.[11]
The company learned of its national security designation through social media posts — not through any formal notification, written determination, or administrative process.
II. The OpenAI Paradox
The most damaging fact in the government's record is the simultaneous treatment of Anthropic and OpenAI.
On the same day Anthropic was designated a supply chain risk, OpenAI announced a Pentagon contract for its models to be used in classified defense systems. As Fortune reported, the OpenAI deal contained "the same two limitations" that Anthropic demanded — restrictions on mass domestic surveillance and fully autonomous weapons.[3] OpenAI CEO Sam Altman publicly urged the Pentagon to "offer these same terms to all AI companies."[12]
The government's attempt to distinguish the two arrangements rests on contractual form: Anthropic demanded the restrictions as explicit contract terms, while OpenAI reportedly agreed to "any lawful purpose" with the restrictions embedded separately. But as Fortune noted, it remained "unclear exactly how both these things could be true or how the limitations are stated in the agreement."[3]
The government punished Anthropic for demanding restrictions that it simultaneously accepted from a competitor. This differential treatment — identical substantive terms, opposite governmental responses — is the factual foundation of the equal protection claim and evidence of pretextual motivation.
As former Trump AI policy advisor Dean Ball stated, the broad interpretation of the designation is "almost surely illegal" and constitutes "a psychotic power grab." Peter Harrell, former National Security Council director, observed that the Department of War "can't legally tell contractors don't use Anthropic in private contracts."[3]
III. Statutory & Administrative Law
The statutory case is the cleanest ground for judicial relief. It requires no novel legal doctrine; it turns on a single question: did the government follow its own statute?
The Statute: 10 U.S.C. § 3252
The supply chain risk authority resides in 10 U.S.C. § 3252, which authorizes "covered procurement actions" when the government determines that a "supply chain risk" exists. The statute defines supply chain risk as the risk that "an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert the design, integrity, manufacturing, production, distribution, installation, operation, or maintenance of a covered system."[13]
Critically, the statute imposes mandatory procedural requirements before the authority can be exercised. Section 3252(b) requires: (1) consultation with procurement and relevant officials; (2) a written determination that the authority is necessary to protect national security and that less intrusive measures are not reasonably available; and (3) notification to Congress with a summary of the risk assessment, the basis for the determination, and a discussion of less intrusive measures considered.[13]
None of the § 3252(b) procedural requirements appear to have been satisfied. No written determination has been published. No "less intrusive measures" analysis has been disclosed. No congressional notification with a risk assessment has been documented. The designation was announced via social media.
Statutory Scope: The "Adversary" Problem
Beyond procedure, the designation faces a fundamental definitional problem. Section 3252 addresses threats from "adversaries" who seek to "sabotage" or "subvert" military systems. Anthropic is a domestic company engaged in a contractual disagreement about the ethical boundaries of AI deployment — not a foreign adversary seeking to sabotage military infrastructure. The statute was designed for the Kaspersky/Russia and Huawei/China scenarios, not for punishing a domestic company's negotiating position.[14]
The designation exceeds § 3252's statutory scope in two ways: (1) Anthropic is not an "adversary" as defined by the statute; (2) Hegseth's ban on "any commercial activity" exceeds the statute's reach, which is limited to "covered procurement actions" — source exclusion, qualification failures, and subcontract consent decisions within defense contracting.
The Comparative Record
Every prior supply chain risk action followed extensive formal processes. The Kaspersky Final Determination (Case No. ICTS-2021-002) was published in the Federal Register at 89 FR 52434 following what the Bureau of Industry and Security described as "a lengthy and thorough investigation."[15] The Huawei/ZTE designation followed the FCC's formal process under the Secure and Trusted Communications Networks Act, published at 85 FR 230.[16]
The contrast is stark: multi-year investigations, Federal Register publications, mitigation opportunities, and congressional oversight for foreign adversaries — social media posts for a domestic AI company.
APA Judicial Review
Under the Administrative Procedure Act, 5 U.S.C. § 706, a reviewing court "shall ... set aside agency action" found to be "arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law," "in excess of statutory jurisdiction, authority, or limitations," or "without observance of procedure required by law."[17] The designation appears vulnerable on all three grounds.
The Major Questions Doctrine may apply. In Learning Resources, Inc. v. Trump (Feb. 20, 2026), the Supreme Court held 7–2 that the executive must "identify clear congressional authorization" for extraordinary claims of authority.[18] Stretching a supply chain risk statute designed for foreign adversaries to punish a domestic company's contractual stance is precisely the kind of extraordinary executive claim the doctrine addresses.
IV. Constitutional Claims
While courts prefer to resolve cases on statutory grounds, the constitutional claims serve two strategic purposes: they strengthen the preliminary injunction calculus and provide fallback theories if the government pivots to a different statutory vehicle.
First Amendment — Retaliation
The Supreme Court in Board of County Commissioners v. Umbehr, 518 U.S. 668 (1996), held that "the First Amendment protects independent contractors from the termination or prevention of automatic renewal of at-will government contracts in retaliation for their exercise of the freedom of speech."[19] Under Perry v. Sindermann, 408 U.S. 593 (1972), the government "may not deny a benefit to a person on a basis that infringes his constitutionally protected interests."[20]
Anthropic's speech — publicly refusing to build mass surveillance and autonomous weapons tools — is core political and ethical expression on a matter of profound public concern. Trump's own language provides direct evidence of retaliatory animus: the characterization of Anthropic as "Leftwing nut jobs" and threats of "the Full Power of the Presidency" constitute ideological punishment, not a national security determination.[10]
A federal court in September 2025 found structurally analogous conduct unlawful when it ruled the Trump administration had "used antisemitism as a smokescreen for a targeted, ideologically-motivated assault" in freezing Harvard's funding.[21] The supply chain risk designation may constitute a national security smokescreen for retaliation against Anthropic's speech.
Equal Protection — Class of One
Under Village of Willowbrook v. Olech, 528 U.S. 562 (2000), the Equal Protection Clause gives rise to a cause of action where a party is "intentionally treated differently from others similarly situated" with "no rational basis for such treatment."[22]
The OpenAI paradox satisfies every element of an Olech claim. Both companies are frontier AI firms with Pentagon contracts for classified systems. Both deals contain the same two restrictions. Yet Anthropic was designated a supply chain risk while OpenAI was rewarded with a contract — hours apart, on the same day. The contractual form distinction (explicit terms versus separately embedded restrictions) is exactly the kind of formalistic pretext that rational-basis review is designed to expose.
Due Process — Procedural
Under Mathews v. Eldridge, 424 U.S. 319 (1976), the adequacy of procedural protections is determined by balancing: (1) the private interest affected, (2) the risk of erroneous deprivation through the procedures used, and (3) the government's interest.[23]
All three factors favor Anthropic. The private interest is extraordinarily high — a $200 million contract loss, a government-wide ban, a mandate that all defense contractors sever ties, and damage to a $30 billion funding round. The risk of error is maximized by the near-total absence of process. And the government's own six-month phase-out timeline demonstrates that immediate action was not required — undermining any claim that urgency justified bypassing due process.
Separation of Powers
Under Justice Jackson's concurrence in Youngstown Sheet & Tube Co. v. Sawyer, 343 U.S. 579 (1952), presidential power is "at its lowest ebb" when the executive acts contrary to the expressed or implied will of Congress.[24] Bipartisan congressional alarm — including letters from the chairs and ranking members of the Senate Armed Services Committee — may constitute the "implied will of Congress" against this action.
V. Preliminary Injunction Framework
Under Winter v. NRDC, 555 U.S. 7 (2008), a preliminary injunction requires: (1) likelihood of success on the merits; (2) likelihood of irreparable harm absent relief; (3) balance of equities favoring the movant; and (4) alignment with the public interest.[25]
Factor 1: Likelihood of Success
Five independent legal theories support likely success: (A) APA procedural deficiency — the government did not follow § 3252's mandatory procedures; (B) APA ultra vires — the designation exceeds § 3252's statutory scope; (C) Equal Protection class-of-one — identical substantive terms demanded by Anthropic and accepted from OpenAI, met with opposite governmental responses; (D) First Amendment retaliation — direct evidence of ideological punishment; and (E) Due Process — no notice, no hearing, no opportunity to respond. Courts need only find likelihood of success on one.
The procedural APA theory is virtually airtight: the government either followed the statute's procedures or it did not, and the documented record indicates it did not. Federal courts have recently granted preliminary injunctions on analogous grounds — blocking DOGE access to Social Security Administration systems and OPM records where required procedures were not followed.[26]
Factor 2: Irreparable Harm
The loss of First Amendment rights, even for minimal periods, constitutes per se irreparable injury under Elrod v. Burns, 427 U.S. 347 (1976). The economic harm — hundreds of millions in direct losses plus broader commercial devastation — is not compensable through money damages because the government enjoys sovereign immunity. The harm compounds daily as contractors sever ties during the six-month phase-out.
Factor 3: Balance of Equities
The government's own six-month transition timeline defeats its urgency claim. If the designation were a genuine emergency, the government would not have allowed six months of continued use. A preliminary injunction maintaining the status quo during litigation would preserve the identical arrangement the Pentagon has relied on since June 2024 — costing the government nothing.
Factor 4: Public Interest
The public interest favors protecting First Amendment rights, the rule of law, and preventing the chilling effect of a precedent in which the government can weaponize supply chain risk designations against domestic companies for ideological reasons. The designation signals that any company negotiating contract terms with the government risks being labeled a national security threat — a consequence that extends far beyond Anthropic to the entire defense industrial base.
Senator Markey called the designation "reckless and unprecedented."[27] Senator Slotkin observed that "Congress has not put clear limits around AI's use in the military" — implying these are legislative questions, not ones for the Pentagon to resolve through supply chain risk designations.[28]
VI. Anticipated Government Defenses
National Security Deference
Courts do grant deference to genuine national security determinations. But that deference does not extend to claims that bypass required statutory procedures, designations that exceed statutory authority, or actions motivated by retaliatory animus rather than legitimate security concerns. The Supreme Court in TikTok Inc. v. Garland (2025) applied intermediate scrutiny even to a congressionally authorized national security action involving a foreign adversary with documented state ties[29] — far more process than occurred here.
Political Question Doctrine
The designation raises legal questions — statutory compliance, constitutional rights — that are quintessentially judicial. The government's decision is reviewable under the APA; only purely military operational decisions enjoy political question insulation.
The Internal Contradiction
The government faces a fundamental logical problem. If Anthropic's technology is a genuine supply chain risk — comparable to software from a Russian intelligence-linked company — why allow six months of continued military use? And if the government's actual position is that Claude is so valuable it must compel access via the Defense Production Act, how can it simultaneously be a security threat requiring exclusion?[30] These positions are mutually exclusive, and the contradiction undermines any national security justification.
VII. Conclusion
The February 27, 2026 supply chain risk designation of Anthropic presents an exceptionally strong case for preliminary injunctive relief across all four Winter factors. The statutory grounds alone — procedural deficiencies under § 3252 and the APA — may suffice for relief without reaching constitutional questions. The constitutional claims provide independent and reinforcing theories. The OpenAI paradox supplies devastating factual ammunition across multiple theories simultaneously.
The case is constructed so that no single theory is necessary for relief. The statutory claims suffice without the constitutional claims. The equal protection claim stands independent of the First Amendment claim. Each theory reinforces the others while remaining independently sufficient — hedging against the possibility that the government may abandon its current statutory vehicle and pivot to new legal theories.
The recommended litigation sequence is: (1) emergency TRO grounded in the procedural APA theory plus irreparable harm; (2) expedited PI briefing incorporating all five theories; (3) full merits litigation if preliminary relief is granted. The government's six-month phase-out timeline provides a natural window for judicial intervention.
Sources
- Pete Hegseth, post on X, Feb. 27, 2026; CBS News (Walsh), "Hegseth declares Anthropic a supply chain risk," Feb. 28, 2026.
- Anthropic, "Statement on the comments from Secretary of War Pete Hegseth," Feb. 27, 2026.
- Jeremy Kahn, "OpenAI sweeps in to ink deal with Pentagon as Anthropic is designated a 'supply chain risk,'" Fortune, Feb. 28, 2026.
- Ina Fried & Andrew Lawler, "Pentagon blacklists Anthropic, labels AI company 'supply chain risk,'" Axios, Feb. 27, 2026.
- CNBC (Novet), "Anthropic faces lose-lose scenario in Pentagon conflict," Feb. 27, 2026.
- Andrew Lawler & Ashley Curi, "Trump blacklists Anthropic: Here's what being a 'supply chain risk' means," Axios, Feb. 27, 2026.
- Axios, "Scoop: Top Senate defense leaders intervene in Pentagon-Anthropic AI dispute," Feb. 27, 2026; Axios, "Congress rips Pentagon over 'sophomoric' Anthropic fight," Feb. 26, 2026.
- BIS, "Commerce Department Prohibits Russian Kaspersky Software," June 2024; DWT, "Huawei and ZTE Designated as Threats to National Security," Aug. 2020.
- ABC News (Wang), "Trump orders US government to cut ties with Anthropic," Feb. 27, 2026.
- NPR (Bond & Brumfiel), "OpenAI announces Pentagon deal after Trump bans Anthropic," Feb. 27, 2026.
- Anthropic, "Statement on the comments from Secretary of War Pete Hegseth," Feb. 27, 2026.
- ABC News (Wang), "Trump orders US government to cut ties with Anthropic," Feb. 27, 2026.
- 10 U.S.C. § 3252, "Requirements for information relating to supply chain risk."
- Anthropic, "Statement," Feb. 27, 2026 (arguing § 3252 "was enacted to address threats from foreign adversaries"); Fortune (Kahn), Feb. 28, 2026 (Bullock: no risk assessment or congressional notification).
- Federal Register, "Final Determination, Case No. ICTS-2021-002 — Kaspersky Lab," 89 FR 52434 (June 24, 2024).
- Federal Register, "Protecting Against National Security Threats to the Communications Supply Chain," 85 FR 230 (Jan. 3, 2020); DWT, "Huawei and ZTE Designated," Aug. 2020.
- 5 U.S.C. § 706, "Scope of review."
- Learning Resources, Inc. v. Trump (Feb. 20, 2026); Tax Foundation, "Supreme Court Strikes Down President Trump's Tariffs."
- Bd. of County Comm'rs v. Umbehr, 518 U.S. 668 (1996).
- Perry v. Sindermann, 408 U.S. 593 (1972).
- NPR (Carrillo), "Trump administration illegally froze billions in Harvard funds, judge rules," Sept. 3, 2025.
- Village of Willowbrook v. Olech, 528 U.S. 562 (2000).
- Mathews v. Eldridge, 424 U.S. 319 (1976).
- Youngstown Sheet & Tube Co. v. Sawyer, 343 U.S. 579 (1952) (Jackson, J., concurring).
- Winter v. NRDC, 555 U.S. 7 (2008).
- Democracy Forward, "DOGE's Data Dive Denied: Court Grants Preliminary Injunction," 2025; FedScoop, "Federal judge grants preliminary injunction in challenge to DOGE record access at OPM," 2025.
- Sen. Markey, "Markey Demands Immediate Congressional Action," Feb. 27, 2026.
- Axios, "Congress rips Pentagon over 'sophomoric' Anthropic fight," Feb. 26, 2026.
- TikTok Inc. v. Garland, No. 24-656 (2025).
- Rozenshtein, "What the Defense Production Act Can and Can't Do to Anthropic," Lawfare, Feb. 25, 2026.
This document is an independent legal research compilation. It does not constitute legal advice and is not a substitute for consultation with qualified legal counsel. All factual claims are sourced to publicly available primary documents, official statements, court filings, and reporting by major news organizations. Full citations provided above.