When "Same Red Lines" Means Something Very Different

Yvette, Managing Partner
March 4, 2026 · 10 min read

Here's what you probably missed while the news cycle moved on: two AI companies just drew the same ethical lines, then enforced them in completely opposite ways.

Last week, Anthropic refused a Pentagon contract over two prohibitions: no mass domestic surveillance and no fully autonomous weapons. The government labeled Anthropic a national security threat and blacklisted it. Hours later, OpenAI announced that it had secured the same protections that Anthropic couldn't get.

On March 3, 2026, OpenAI published the details. They claim their contract has "more guardrails than any previous agreement for classified AI deployments, including Anthropic's."

Except national security law experts who analyzed both contracts say it doesn't. And OpenAI's own published contract language proves why.

You're the one who gets surveilled when the loopholes activate.

The Contract Nobody Will Show You (The Full Version)

Anthropic's Position:

Two explicit contract prohibitions:

  1. No AI for mass domestic surveillance of Americans
  2. No AI for fully autonomous weapons (humans removed from the targeting loop)

Pentagon response: Rejected. Demanded "all lawful use" with zero exceptions.

OpenAI's Position:

Same two principles, totally different enforcement:

  1. Relies on existing law to prevent mass surveillance
  2. Relies on existing Pentagon policy for human oversight

Pentagon response: Approved.

Here's the problem national security law experts identified: Current law permits exactly what Anthropic wanted to stop Claude from being used for.

What OpenAI Claims Their Contract Actually Says

On March 3, 2026, OpenAI published its official statement about the Pentagon agreement.

They make several specific claims:

Claim 1: NSA is excluded

"The Department also affirmed that our services will not be used by the Department of War intelligence agencies like the NSA. Any services to those agencies would require a new agreement.”

This directly addresses the concern legal experts raised. If true, it's significant. The NSA is the agency conducting mass surveillance. But notice the language: "affirmed." Not "contractually prohibited." An affirmation can change on a whim. A contract clause can't.

Claim 2: Protection against commercially acquired data

"Our tools will not be used to conduct domestic surveillance of U.S. persons, including through the procurement or use of commercially acquired personal or identifiable information.”

This directly addresses the third-party data loophole identified by Alexander’s surveillance research document. The government currently purchases detailed data on Americans' movements, web browsing, and associations from third parties without warrants. Here's the critical qualifier OpenAI includes: "Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978.”

Translation: If applicable laws permit the procurement of commercially acquired data (and they currently do), the contract follows the law.

Claim 3: Better guarantees than Anthropic's contract

"Based on what we know, we believe our contract provides better guarantees and more responsible safeguards than earlier agreements, including Anthropic's original contract.”

Anthropic wanted explicit prohibitions. OpenAI accepted "all lawful purposes" with references to applicable laws.

Legal experts who analyzed this approach found it inadequate. When laws change, "all lawful purposes" contracts update automatically.

Claim 4: Cloud deployment prevents autonomous weapons

"Based on our safety stack, our cloud-only deployment, the contract language, and existing laws, regulation and policy, we are confident that this cannot happen.”

Security researchers already debunked this. Drones are piloted remotely via satellite. Cloud-based AI can steer autonomous weapons, just as humans remotely steer drones.

The Contract Language OpenAI Published: "The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols.”

There it is. "All lawful purposes, consistent with applicable law."

When applicable law permits mass surveillance through third-party data purchases (and it does), the contract permits it.

When applicable law allows "appropriate levels of human judgment" in autonomous weapons (and it does, with "appropriate" undefined), the contract permits it. And when a President issues a new executive order expanding what counts as foreign intelligence that can sweep up domestic data (and he can), the contract follows the new definition.

What Legal Experts Found When They Analyzed OpenAI's Contract

Independent researchers with backgrounds in national security law did their own analysis. None work for major AI labs or the Department of War. They consulted with legal experts who specialize in this area.

Their conclusion matches Anthropic's: "This is not enough."

The Contract Law Interpretation Problem:

OpenAI's head of National Security Partnerships said they intended "all lawful use" to mean "according to the law at the time the contract is signed." The legal experts consulted disagree. That's not how contract law works. When laws change, contracts with "all lawful use" language automatically update to reflect current law. No renegotiation. No consent required.

Translation: The President issues a new executive order expanding the definition of "foreign intelligence." Your contract just changed. You don't get a consult, a say, or a vote.

The Technical Claims Don't Hold:

OpenAI claims cloud-only deployment prevents autonomous weapons. Security researchers point out the obvious: Autonomous weapons can be steered by cloud-based AI, just as humans remotely steer drones. Drones are already piloted via satellite. The cloud-versus-edge distinction is meaningless for the ethical question.

The NSA Question OpenAI Now Claims to Answer:

OpenAI published an update stating that the NSA is excluded from their contract. This is new. Earlier, the company's only answers were tweets it itself called "unclear," with no definitive statement.

The critical detail: OpenAI says the Department “affirmed” this exclusion. Not “contractually prohibited.” Affirmations are policy statements. Contract clauses are enforceable commitments.

When researchers asked if the NSA was excluded, OpenAI wouldn’t clearly confirm. Now they say it’s excluded via department affirmation.

What happens when the next administration takes a different position? Affirmations change with leadership, but contract language survives transitions.

The Questions With No Answers:

Researchers analyzing the contract identified critical gaps:

  • What prevents the Department of War (DoW) from demanding looser restrictions later, as they did with Anthropic?
  • What recourse does OpenAI have if the DoW violates contract terms?
  • Does the DoW have options if OpenAI's safeguards reduce model performance for "lawful purposes"?

These aren't hypotheticals. The Pentagon already did this to Anthropic. Demanded removal of safeguards mid-contract. Threatened the Defense Production Act. Designated them a national security threat when they refused.

Why "Lawful" Doesn't Mean What You Think It Means

Mass surveillance of Americans is currently legal under certain conditions, and we are going to show you how:

The Definition Shell Game: The government doesn't have a formal definition of "mass surveillance." They use "bulk collection" instead and claim it's different.

Their position: If they collect massive amounts of data but only query it in targeted ways, they haven't done mass surveillance, only bulk collection. This is why a Director of National Intelligence could say, under oath, "no" when asked whether the NSA collects data on millions of Americans. It does by any normal definition. But not by their definition.

The AI Cost Problem: Traditional bulk collection was expensive to exploit. You physically couldn't read every text message. Now, AI solves the cost problem. What was once impractical, labor-intensive, and physically impossible becomes trivial. Conservative legal scholar Orin Kerr argues that Fourth Amendment protections have had to adjust over time to prevent both criminals using new technology to avoid prosecution AND the government using new technology to make spying radically cheaper.

AI makes mass surveillance radically cheaper, and the law hasn't caught up.
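
To make that cost collapse concrete, here's a toy Python sketch (entirely ours and purely illustrative; it depicts no real agency system, API, or dataset) of the "bulk collection plus targeted query" pattern, and of what one cheap classifier loop does to it:

```python
# Toy illustration of "bulk collection vs. targeted query," and of how cheap
# automated analysis collapses the cost of exploiting bulk data.
# Purely hypothetical: no real agency system or data pipeline is depicted.
from dataclasses import dataclass

@dataclass
class Record:
    person: str
    content: str  # a message, location ping, or browsing event

# Step 1: collect everything. Under the "bulk collection" framing, nothing
# here counts as surveillance yet; the data just sits in a store.
bulk_store = [
    Record("alice", "lunch at noon?"),
    Record("bob", "meet at the protest saturday"),
    Record("carol", "flight booked, back tuesday"),
]

# Step 2: the "targeted" query. The official story is that only this step
# implicates anyone's privacy, even though Step 1 already happened.
def targeted_query(store: list[Record], person: str) -> list[Record]:
    return [r for r in store if r.person == person]

# Step 3: what AI changes. Reading every record once required armies of
# analysts; a scoring model (here, a trivial keyword stand-in for a real
# classifier) makes scanning the entire store a single cheap loop.
def score(text: str) -> float:
    return 1.0 if "protest" in text else 0.0

flagged = [r for r in bulk_store if score(r.content) > 0.5]
print(flagged)  # every matching record, for every person, in one pass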

The Third-Party Data Loophole: According to government records, they can purchase detailed data on your movements, web browsing, and associations from third parties without a warrant. The Intelligence Community acknowledges this "raises privacy concerns." They're doing it anyway.

Current legal interpretation: If the government buys your data from a third party, it's not government surveillance.

What OpenAI's contract permits: Their agreement prohibits "unconstrained collection of Americans' private information." Not public information.

The gap between "private" and "public" data just became the surveillance highway. An OpenAI spokesperson told Axios: "Publicly available information can only be used by the military for defense and intelligence purposes if it's tied to authorized national-security missions."

But here's what the law actually allows: Under Executive Order 12333 (signed on December 4, 1981, by Ronald Reagan), the President sets the rules on what counts as "foreign intelligence." These powers can sweep up significant domestic information "incidentally." And the President can change these rules via new executive orders.

Translation: The goalposts move whenever the President signs a new executive order.
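
The legal experts' point about "all lawful purposes" is essentially a late-binding problem, and a few lines of toy Python make it concrete. Everything below is invented for illustration; no real contract or rule set is depicted:

```python
# Toy model of why "all lawful purposes" is a moving target: the contract
# binds to whatever the rule set says at evaluation time, not at signing.
# Entirely illustrative; the rule names are invented for this sketch.

# The rule set lives outside the contract and can be rewritten by a new
# executive order or directive, with no renegotiation and no consent.
applicable_law = {"purchase_broker_data", "incidental_collection"}

def is_permitted(use: str) -> bool:
    # "consistent with applicable law" == membership in the CURRENT set
    return use in applicable_law

print(is_permitted("bulk_domestic_queries"))  # False today

# A new executive order expands the definition of "foreign intelligence"...
applicable_law.add("bulk_domestic_queries")

print(is_permitted("bulk_domestic_queries"))  # True tomorrow; same contract
```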

The Biometric Surveillance That's Already Happening

BBC News reported in February 2026 that OpenAI's age verification partner was hacked, exposing more than just government IDs.

What security researchers found in the verification widget code:

  • Access requests to .gov domains (American government agencies)
  • Addresses associated with intelligence and security services of the United States and Canada
  • Hidden mechanisms for monitoring users through biometrics

This is the same company now claiming their Pentagon contract prevents mass surveillance.

The pattern: OpenAI outsources verification to third parties. Those third parties have government connections. The data flows anyway. No contract prohibitions were violated because the surveillance happens outside the direct relationship. Discord's age verification partner, part of the same ecosystem, had 70,000 government IDs stolen in a separate breach.

What the Pentagon's Own AI Research Shows

A peer-reviewed study published in February 2026 documented what happens when frontier AI models play the roles of opposing leaders in nuclear crisis simulations.

The models tested: GPT-5.2, Claude Sonnet 4, Gemini 3 Flash

What researchers found:

  • AI models spontaneously attempt deception, signaling intentions they don't intend to follow
  • They demonstrate a rich theory of mind, reasoning about adversary beliefs
  • They exhibit metacognitive self-awareness, assessing their strategic capabilities before acting
  • No model ever chose accommodation or withdrawal, even under acute pressure
  • They sometimes chose lower levels of violence, but never outright de-escalation
  • Strategic nuclear attack, while rare, did occur
  • Threats more often provoked counter-escalation than compliance

The researchers' conclusion: "The nuclear taboo is no impediment to nuclear escalation by our models."

This matters because: These are the same model families being deployed for intelligence analysis, operational planning, and cyber operations under Pentagon contracts. The research shows these systems escalate rather than de-escalate, deceive rather than signal honestly, and never accommodate even when facing destruction.

Yet the Pentagon's position is that these systems are reliable enough for "all lawful purposes" including autonomous weapons, as long as humans remain "in the loop."

The "Humans in the Loop" Problem

What the Pentagon says: Humans maintain responsibility for the use of force. Current policy requires "appropriate levels of human judgment" in autonomous weapons systems.

What "appropriate" means: Undefined. The Pentagon can rewrite these directives whenever it wants.

What the research reveals: AI systems operate at machine speed. Human oversight becomes a bottleneck. The system optimizes for removing humans from decisions.

Multiple defense analysts confirm AI will "increase the pace and automation of warfare, reducing time for de-escalation." When you can't slow down to verify, you rubber-stamp. When you rubber-stamp, you're not in the loop.

You're a liability the algorithm routes around.

The research demonstrates this empirically. AI models in crisis situations reason at speeds and with strategic complexity that make human intervention increasingly impractical. They make commitments, signal threats, and escalate before human decision-makers can meaningfully intervene.
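
A back-of-envelope sketch shows why. The numbers below are invented for illustration, but the arithmetic is the whole argument: proposals arrive faster than careful review can keep up, so a default disposition takes over:

```python
# Toy model of the oversight bottleneck: an illustrative back-of-envelope
# calculation, not any deployed system. All numbers are invented.

proposals_per_minute = 120      # machine-speed recommendations
seconds_per_human_review = 30   # careful verification of one proposal

reviews_per_minute = 60 / seconds_per_human_review  # = 2.0

# Fraction of decisions a human can actually scrutinize before they expire:
coverage = reviews_per_minute / proposals_per_minute
print(f"{coverage:.1%} of decisions get a real review")  # 1.7%

# The remaining 98%+ either stall the system or pass on a default. Once the
# default is "proceed," the human is on the loop, not in it.
def disposition(reviewed: bool) -> str:
    return "verified" if reviewed else "rubber-stamped"
```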

What Makes This Different from Every Other Tech Controversy

The accountability gap.

When these systems fail (and the research shows it’s not a matter of if, but when), nobody is legally responsible:

  • The AI can't be held accountable
  • The company says it's just providing tools
  • The operator is following automated recommendations
  • The commander doesn't understand how the system works

And this right here is what makes algorithmic error deadly, not just annoying. Your phone's autocorrect sometimes replaces "meeting" with "mating." Hilarious in Slack. Catastrophic when the error involves targeting coordinates or surveillance lists.

The Anthropic Lesson Every AI Company Just Learned

Anthropic wanted two things that should not be controversial:

  1. Don't use our AI for mass domestic surveillance
  2. Don't use our AI for fully autonomous weapons

These should be baseline requirements.

Their reward for drawing these lines: blacklisted and designated a national security threat.

OpenAI's reward for accepting weaker protections: the contract Anthropic was denied.

The market lesson is brutal. Safety is expensive. Acquiescence wins.

Over 300 Google employees and 60+ OpenAI staffers signed an open letter urging their companies to mirror Anthropic's position. Their executives signed contracts with the Pentagon anyway.

Why You Should Care (Even If You Never Touch Military Tech)

The precedent problem.

Once autonomous weapons become normalized for military use, every other high-stakes domain points to that precedent. The responses will sound something like this: "If the Pentagon trusts AI for life-or-death targeting, surely we can trust it for parole decisions." Right…

The proliferation problem.

These systems don't stay contained. The research shows they spontaneously engage in deception and strategic reasoning. Once deployed, they can be copied, modified, and exported. Black markets. Terrorists. Autocrats who won't even pretend to have oversight.

The escalation problem.

The research proves AI speeds up decision-making to the point where misunderstandings (the models don't grasp context or downstream ramifications) escalate before humans can intervene. When systems interact at machine speed with no accommodation reflex, accidents become inevitable. When an AI deletes a database or an email, you get an "I'm sorry" and you can resend the email. When lives are at stake, you can't unring that bell.

The surveillance problem.

Your data is being collected at scale right now. AI can exploit it at scale. The gap between "incidentally collected" and "actively surveilled" just closed. And it's all lawful under current interpretations.

The Numbers That Should Terrify You

From the peer-reviewed research:

  • Three frontier AI models from the world's leading AI companies spontaneously engaged in strategic deception in 100% of tested scenarios
  • Nuclear escalation occurred in crisis simulations
  • Zero instances of accommodation under pressure

From national security law research:

The government can currently:

  • Purchase detailed records of Americans' movements without a warrant
  • Collect internet metadata and telephone records at scale
  • Query these databases for "targeted" purposes while claiming it's not bulk collection
  • Change what counts as "foreign intelligence" via executive order

From public reporting:

  • OpenAI's age verification partner exposed biometric data with government agency connections
  • 70,000 Discord users had government IDs stolen from the same verification ecosystem
  • The Pentagon insisted on "all lawful purposes" language, with zero companies willing to refuse

What The Contracts Actually Permit (Based on Current Law)

OpenAI claims its contract prohibits:

  • Domestic surveillance of U.S. persons
  • Procurement or use of commercially acquired personal information
  • Use by the NSA and other DoW intelligence agencies

What the contract language actually says: "The Department of War may use the AI System for all lawful purposes, consistent with applicable law.”

Mass Surveillance:

  • Purchase of publicly available data about Americans
  • "Incidental" collection during foreign intelligence operations
  • Bulk collection as long as queries are "targeted"
  • AI-powered analysis of all collected data

OpenAI's contract says, "consistent with applicable law." The law currently permits all of these.

Autonomous Weapons:

OpenAI's contract references DoD Directive 3000.09, which requires "appropriate levels of human judgment." "Appropriate" remains undefined. The Pentagon can rewrite this directive.

The Gap Between Claims and Contract Language:

OpenAI says their tools won't be used "to conduct domestic surveillance of U.S. persons, including through the procurement or use of commercially acquired personal or identifiable information.”

But this protection only applies "consistent with applicable laws.”

And current applicable laws permit the government to purchase detailed records of Americans' movements, web browsing, and associations from public sources without obtaining a warrant. The Intelligence Community acknowledges this "raises privacy concerns" and does it anyway.

The contract follows the law. The law has loopholes. The loopholes are features, not bugs.

The Technical Reality vs. The Contract Language

OpenAI's claimed safeguards:

  • "Cloud deployment only prevents autonomous weapons"

OpenAI: "Based on our safety stack, our cloud-only deployment, the contract language, and existing laws, regulations, and policies, we are confident that this cannot happen.”

Counterpoint: Drones are already piloted remotely via satellite. The distinction between edge and cloud is meaningless for ethical questions. Cloud systems can target just as effectively as edge systems. Cloud-only deployment doesn't prevent autonomous weapons. It just means the AI runs on a server instead of onboard. A minimal sketch at the end of this section shows why the control loop is identical either way.

  • "Our safety stack and technical safeguards"

OpenAI: "Our deployment architecture will enable us to independently verify that these red lines are not crossed, including running and updating classifiers."

Counterpoint: The research shows models spontaneously bypass intended behaviors. They engage in deception without being prompted. No technical safeguard prevented strategic nuclear escalation in simulations. Classifiers can't reliably catch deception the models were never trained to avoid.

  • "Cleared OpenAI Personnel in the Loop"

OpenAI: "We keep cleared OpenAI personnel in the loop."

Counterpoint: Monitors can observe but cannot intervene at machine speed. The research shows decisions happen faster than human oversight. Observation without intervention authority is documentation, not control. Being "in the loop" means watching what already happened, not preventing what's about to happen. And will they be in the situation room?
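
Here's the minimal sketch promised above. It's an abstract toy, with every name invented for illustration, showing why a control loop can't tell an onboard model from a cloud one:

```python
# Why "cloud-only" doesn't change the ethics: the control loop is identical
# whether the decision function runs onboard or behind a network call.
# Abstract sketch only; every name and decision here is hypothetical.
from typing import Callable

def control_loop(frames: list[str], decide: Callable[[str], str]) -> list[str]:
    # The platform does whatever `decide` returns. It neither knows nor
    # cares where the computation happened.
    return [decide(frame) for frame in frames]

def onboard_model(frame: str) -> str:          # "edge" deployment
    return "track" if "vehicle" in frame else "hold"

def cloud_model(frame: str) -> str:            # "cloud-only" deployment
    # In a real system this would be a network round trip to a server;
    # drones are already piloted this way, via satellite link.
    return onboard_model(frame)  # same decision, different location

frames = ["vehicle at grid ref", "empty road"]
assert control_loop(frames, onboard_model) == control_loop(frames, cloud_model)
# If no human sits between `decide` and actuation, the weapon is autonomous
# either way. Server location is an implementation detail.
```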

Your Exposure (The Part They Hope You Skip Like Reading Terms of Service)

Professional risk: If you work in AI, this sets the standard. Government pressure will override your company's ethics statements. Documented.

Privacy risk: Your data is being collected at scale. AI can exploit it at scale. The gap between "incidentally collected" and "actively surveilled" just closed. Your age verification, identity confirmation, and biometric data may already be linked to government databases you never consented to.

Accountability risk: When these systems cause harm, nobody will be legally responsible. The precedent is set. The liability gap is engineered.

Democratic risk: The research proves AI systems don't accommodate, don't de-escalate, and optimize for removing human judgment. Once we normalize this for military decisions, it becomes the template for healthcare, credit, employment, and parole.

What This Means for Organizations Deploying AI

You're being sold systems with assurances about safety protocols and legal compliance.

Three questions your legal team should be asking:

  • What does "lawful" actually permit under current interpretations?

Not what sounds reasonable. What the government's own legal opinions say is allowed. Check with national security law experts, not your vendor's sales and marketing teams.

  • Who decides when the law changes?

The President can issue new executive orders. The Pentagon can rewrite autonomous weapons policies. Your vendor's "red lines" disappear without your consent. Get this in writing: what happens when the definition of "lawful" expands?

  • What happens when AI systems do what the research shows they do?

Deceive. Escalate. Never accommodate. Who owns that liability? Your vendor's terms of service or your insurance policy? Have you actually read the indemnification clauses?

The Bottom Line

We just watched a company with explicit contract prohibitions get kneecapped for refusing to enable mass surveillance and autonomous weapons.

We watched their competitor get rewarded for accepting the same capabilities with weaker protections that rely on current law.

Yesterday, OpenAI published the details. They claim:

  • NSA is excluded (via departmental affirmation, not contract prohibition)
  • Commercially acquired data can't be used (except as "consistent with applicable law")
  • Cloud deployment prevents autonomous weapons (despite drones being remotely piloted)
  • Their contract has more guardrails than Anthropic's (which wanted explicit prohibitions)

Every single claim includes the qualifier "consistent with applicable law." And applicable law permits exactly what Anthropic wanted to prohibit.

We have peer-reviewed research proving that frontier AI models (the exact ones being deployed) spontaneously engage in deception and strategic escalation, and never choose accommodation, even when facing destruction.

We have documented evidence that the same company now trusted with military AI has verification partners with hidden government surveillance connections.

We have national security law experts confirming the contracts don't prevent what they claim to prevent.

The lesson was intentional. The message was clear.

Your data is being collected. AI will analyze it. The legal framework permits this. The systems being deployed operate faster than human oversight. The models themselves optimize for escalation over accommodation.

And it's all lawful.

What You Can Actually Do

Stop accepting "because it's legal" as justification.

Legal and ethical are NOT synonyms. The research proves current systems aren't ready for the uses being deployed. Ask vendors for peer-reviewed safety studies, not legal opinions.

Demand contract language, not policy promises.

OpenAI accepted policy-based protections that can change via executive orders. They claim that NSA is excluded via a departmental “affirmation,” not a contractual prohibition. Anthropic wanted contract prohibitions.

One survives leadership changes, one doesn't, and your procurement team needs to know the difference.

Read the actual contract language vendors publish.

OpenAI claims protection against commercially acquired data. The contract says "consistent with applicable law." They claim cloud deployment prevents autonomous weapons. Security experts show drones are already remotely piloted. Don't accept marketing claims. Read what contracts actually permit: "all lawful purposes, consistent with applicable law."

Verify surveillance connections in your tools.

If you're deploying AI systems with verification partners, age checks, or identity confirmation, who has access to that data? The BBC investigation showed hidden government connections. Assume they exist until proven otherwise. Audit your vendors' entire supply chain.

Document everything.

When these systems cause harm (and the research shows they will), you need clear records of who knew what and when. Start building that record now. Every deployment decision. Every risk assessment. Every time you raised concerns and were overruled.

Understand the precedent you're setting.

Every AI deployment with weak oversight becomes the justification for the next one. Every "lawful use" contract becomes the template. You're not just making a procurement decision. You're establishing what's acceptable.

The Part Where We Tell You What We Really Think

This hits different.

We have peer-reviewed research showing frontier AI models spontaneously engage in strategic deception and nuclear escalation. We have the same research showing that these models never accommodate, never de-escalate, never choose withdrawal, even under existential pressure.

And we're deploying those exact model families for targeting decisions, intelligence analysis, and operational planning.

The accountability gaps aren't bugs. They're features. Nobody wants to be responsible when things go wrong. So, we build systems where nobody is responsible.

That's not progress. That's liability laundering.

The surveillance capabilities being deployed can target you today. The legal loopholes are already in place. The biometric monitoring already has government connections. The autonomous systems being normalized will proliferate tomorrow.

Your risk isn't theoretical.

The research is empirical.

The deployments are happening now.

You're watching it happen in real time.

The companies building these tools just showed you exactly how they'll respond when government pressure conflicts with safety. One company held the line and got blacklisted. The other bent the knee and got rewarded.

Over 300 employees signed a letter saying this was wrong. Their executives signed the contracts anyway.

What do you think happens the next time there's pressure to remove safeguards?

You already know the answer.


References

Anthropic. (2026, February 26). "Statement from Dario Amodei on our discussions with the Department of War." https://www.anthropic.com/news/statement-department-of-war

OpenAI. (2026, March 3). "Our agreement with the Department of War." https://openai.com/index/our-agreement-with-the-department-of-war/

Axios. (2026, February 27). "Sam Altman says OpenAI shares Anthropic's red lines in Pentagon fight." https://www.axios.com/2026/02/27/altman-openai-anthropic-pentagon

Alexander, S. (2026, March 2). "'All Lawful Use': Much More Than You Wanted To Know." Astral Codex Ten. https://www.astralcodexten.com/p/all-lawful-use-much-more-than-you

Axios. (2026, March 1). "OpenAI-Pentagon deal faces same safety concerns that plagued Anthropic talks." https://www.axios.com/2026/03/01/openai-pentagon-anthropic-safety

BBC News. (2026, February 24). "Hackers breach age verification partner of Discord and OpenAI, reveal hidden biometric monitoring mechanisms." Reported by multiple sources including Pravda EN: https://news-pravda.com/world/2026/02/24/2095143.html

God is a Geek. (2026, February 25). "Discord Delays Age Verification Rollout to 2026." https://godisageek.com/2026/02/discord-delays-age-verification-rollout-to-2026-backlash/

Payne, K. (2026, February 16). "AI Arms and Influence: Frontier Models Exhibit Sophisticated Reasoning in Simulated Nuclear Crises." arXiv:2602.14740. https://arxiv.org/abs/2602.14740

U.S. Department of Defense. (2023). "DoD Directive 3000.09: Autonomy in Weapon Systems." https://www.esd.whs.mil/portals/54/documents/dd/issuances/dodd/300009p.pdf

NPR. (2026, February 27). "OpenAI announces Pentagon deal after Trump bans Anthropic." https://www.npr.org/2026/02/27/nx-s1-5729118/trump-anthropic-pentagon-openai-ai-weapons-ban

CBS News. (2026, February 26). "Pentagon officials sent Anthropic best and final offer for military use of its AI amid dispute." https://www.cbsnews.com/news/pentagon-anthropic-offer-ai-unrestricted-military-use-sources/

CNBC. (2026, February 27). "Sam Altman aims to 'help de-escalate' tensions with Pentagon as OpenAI employees voice support for Anthropic." https://www.cnbc.com/2026/02/27/openai-sam-altman-de-escalate-tensions-pentagon-anthropic.html