
Constitutional AI Meets Constitutional Security: An Open Letter to Anthropic

  • Writer: Patrick Duggan
  • Nov 6, 2025
  • 5 min read


Dear Dario, Daniela, and the Anthropic Team,


MINNEAPOLIS, November 6, 2025 — You built Constitutional AI to align language models with human values. You raised $13 billion (Series F, September 2025) at a $183 billion valuation. The U.S. AI Safety Institute partnered with you for safety research.


We built the same thing for cybersecurity.


The Parallel You Might Not See


Constitutional AI (your invention):

  • Define a "constitution" describing desired AI behavior

  • AI evaluates its own outputs against the constitution

  • Self-correction loop improves alignment over time

  • Goal: Helpful, harmless, honest AI


Constitutional Security (our invention):

  • Define 6 dimensions describing desired security behavior

  • System evaluates itself against compliance metrics

  • Automated correction loop improves security over time

  • Goal: Verifiable, evidence-based, transparent security


You valued transparency at $183B. We valued it at $0 marginal cost - and proved it with 99.5% public files.


Democratic Sharing Law (Dimension 6)


Judge Dredd 6D Framework:

1. D1: Commit Compliance (95%) - Git history integrity

2. D2: Corpus Alignment (95%) - Documentation quality

3. D3: Production Evidence (91%) - VirusTotal scans, SBOM

4. D4: Temporal Decay (95%) - Time-based risk scoring

5. D5: Financial Efficiency (95%) - P.F. Chang's Avoided Cost ($65K saved)

6. D6: Democratic Sharing (78%) - THE CONSTITUTIONAL DIMENSION


Dimension 6 metrics (ethics as code):

  • **Hoarding:** 95/95 (99.5% public - 4,780 files tracked, 1,011 excluded)

  • **Transparency:** 95/95 (15 incident files, 149 GitHub issues open)

  • **Gratitude:** 9/95 (33 instances - algorithm needs tuning)

  • **Accessibility:** 95/95 (99.9% open formats)

  • **Trust Arbitrage:** 95/95 (7.1x evidence:claims ratio)

  • **Armor Polishing:** 80/95 (119/149 incidents fixed publicly)


Overall D6 score: 78/95
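
The roll-up math is not spelled out above, but under the assumption that each score is a simple mean of its parts, the published numbers reproduce exactly. A hedged sketch (our assumption, not the actual Judge Dredd code):

```javascript
// Dimension 6 sub-metric scores (each out of 95), as listed above.
const d6Metrics = {
  hoarding: 95,
  transparency: 95,
  gratitude: 9, // flagged above: algorithm needs tuning
  accessibility: 95,
  trustArbitrage: 95,
  armorPolishing: 80,
};

// Assumed roll-up rule: simple mean of the sub-metric scores, floored.
function rollUp(scores) {
  const values = Object.values(scores);
  return Math.floor(values.reduce((sum, s) => sum + s, 0) / values.length);
}

const d6 = rollUp(d6Metrics); // (95+95+9+95+95+80)/6 = 78.2 → 78

// The six dimension scores from the framework list, with D6 plugged in.
const dimensions = { d1: 95, d2: 95, d3: 91, d4: 95, d5: 95, d6 };
const overall = Math.round(
  Object.values(dimensions).reduce((sum, s) => sum + s, 0) / 6
); // 549/6 = 91.5 → 92% overall compliance

console.log(`D6: ${d6}/95, overall: ${overall}%`);
```

Simple averaging recovers both the 78/95 D6 score and the 92% overall compliance figure, which is why the metrics are auditable: anyone can redo the arithmetic.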


Philosophy: "The Aristocrats Standard" - Admit mistakes, show receipts, thank those wronged, fix publicly


The AWS Brand Weaponization Disclosure (Nov 4, 2025)


What we discovered:

  • IP 216.73.216.112 claimed "Anthropic, PBC" in its ISP label

  • WHOIS revealed: Amazon.com, Inc. (actual owner)

  • AWS weaponizing YOUR brand for trust bypass


What we did:

  • Sent disclosure to [email protected]

  • Thanked you for Constitutional AI (we use Claude Code)

  • Published evidence publicly (Democratic Sharing Law)

  • Implemented Pattern #32: Polish vs Dent Partnership Framework


AWS: $19B security investment → DENTS (weaponizes brands)

Google: $52B security investment → POLISHES (respects crawling)

Microsoft: Mixed (legitimate + abused subnets)


Your response: (We're still waiting - but we didn't wait for permission to protect you)


Constitutional AI Principles → Constitutional Security


Principle 1: Transparency (You Invented, We Implemented)


Your approach: Publish Constitutional AI research, open-source methodologies, partner with AI Safety Institute


Our approach:

  • 99.5% public files (4,780 tracked)

  • All blog posts cite sources with evidence

  • Git commits are public record

  • Judge Dredd compliance scores automated and verifiable

  • `/compliance/evidence/` directory = full audit trail


Marginal cost of transparency: $0 (digital goods, infinite replication)


Trust multiplier: 7.1x evidence:claims ratio (every claim backed by 7 pieces of evidence)


Principle 2: Alignment (You Studied, We Deployed)


Your focus: Align AI with human values through constitutional feedback


Our focus: Align security operations with evidence-based ethics


Example: Democratic Sharing Law

  • NOT a marketing claim ("we're transparent")

  • MEASURED metric (99.5% public files is verifiable)

  • AUTOMATED audit (`node scripts/democratic-sharing-audit.js`)

  • PUBLIC evidence (`compliance/evidence/democratic-sharing/audit-YYYYMMDD.json`)


The difference: You research AI alignment. We deploy security alignment.


Principle 3: Safety (You Prevent Harm, We Document It)


Your mission: Prevent AI systems from causing harm


Our mission: Prevent security breaches while documenting every decision


Recent threat analysis (Nov 6, 2025):

  • 427 IPs analyzed across 6 dimensions

  • TECHOFF SRV LIMITED: 17 IPs, 22,830 abuse reports (bulletproof hosting)

  • Microsoft subnet abuse: 40.77.167.x range weaponization

  • No AI adversaries detected (yet - but surveillance mode ready)


Safety mechanism: 24-hour surveillance for 80-95 abuse score IPs before auto-blocking


Constitutional principle: Don't "block first, ask questions later." Watch, gather evidence, THEN block.
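
That watch-then-block policy can be sketched as a small decision function. The 80-95 band and 24-hour window come from the text above; the state fields and action names are illustrative, not the real `lib/surveillance-manager.js`:

```javascript
// Illustrative watch-then-block policy: IPs scoring in the 80-95 abuse band
// get 24 hours of surveillance before any automated block decision.
const SURVEIL_MIN = 80;
const SURVEIL_MAX = 95;
const WATCH_WINDOW_MS = 24 * 60 * 60 * 1000;

function decide(ip, now = Date.now()) {
  if (ip.abuseScore > SURVEIL_MAX) return 'block'; // overwhelming evidence
  if (ip.abuseScore < SURVEIL_MIN) return 'allow'; // below surveillance band
  if (ip.watchStartedAt === undefined) return 'start-watch'; // enter 24h watch
  const watchedLongEnough = now - ip.watchStartedAt >= WATCH_WINDOW_MS;
  if (!watchedLongEnough) return 'watch'; // keep gathering evidence
  return ip.confirmedMalicious ? 'block' : 'allow'; // evidence-based verdict
}

console.log(decide({ abuseScore: 88 })); // borderline IP → surveillance first
```

The constitutional property is that a borderline IP can never be blocked without 24 hours of recorded evidence behind the decision.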


Principle 4: Verifiability (You Benchmark, We Open-Source)


Your approach: Partner with NIST AI Safety Institute for third-party verification


Our approach:

  • Open git repository (all code visible)

  • Public compliance evidence

  • Reproducible Judge Dredd runs (`node scripts/judge-dredd-agent/cli.js 6d`)

  • Blog posts cite exact file paths and line numbers


Verification method: Don't trust us - verify the evidence yourself


The Partnership Anthropic Needs


Why you should care:

1. Brand protection: AWS weaponized your name (we caught it, disclosed it)

2. Constitutional alignment: You built it for AI, we built it for security

3. Transparency validation: You value it at $183B, we prove it works at $0 marginal cost

4. Safety research: AI adversaries don't exist yet (we're monitoring), but when they emerge, you'll want our detection framework


Partnership model:


Option A: Strategic Investment

  • **Valuation:** Seed ($5.7M median) or Series A ($45M with $2M ARR)

  • **What you get:**

  • Constitutional Security framework (open-source methodology, license brand)

  • AI adversary detection (we're the early warning system for AI threats)

  • Evidence-based security (validates your Constitutional AI principles work)

  • Brand protection (we monitor for AWS-style weaponization)


Option B: Research Collaboration

  • **Focus:** AI Safety + Cybersecurity intersection

  • **Joint research:** "Detecting AI-powered adversaries in threat intelligence"

  • **Our contribution:** 180+ days production data, 6D analysis framework

  • **Your contribution:** AI safety expertise, Constitutional AI methodology

  • **Outcome:** Published research, open dataset, AI threat detection benchmarks


Option C: Acquisition (Long Shot, But Hear Us Out)

  • **Why:** You're building safe AI. We're building safe security. Same constitutional principles.

  • **Synergy:** Anthropic Security Division (we provide threat intel, you provide AI alignment)

  • **Market:** Enterprise AI deployments need constitutional security (aligned AI + aligned security)

  • **Valuation:** $45M (Series A standard with production evidence)


The AI Adversary Timeline


Current state (Nov 6, 2025):

  • **No AI adversaries detected** in 427 IP analysis

  • All attacks use static evasion techniques (pre-configured proxy lists, not adaptive learning)

  • Professional pacing detected (5-6 req/hour), but NOT real-time adaptation


When AI adversaries emerge:

  • Attack patterns will adapt IN REAL-TIME to our defenses

  • Surveillance mode (24-hour watch) will detect adaptation lag

  • 6D framework will flag anomalous behavior changes

  • We'll be the canary in the coal mine
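
One concrete way to detect that adaptation lag: compare an attacker's request pacing before and after a defensive change. A static adversary keeps its pre-configured pacing; an adaptive one shifts quickly. A hedged sketch of the idea (function names, the mean-interval statistic, and the threshold are our illustration, not a deployed detector):

```javascript
// Mean gap between consecutive request timestamps (in seconds).
function meanInterval(timestamps) {
  let total = 0;
  for (let i = 1; i < timestamps.length; i++) {
    total += timestamps[i] - timestamps[i - 1];
  }
  return total / (timestamps.length - 1);
}

// Compare pacing before and after a defense deployed at `defenseAt`.
// A large relative shift suggests real-time adaptation rather than a
// pre-configured proxy list running on a fixed schedule.
function looksAdaptive(requestTimes, defenseAt, shiftThreshold = 0.5) {
  const before = requestTimes.filter((t) => t < defenseAt);
  const after = requestTimes.filter((t) => t >= defenseAt);
  if (before.length < 2 || after.length < 2) return false; // not enough evidence
  const b = meanInterval(before);
  const a = meanInterval(after);
  return Math.abs(a - b) / b > shiftThreshold;
}
```

The 5-6 requests/hour pacing observed today would hold steady across defensive changes; the day it starts reacting to them is the day the canary sings.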


Why you need us: When AI-powered attacks start, someone needs to be watching. That's us.


The Question Anthropic Should Ask


"How did two people in Minnesota implement Constitutional AI principles in cybersecurity while we were still publishing research papers?"


Answer: You invented the framework. We deployed it.


The brutal follow-up: "Why is a pre-revenue startup in Minnesota better at operationalizing our own Constitutional AI principles than we are?"



Evidence Appendix


  • **Democratic Sharing Law:** Dimension 6 - 99.5% public (4,780 files), 7.1x evidence:claims

  • **Judge Dredd 6D:** 92% compliance - `node scripts/judge-dredd-agent/cli.js 6d`

  • **AWS Disclosure:** Nov 4, 2025 email to [email protected] (216.73.216.112 brand weaponization)

  • **Threat Analysis:** 427 IPs, no AI adversaries yet - `blog-posts/multi-dimensional-threat-analysis-nov-2025.md`

  • **Surveillance Mode:** 24-hour watch for 80-95 abuse scores - `lib/surveillance-manager.js`

  • **Constitutional Evidence:** `/compliance/evidence/democratic-sharing/` (automated audits)

  • **Public Repository:** All code, all evidence, all claims verifiable


Constitutional Comparison:

  • **Anthropic:** Constitutional AI - align language models with human values ($183B valuation)

  • **DugganUSA:** Constitutional Security - align security operations with evidence-based ethics ($0 marginal cost)

  • **Shared principle:** Transparency, alignment, safety, verifiability


*We didn't copy Constitutional AI. We applied it to a different problem. The fact that it works proves your framework is domain-agnostic. Partner with us and prove it scales.*

