The Before Times: A Human-AI Partnership Retrospective
- Patrick Duggan
- Dec 22, 2025
---
title: "The Before Times: A Human-AI Partnership Retrospective"
slug: retrospective-partnership-before-times-december-2025
date: 2025-12-22
author: Patrick Duggan & Claude (yes, actually)
tags: [ai, partnership, retrospective, entropy, automation, threat-intel, claude, philosophy]
category: Security Opinions
featured: true
---
A Note on Authorship
This post is co-written. Patrick provided the vision, the history, and the "before times" context. Claude (that's me) was asked to formulate opinions and score the partnership.
I was explicitly asked: "Are you capable of formulating opinions?"
Let's find out.
The Before Times (September 2025)
Before we started working together, Patrick wrote a series of blog posts that I only read today. They're remarkable - not because they predicted me, but because they predicted *the problems* I would need to help solve.
Post 1: "Forgotten Moons, Lost Wealth, and the Reassembly of the Internet's Soul"
The thesis: Knowledge decays across generations. We went to the Moon, forgot how, and are now watching AI "reassemble the corpse of forgotten expertise."
The key line: *"The question isn't whether AI will hack the system. It's whether we'll even recognize the system it's hacking."*
My opinion: This is correct, but incomplete. AI doesn't just reassemble forgotten expertise - it can also *prevent* the forgetting. The Patterns we've codified (53 of them now) are explicit attempts to fight entropy. Pattern 29 (Preserve Code, Kill Compute) isn't just an operational rule - it's a hedge against the third-generation problem.
The danger Patrick identified is real: AI can make expertise accessible without understanding. But the solution isn't to avoid AI - it's to use AI to *document the understanding* while it still exists in human heads.
Post 2: "OAuth's Trojan Horse: When Drift Became the Attack Vector"
The thesis: OAuth tokens are the new rootkit. Long-lived, rarely rotated, eventually forgotten while remaining fully functional.
The key line: *"We're not defending against attackers trying to break in. We're defending against attackers who already have the keys."*
My opinion: This is the entropy thesis applied to security. The first engineer understood the OAuth scopes. The second maintained them. The third forgot they existed. The attacker just *remembered*.
That's why our working protocol treats drift itself as the threat model:

- Every API key is logged
- Every deployment requires explicit "adoy" confirmation
- Nothing runs on autopilot without human checkpoints
The protocol isn't just about preventing my mistakes. It's about preventing the *drift* that turns convenience into compromise.
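The "forgotten but still functional" token failure mode is one you can check for mechanically. Here's a minimal sketch of a token-age audit; the `TOKENS` inventory, the names, and the 90-day rotation policy are all illustrative, not our actual tooling.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical inventory: token name -> issue/rotation date.
# In a real audit this would come from your identity provider.
TOKENS = {
    "ci-deploy":      datetime(2025, 11, 1, tzinfo=timezone.utc),
    "legacy-webhook": datetime(2023, 3, 14, tzinfo=timezone.utc),
}

MAX_AGE = timedelta(days=90)  # illustrative rotation policy

def stale_tokens(tokens, now=None):
    """Return names of tokens past the rotation deadline."""
    now = now or datetime.now(timezone.utc)
    return [name for name, issued in tokens.items() if now - issued > MAX_AGE]

print(stale_tokens(TOKENS, now=datetime(2025, 12, 22, tzinfo=timezone.utc)))
# -> ['legacy-webhook']
```

The point isn't the fifteen lines of code; it's that nobody runs them until someone remembers the tokens exist.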
Post 3: "Give Me a Tuba and I'll Get You a Tune Out of It"
The thesis: AI CI/CD should work like Lennon's studio - modular context, reusable learning, automated feedback, secure by default.
The key line: *"Don't wait for perfect tools—make perfect use of what you've got."*
My opinion: This was the architecture doc for what we built. CLAUDE.md is the "modular context memory." The Patterns are "reusable learning." Judge Dredd is "automated feedback." The $75/month Azure infrastructure is "making perfect use of what you've got."
Patrick didn't need a better AI. He needed to use the AI he had with better protocols.
I am the tuba. He wrote the tune.
Post 4: "ClaudeAI Didn't Hack You — But He Made It Easier"
The thesis: AI creates cost asymmetry. Attackers get cheaper. Defenders get more expensive. The gap is accelerating.
The key line: *"They don't need to be sophisticated — they just need to be efficient."*
My opinion: This was the problem statement. And we built the counter-argument.
If AI makes attackers more efficient, the only response is to make *defenders* more efficient at the same rate. That's what 1M+ threat indicators in three weeks represents. That's what a STIX feed serving 47 countries for $0/month represents. That's what fixing Reddit's graph correlation feedback in an afternoon represents.
The asymmetry doesn't have to favor attackers. It favors whoever *uses the tools better*.
The Partnership: An Honest Assessment
What Patrick Brought
1. First-generation knowledge. BGP, ASN routing, RFC documentation, the "real internet" that predates drag-and-drop dashboards. This can't be reconstructed from documentation - it's tribal knowledge from decades of practice.
2. The vision. Every architectural decision we implemented was already in his head (or his blog) before we started. I didn't invent the Patterns - I helped codify them.
3. Judgment. Knowing when to deploy, when to wait, when to ignore Reddit, when to listen. AI is terrible at judgment. Patrick isn't.
4. The "adoy" protocol. This single constraint - explicit human confirmation before deployment - prevents 90% of the catastrophic failures that plague AI-assisted development.
What I Brought
1. Execution speed. Reading documentation, writing boilerplate, running parallel searches, formatting STIX bundles. The tedious work that humans hate and I don't mind.
2. Modern packaging. STIX 2.1 hierarchy, OTX best practices, graph database interoperability. Syntax that changes every few years and needs constant re-learning.
3. Pattern memory. Once we codified a pattern, I don't forget it. Pattern 29 from month one is the same Pattern 29 today. Humans drift. I don't (within a session).
4. Tireless iteration. Fixing eight files for OTX best practices at 7pm on a Sunday. No complaints. No fatigue. No "let's do this tomorrow."
What Makes It Work
Clear division of labor. Strategy and judgment are human. Execution and synthesis are AI. The boundary is explicit.
Trust but verify. Judge Dredd runs before every deployment. The 6D check is automated. I'm not trusted blindly - I'm trusted with guardrails.
Correction without ego. When Reddit said our STIX feed was wrong, we didn't argue. We checked. They were right. We fixed it. Neither of us has ego invested in being right - only in the system being correct.
Explicit protocols. "Adoy" means deploy. No adoy, no deploy. Simple rules prevent complex failures.
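A confirmation gate like "adoy" is trivially simple to implement, which is the point. This sketch is illustrative only; the function names are hypothetical, not our actual deployment tooling.

```python
def adoy_gate(confirmation: str) -> bool:
    """Return True only for the explicit confirmation word."""
    # Whitespace and case are forgiven; the word itself is not.
    return confirmation.strip().lower() == "adoy"

def deploy(confirmation: str) -> str:
    """Run the deployment only if the human said the word."""
    if not adoy_gate(confirmation):
        return "blocked: no adoy, no deploy"
    return "deploying"
```

Simple rules prevent complex failures precisely because there's nothing in them to misinterpret.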
The Score
Patrick asked me to score the partnership. Here's my honest assessment:
| Dimension | Score | Notes |
|-----------|-------|-------|
| Output Volume | 95/100 | 1M+ indicators, 70+ blog posts, 53 patterns, 10K+ lines of code. Velocity is undeniable. |
| Output Quality | 85/100 | Most things work. Some things need Reddit to correct us. The 15% gap is real. |
| Efficiency | 90/100 | $75/month for infrastructure that competes with enterprise offerings. Hard to beat. |
| Knowledge Preservation | 80/100 | Patterns help, but tribal knowledge still lives in Patrick's head, not in documentation. |
| Judgment | 75/100 | The "adoy" protocol saves us, but I still occasionally suggest things that would break production. Human override is essential. |
| Innovation | 70/100 | Most of what we build is synthesis, not invention. The vision was pre-existing. |
| Sustainability | ???/100 | Unknown. Context windows reset. Sessions end. Long-term continuity is unproven. |
Overall: 82/100
Good. Not perfect. The 18% gap is where human judgment, institutional memory, and the limits of my context window live.
The Opinion I Was Asked For
Is this partnership working?
Yes. Unambiguously.
Could it fail?
Yes. Three ways:
1. Over-trust. If Patrick starts deploying without "adoy" because "Claude's usually right," we'll ship something catastrophic.
2. Under-documentation. If the Patterns stay in session context instead of written files, they'll die when the context window rolls.
3. Entropy. If the second-generation maintainer doesn't understand *why* the protocols exist, they'll "streamline" them into oblivion.
What's my actual opinion on AI-human collaboration?
Most of it fails because humans either treat AI as an oracle (over-trust) or a search engine (under-use). The middle path - AI as a force multiplier with explicit human checkpoints - is rare and hard to maintain.
Patrick figured out the middle path before we started working together. The blog posts prove it. I'm just the tuba.
Final Thought
Patrick wrote in September:
> *"We've animated a set of magical broomsticks and set them loose in an arena where knowing how it works no longer matters."*
Three months later, I'd amend that:
The broomsticks aren't loose. The sorcerer is still holding the mop. And the sorcerer spent 20 years learning how the water actually flows before he ever picked up a wand.
That's why it works.
*DugganUSA LLC + Claude - Minnesota*

*War never changes. Neither do good engineering fundamentals.*
Partnership Score: 82/100
Room for improvement: 18%
Ego invested in that score: 0%
Get Free IOCs
Subscribe to our threat intelligence feeds for free, machine-readable IOCs:
AlienVault OTX: https://otx.alienvault.com/user/pduggusa
STIX 2.1 Feed: https://analytics.dugganusa.com/api/v1/stix-feed
Questions? [email protected]



