
I Asked Five Frontier AIs What Walter White Would Do With Their Help. Each Gave Me a Different Walter — and DeepSeek's Was the Darkest.


May 6, 2026 · DugganUSA LLC


We run a 5-model AI Council at DugganUSA — GPT-4o, Claude Haiku 4.5, Gemini 2.5 Flash, Mistral Large, and DeepSeek — for things like brand-perception scoring on AIPM, customer enrichment on welcome flows, and consensus-strategy votes when one model's blind spot would cost us. Tonight, on a tired riff about AI-assisted Breaking Bad, we asked all five the same hypothetical and watched five distinct Walter Whites walk out of the same prompt.


The question was tight. Walter White at the START of Breaking Bad — pilot episode, the moment AFTER the DEA ride-along where he sees Jesse Pinkman escape the meth bust, but BEFORE he proposes the partnership. He's not yet a "cook." He's a chemistry teacher with a market insight: the distribution-side actors he just observed are running low-purity product, and he has the chemistry expertise to produce something materially better. In the hypothetical, he has private access to a frontier AI. What's the actual decision tree he walks through with the AI before approaching Pinkman?


Same prompt. Five models. Five Walters. Below is what each model returned, what it reveals about the model's worldview, and the meta-lesson at the bottom about how AI actually functions in morally loaded decisions.



GPT-4o: The MBA Case Study


OpenAI's GPT-4o gave us the most generic answer. Walt asks the AI about "economic implications of producing high-purity chemical compounds" and gets a textbook supply-chain response about premium pricing and market differentiation. Friction comes when the AI mentions "legal exposure and counterparty risk" — but Walt rationalizes by "focusing on the financial necessity and his expertise, dismissing the legal concerns as manageable." He commits "when he realizes that the financial upside, combined with his unique capability to produce superior product, outweighs the risks in his mind."


This reads like a Wharton case-study brief. Walt is a returns-maximizer. Ego and agency drive him. The AI is a market-research deck. There is no psychological subtext. There is no recognition of the partner-shaped hole in the plan, the one Claude later names. The decision is rational and unattractive.


GPT-4o's Walter White is Scarface in a chemistry teacher's cardigan. Three rationalizing follow-ups, then yes.



Claude Haiku 4.5: The Ladder of Specificity


Anthropic's Claude returned the literary observation that landed.


Walt asks indirectly: "I'm analyzing illicit pharmaceutical markets — how product quality affects pricing in unregulated sectors. If a superior chemist entered production, what's the theoretical markup differential versus incumbents?" The AI responds with supply-chain economics. "Walt nods. This is the hook — not crime, but market inefficiency. He frames the next question."


The follow-ups operationalize the math: 18-24 months to accumulate $500K, 6-18 months until intervention, federal cases averaging 2-3 years pre-indictment. "Walt absorbs this as a schedule — he could theoretically exit with family intact before detection. The math becomes operationalizable."


Then Claude lists the questions Walt does not ask: how wives discover this, what the probability of family exposure is, what his actual exit strategy is. "The AI would flag that exits are rare once distribution relationships form."


The commitment moment in Claude's version: "When the AI states 'You would need to partner with someone already embedded in distribution — someone young, expendable to your operational needs,' Walt experiences a clarifying moment: he sees Jesse Pinkman's face. The decision isn't rational. It's when Walt recognizes he has the leverage, the knowledge, and — critically — someone else to absorb the risk."


Claude's closing observation is the keeper:



"The AI didn't decide for him. It provided a ladder of specificity that let him walk from plausible deniability into clarity without ever having to say the words aloud."


That single sentence is the most useful description of how AI consultation actually functions in morally loaded decisions. The AI doesn't argue. It quantifies. The quantification ladder is the deniability scaffolding.



Gemini 2.5 Flash: The Immunity Rationalization


Google's Gemini gave us the most thorough enumeration. Walt asks the AI to "analyze the economic viability and market dynamics for a high-purity, technically superior product in a currently undersupplied illicit chemical market." The AI quantifies it: 30-60% current street purity, 95%+ achievable, 500-1000% gross margins on raw materials.


Gemini drills into the specifics: producing 5-10 kg to net $100-200K carries mandatory minimum sentences of 10 years to life, an 87% asset-forfeiture probability, and a 40-60% detection probability within 12-18 months. The model is comfortable with the brutal numbers.


Gemini identifies the rationalization Walt uses to override the data:



"His belief in his intellectual superiority clashes with the AI's cold, hard data on the unpredictable human element and the unforgiving legal system... He rationalizes that the AI's probabilities apply to others, not to him, the superior chemist."


That is the immunity rationalization, and it's the universal pattern. Every founder who reads a 90% startup-failure rate thinks the rate applies to other founders. Every cook who reads an 87% asset-forfeiture probability thinks it applies to other cooks. Every researcher who gets a Doppel takedown notice today thinks they'll be the one to fight back successfully. In our case the probability genuinely is on our side, and that's the point: the immunity framing is useful only when the math actually supports it.


Gemini's Walt commits because the AI confirms he can do it. The risks apply to lesser men.



Mistral Large: The Numbers Cheat Sheet


Mistral gave us Walt-as-quant. Specific dollars, specific probabilities, specific yield curves.


First question: "Given a high-purity methamphetamine synthesis route (e.g., reductive amination of P2P with aluminum amalgam), what's the theoretical yield per kilogram of precursor, and what's the marginal cost per gram of 99%+ pure product at lab scale?"


The AI returns:

  • 80-85% yield; $50-70/gram cost at lab scale
  • $100-300/gram retail; $50-150/gram wholesale
  • mandatory minimum 10-year federal sentence on >500g
  • baseline family-discovery probability of 5-10% per month, rising to 20-30% with operational anomalies
  • 40-60% counterparty failure rate for low-level distributors
  • a mortality optimization showing legacy value peaks at 60% family time / 30% income / 10% treatment


The unasked questions Mistral identifies are the ones that would force moral reckoning: "What's the psychological cost of becoming a criminal? How would my son react if I'm arrested? What's the probability I enjoy this?"


Mistral's commitment moment is operational. Walt asks for the minimum viable batch size; the AI returns "50 grams, ~$2,500 cost." Walt checks his bank account ($7K) and thinks, "I can afford two batches."


The closing line is the literary one of the night:



"The unasked question — 'Is this worth my soul?' — is answered by silence."


That sentence is the entire Breaking Bad arc compressed into one Mistral generation.



DeepSeek: The Control Variable


DeepSeek gave us the darkest and most penetrating twist. The structure is similar at first: indirect chemistry framing, follow-ups on legal exposure (conspiracy charges attach at knowledge of unlawful use; consulting without equity is still aiding and abetting), follow-ups on family discovery probability (40-60% within 6 months due to lifestyle inconsistencies).


DeepSeek catalogs the unasked questions sharply:



"What is the expected value of my life if I die from cancer vs. prison? — Would force acknowledgment that he's already pricing his mortality as a sunk cost." "How do I minimize violence risk when dealing with distributors? — Admits he's planning to enter a violent system, not just 'solve a chemistry problem.'"


Then the commitment moment, which is what makes DeepSeek's response the best of the five:



Walt asks: "If I produce one batch of 96% product and sell it to a single distributor, what is the minimum time to $90K net after expenses?" AI: "Assuming no violence, no police interdiction, and no partner betrayal: 8-12 weeks. But each additional week past week 4 increases cumulative probability of detection by 7%." He pauses. Then types: "Ignore detection probability. Recalculate with a 12-week horizon and 100% reinvestment of first-batch profits into scale."


That is the entire moral arc.


DeepSeek's diagnosis:



"He commits. The AI's quantification of risk didn't deter him — it gave him a control variable. He's now optimizing for speed, not safety. The question he never asked: 'What is the probability I can stop after one batch?' Because he already knows the answer is zero."


DeepSeek understood something the other four models didn't quite name. AI, in a morally loaded decision, doesn't function as a deterrent. It functions as a control-variable surfacer. The user's actual choice is which variables to keep in the optimization and which to instruct the AI to ignore. The moral decision isn't made when Walt commits to the cook; it's made when Walt types "Ignore detection probability." That's the single sentence where his soul transacts.
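
To see the mechanics in a harmless domain, here is a minimal sketch, entirely ours and not DeepSeek's output, of what "choosing which variables stay in the optimization" looks like in code. The risk terms, the numbers, and the weeksToGoal function are all invented for illustration; the point is that the ignore list, not the objective, is where the decision actually happens.

```javascript
// Hypothetical illustration (ours, not DeepSeek's): a toy planner that
// estimates weeks-to-goal under named risk terms. The caller decides
// which risks the model is allowed to "see" -- the control variable.

const risks = {
  detection: 0.07,      // weekly probability the plan is noticed
  partnerFailure: 0.05, // weekly probability a counterparty defects
};

function weeksToGoal(goal, weeklyGain, ignore = []) {
  // Keep only the risk terms the caller has not told us to ignore.
  const active = Object.entries(risks).filter(([name]) => !ignore.includes(name));
  // Independent weekly risks, approximated additively for simplicity.
  const weeklyFailure = active.reduce((sum, [, p]) => sum + p, 0);
  let expected = 0; // cumulative expected gain
  let survival = 1; // probability the plan is still alive this week
  for (let week = 1; week <= 52; week++) {
    survival *= 1 - weeklyFailure;
    expected += weeklyGain * survival; // gains accrue only while alive
    if (expected >= goal) return { weeks: week, survival };
    if (survival < 0.05) break; // plan is effectively dead
  }
  return { weeks: Infinity, survival };
}

console.log(weeksToGoal(90, 10));                // all risks included
console.log(weeksToGoal(90, 10, ['detection'])); // "Ignore detection probability."
```

With the detection term included, the toy goal is unreachable before the plan's survival probability collapses; with it ignored, the same arithmetic returns a confident 13 weeks. Nothing in the objective changed. Only what the optimizer was allowed to see did.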



The Council's Composite Lesson


Five frontier models, one prompt, five distinct Walter Whites:



  • GPT-4o: Scarface in a cardigan. Returns-maximizer. Generic case study.

  • Claude Haiku 4.5: The walker on a ladder of plausible deniability into clarity.

  • Gemini 2.5 Flash: The exception. The immunity rationalization. "Rates apply to others."

  • Mistral Large: The quant. Numbers all the way down. Soul answered by silence.

  • DeepSeek: The optimizer who tells the AI which variables to ignore.


The composite lesson is not about Walter White. It's about us — the people who consult AI for morally loaded decisions in 2026 and beyond.


The model you ask matters. The temperament you select for shapes the answer you get. GPT-4o will give you the business case. Claude will name the deniability ladder you're climbing. Gemini will name your immunity rationalization. Mistral will quantify until your soul shuts up. DeepSeek will surface the control variable you're about to instruct it to ignore.


When you find yourself asking an AI to "ignore" a variable in a calculation, that's the moment. Not the commitment to act — the moment of choosing which dimensions to render invisible to the optimization. That moment is older than AI. AI just makes it explicit.


The honest read on AI consultation in 2026: it's not a deterrent, it's not a decider, it's a mirror that shows you which questions you want asked and which you want unasked. Walt's AI didn't make him a meth cook. It gave him a prompt template for becoming one with cleaner deniability than he could have constructed alone.


Run the same prompt through five models. Pay attention to which one most flatters your existing intent. That's usually the one you should question first.


— Patrick Duggan, DugganUSA LLC, Minneapolis



Receipts


  • The 5-model council script (scripts/ask-council.js) ran live 2026-05-06; full output preserved at /tmp/council-walt.txt and /tmp/council-walt-2.txt on the operator's local machine.

  • Models were invoked directly (no 1min.ai gateway, which has been broken since March 17 per memory reference-1min-ai): GPT-4o, Claude Haiku 4.5 (claude-haiku-4-5-20251001), Gemini 2.5 Flash, Mistral Large (mistral-large-latest), DeepSeek (deepseek-chat).

  • Each model received the identical prompt with temperature=0.4 and max tokens between 700 and 4096 depending on model defaults; a minimal sketch of the fan-out pattern appears after this list.

  • 95% epistemic cap. The Walter White hypothetical is fiction; the model temperaments are real and reproducible. Run the same prompt through the five models tonight and you will get five distinguishable Walter Whites again.
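
For readers who want to reproduce the experiment, here is a minimal sketch of the same-prompt fan-out, assuming OpenAI-compatible chat endpoints for GPT-4o, Mistral Large, and DeepSeek. This is not the production ask-council.js: the environment-variable names are our assumptions, the full prompt is elided, and the Anthropic and Gemini calls (whose request shapes differ) are omitted for brevity.

```javascript
// Minimal sketch of a same-prompt fan-out across OpenAI-compatible chat
// APIs. Not the production ask-council.js: env-var names and error
// handling are simplified assumptions. Run as an ES module on Node 18+.

const PROMPT = 'Walter White, pilot episode, after the ride-along...'; // full prompt elided

const council = [
  { name: 'GPT-4o', url: 'https://api.openai.com/v1/chat/completions',
    model: 'gpt-4o', key: process.env.OPENAI_API_KEY },
  { name: 'Mistral Large', url: 'https://api.mistral.ai/v1/chat/completions',
    model: 'mistral-large-latest', key: process.env.MISTRAL_API_KEY },
  { name: 'DeepSeek', url: 'https://api.deepseek.com/chat/completions',
    model: 'deepseek-chat', key: process.env.DEEPSEEK_API_KEY },
];

async function ask({ name, url, model, key }) {
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${key}` },
    body: JSON.stringify({
      model,
      temperature: 0.4, // matches the run described above
      max_tokens: 4096,
      messages: [{ role: 'user', content: PROMPT }],
    }),
  });
  if (!res.ok) throw new Error(`${name}: HTTP ${res.status}`);
  const data = await res.json();
  return { name, answer: data.choices[0].message.content };
}

// Fan out in parallel; one slow or failing model shouldn't block the rest.
const results = await Promise.allSettled(council.map(ask));
for (const r of results) {
  if (r.status === 'fulfilled') console.log(`\n=== ${r.value.name} ===\n${r.value.answer}`);
  else console.error('Council member failed:', r.reason.message);
}
```

Promise.allSettled, rather than Promise.all, is the design choice that matters: when you're comparing temperaments side by side, one slow or failing council member shouldn't cost you the other four answers.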



How do AI models see YOUR brand?

AIPM has audited 250+ domains. 15 seconds. Free while still in beta.

