
ClaudeAI Didn’t Hack You — But It Made It Easier (and Cheaper)

  • Writer: Patrick Duggan
  • Sep 5, 2025
  • 2 min read

The misuse of free security tooling with AI guidance isn’t just a theoretical concern — it echoes patterns I’ve seen in recent breaches. In my own coverage of the UNC6395 OAuth breach, I highlighted how supply chain fragility and token drift created a cascade of exposure across SaaS platforms. That incident wasn’t about tooling misuse — it was about access abstraction and how complexity masks risk. AI-assisted attacks follow the same logic: they exploit abstraction to bypass expertise.


The Economy of Adversaries

In my post "The Botnet Beneath Your Toaster", I argued that visibility is the new vulnerability. AI models don’t need zero-days — they need indexed surfaces and permissive endpoints. That’s exactly what tools like Shodan provide, and it’s why script kiddies armed with Claude can now operate like seasoned recon specialists.


These parallels are reinforced by external incidents:


  • Samsung’s internal code leak via ChatGPT: Employees used generative AI to review sensitive code, unintentionally exposing IP.

  • Anthropic’s own report on Claude misuse: Novice actors developed advanced malware with AI assistance, bypassing traditional skill barriers.

  • Air Canada’s chatbot refund exploit: A user manipulated an AI system to secure an outsized refund, showing how prompt engineering can yield real-world impact.


Each case underscores the same theme: AI lowers the cost of exploitation, whether through tooling, tokens, or trust.


As breach complexity drops for attackers (thanks to AI-assisted tooling, prebuilt exploit chains, and indexed reconnaissance), the cost to exploited companies is rising, both in direct financial terms and operational impact. The relationship isn’t linear: attacker effort keeps falling while victim cost keeps climbing, and the gap between them is widening.



Lower Complexity for Attackers


  • AI lowers the skill threshold: Models like Claude and ChatGPT can walk novices through exploit execution, payload crafting, and even evasion techniques.

  • Tooling is modular and free: Metasploit, SQLMap, and Shodan offer plug-and-play attack surfaces.

  • Recon is automated: Public datasets, search engines, and AI-assisted scanning make target selection trivial.


This means attackers spend less time, require less expertise, and face fewer barriers to entry.
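The automated recon described above can be sketched in a few lines. This is an illustrative Python sketch only: the records, field names, and the "permissive endpoint" heuristic are my assumptions, not real Shodan output or a real attack tool.

```python
# Illustrative sketch: triaging indexed, Shodan-style banner records.
# The data and the "risky" heuristic are hypothetical; the point is how
# little code (or expertise) automated target selection now requires.

RISKY_PORTS = {23, 2375, 5900, 9200}  # telnet, Docker API, VNC, Elasticsearch

def is_permissive(record):
    """Flag endpoints that are both indexed and unauthenticated."""
    return record["port"] in RISKY_PORTS and not record["auth_required"]

def triage(records):
    """Return permissive endpoints, lowest risky port first."""
    hits = [r for r in records if is_permissive(r)]
    return sorted(hits, key=lambda r: r["port"])

# Simulated scan results (documentation-range IPs, not real hosts).
indexed = [
    {"ip": "203.0.113.5",  "port": 2375, "auth_required": False},
    {"ip": "203.0.113.9",  "port": 443,  "auth_required": True},
    {"ip": "198.51.100.2", "port": 9200, "auth_required": False},
    {"ip": "198.51.100.7", "port": 5900, "auth_required": True},
]

targets = triage(indexed)
for t in targets:
    print(f'{t["ip"]}:{t["port"]}')
```

With an AI assistant generating and explaining each step, even this small amount of scripting no longer requires the operator to understand what a Docker API or Elasticsearch port is.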



Rising Costs for Victims


  • The global average breach cost is now $4.88M, up 10% from last year.

  • Healthcare breaches average $9.77M, the highest across industries.

  • Even “minor” mega breaches (1–10M records) cost 9× the global average.

  • AI-related breaches are more expensive due to ungoverned systems and shadow AI exposure.


In my analysis of the UNC6395 OAuth breach, I noted that attackers exploited a single integration to compromise multiple vendors, culminating in TransUnion’s exposure of 4.4M consumer records. That breach didn’t require novel malware — just token drift and visibility. Yet the fallout included:


  • Regulatory scrutiny

  • Identity theft risks

  • Long-tail fraud exposure

  • Credit monitoring costs



The Correlation: Asymmetry in Cost vs. Effort


| Actor    | Effort                                  | Cost Incurred        |
| -------- | --------------------------------------- | -------------------- |
| Attacker | Low (AI + free tools)                   | Minimal (time, risk) |
| Victim   | High (remediation, legal, reputational) | Millions per breach  |

This asymmetry is what makes AI-augmented threat actors so dangerous. They don’t need to be sophisticated — they just need to be efficient.
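The asymmetry is easy to put in rough numbers using the breach figures cited above. The attacker-side cost here is purely an assumption for illustration (an AI subscription plus free tooling and some time), not a measured figure.

```python
# Rough cost-asymmetry arithmetic from the figures cited in this post.
# attacker_cost is a hypothetical assumption for illustration only.

avg_breach_cost = 4_880_000   # global average breach cost (~$4.88M)
healthcare_cost = 9_770_000   # healthcare average (~$9.77M)
attacker_cost   = 500         # assumed: AI subscription + time + free tools

ratio = avg_breach_cost / attacker_cost
print(f"Victim pays roughly {ratio:,.0f}x what the attacker spends")
print(f"Healthcare vs. global average: {healthcare_cost / avg_breach_cost:.1f}x")
```

Even if the assumed attacker cost is off by an order of magnitude, the ratio stays in the thousands, which is the asymmetry the table captures.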
