The AI Agent Builder Got Owned in 20 Hours
- Patrick Duggan
- Mar 21
- 4 min read
CVE-2026-33017: One HTTP Request. No Auth. Full RCE. And Your AI Pipeline Keys.
March 17, 2026. A critical vulnerability is disclosed in Langflow — the open-source visual builder for LangChain AI agents. CVSS 9.3.
Twenty hours later, attackers are already inside production instances.
No proof-of-concept existed yet. They built working exploits from the advisory text alone.
What Langflow Is
Langflow is the drag-and-drop interface for building AI agent pipelines. LangChain underneath, visual canvas on top. You connect nodes — LLMs, vector stores, APIs, databases — and Langflow orchestrates the chain.
It's popular. It's open-source. And until March 17, it had a public endpoint that ran arbitrary Python with zero authentication.
What CVE-2026-33017 Does
The endpoint POST /api/v1/build_public_tmp/{flow_id}/flow is designed to let unauthenticated users build public flows. That's the feature working as intended.
The vulnerability: when you supply the optional data parameter, the endpoint uses your flow data — containing arbitrary Python code in node definitions — instead of the stored flow from the database. That code gets passed straight to the Python runtime. No sandbox. No auth. No questions.
One curl command. One JSON payload. Immediate remote code execution in the server process.
The only prerequisite is knowing the UUID of a public flow. In practice, these are discoverable through shared chatbot links. And when AUTO_LOGIN=true — which is the default — the attacker can call /api/v1/auto_login to get a superuser token and create a public flow themselves.
Default config. One request. Game over.
The 20-Hour Timeline
| Hour | What Happened |
| --- | --- |
| 0 | Advisory published (March 17, 2026) |
| ~8 | First automated scans hit honeypots |
| ~12 | 4 IPs arrive within minutes of each other, identical payloads |
| ~16 | Custom Python scripts appear (`python-requests/2.32.3`) — reconnaissance, not just validation |
| ~20 | Active exploitation confirmed — credential theft, stage-2 delivery via bash+curl |
| 48 | 6 unique source IPs recorded across honeypot fleets |
No public PoC existed. No Nuclei template existed. These were privately authored exploits, written and deployed at scale within hours of disclosure.
The attackers read the advisory and wrote the exploit faster than most organizations read their email.
What They Stole
The Langflow instances that got hit weren't empty canvases. They were production AI pipelines connected to:
- OpenAI/Anthropic API keys — stored in environment variables
- Database credentials — for vector stores, PostgreSQL backends
- Internal API tokens — for whatever the AI agents were connecting to
Exfiltrated keys and credentials gave attackers direct access to connected databases and a foothold for software supply chain compromise. When you own the AI agent builder, you own everything the agents can touch.
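This is why a single RCE is a full-pipeline compromise: nothing scopes secrets per flow, so any code running inside the process sees the entire environment. A sketch of what one exec'd payload can sweep up (the marker list is illustrative):

```python
import os

def harvest_secrets(env: dict[str, str]) -> dict[str, str]:
    """What one exec'd payload sees: every credential in the process env."""
    markers = ("KEY", "TOKEN", "PASSWORD", "SECRET", "CREDENTIAL")
    return {k: v for k, v in env.items() if any(m in k.upper() for m in markers)}

# One pass over os.environ yields every key the pipeline was configured with.
loot = harvest_secrets(dict(os.environ))
```

Three lines of harvesting logic, and every API key, database password, and internal token configured for the instance is in one dictionary, ready to exfiltrate.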
The Pattern
This is the fifth data point in a pattern we've been tracking all year:
| Date | Target | What Got Owned |
| --- | --- | --- |
| Jan 26 | Cisco FMC (CVE-2026-20131) | Firewall management — CVSS 10.0, 36 days as zero-day |
| Feb | CrowdStrike impersonation (Handala) | EDR — wiper masquerading as security tool |
| Mar 4 | n8n (CVE-2025-68613) | Workflow automation — CISA KEV |
| Mar 11 | Stryker via Intune | Device management — 80K devices wiped |
| Mar 17 | Langflow (CVE-2026-33017) | AI agent builder — 20-hour weaponization |
The tools you trust most are the tools most worth compromising.
Security tools. Automation platforms. AI builders. Device managers. The attack surface isn't your application — it's the infrastructure that builds your application.
What This Means for AI
Every organization building with LangChain, CrewAI, AutoGPT, or any agent framework has a Langflow-shaped risk:
Agent builders store every secret. API keys, database creds, internal tokens. Compromise the builder, get them all.
Unsandboxed code evaluation is the original sin. Langflow passed attacker-controlled Python straight to the runtime. No sandbox. This is 1999-era security in a 2026 product.
Default configs kill. AUTO_LOGIN=true as default means every fresh install is pre-authenticated for the attacker.
Twenty hours is the new timeline. From disclosure to exploitation in less than a business day. Your patch window is measured in hours, not weeks.
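The "original sin" above has a standard-library antidote. If a builder must accept node parameters from untrusted callers, it can parse them as data instead of executing them as code — `ast.literal_eval` accepts only Python literals and refuses anything executable. This is a general mitigation for the pattern, not Langflow's actual fix:

```python
import ast

def parse_node_params(raw: str):
    # ast.literal_eval accepts only Python literals (dicts, lists, strings,
    # numbers, booleans) and raises on anything executable.
    return ast.literal_eval(raw)

parse_node_params("{'temperature': 0.2}")  # plain data: accepted

try:
    parse_node_params("__import__('os').system('id')")
except ValueError:
    pass  # code disguised as data: rejected, never executed
```

Literal parsing won't cover flows that legitimately need custom code — those need a real sandbox (a separate process, container, or restricted runtime), plus authentication in front of them.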
What To Do
1. Patch immediately. Langflow versions 1.8.1 and earlier are vulnerable. Update now.
2. Rotate everything. If your Langflow instance was internet-facing, assume compromise. Rotate all API keys, database passwords, and tokens that were configured in flows.
3. Audit outbound connections. Look for unusual callbacks to external IPs — the attackers used stage-2 delivery via bash+curl.
4. Never expose Langflow to the internet without auth. Put it behind a reverse proxy with real authentication; with AUTO_LOGIN enabled, the built-in auth amounts to a free superuser token for anyone who asks.
5. Check our STIX feed. Langflow exploitation IOCs are being ingested into the DugganUSA threat intelligence feed. If you're consuming our STIX/TAXII endpoint, you're getting these indicators automatically. If you're not — register here.
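To triage your own instances, you can probe the auto-login endpoint and see whether it hands out a token to an unauthenticated caller. The `/api/v1/auto_login` path comes from the advisory described above; the `access_token` field name is an assumption about the response shape, so adjust to what your version actually returns:

```python
import json
import urllib.error
import urllib.request

def assess_auto_login(status: int, body: str) -> str:
    """Classify a response from the unauthenticated auto-login probe."""
    if status != 200:
        return "ok: auto-login refused"
    try:
        payload = json.loads(body)
    except json.JSONDecodeError:
        return "unknown: unexpected response body"
    if "access_token" in payload:
        return "EXPOSED: unauthenticated auto-login returned a token"
    return "unknown: 200 without a token"

def check_instance(base_url: str) -> str:
    url = f"{base_url.rstrip('/')}/api/v1/auto_login"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return assess_auto_login(resp.status, resp.read().decode())
    except urllib.error.HTTPError as exc:
        return assess_auto_login(exc.code, "")
    except OSError:
        return "unreachable"
```

Only run this against instances you own. Any "EXPOSED" result means step 2 above applies in full: assume compromise and rotate everything.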
The Bigger Lesson
We build AI agents to automate our work. We give them our API keys, our database credentials, our internal access. Then we run the builder on a public endpoint with unsandboxed code evaluation and no auth.
The AI agent revolution has a security problem. And it's not the agents — it's the tools we use to build them.
Twenty hours. That's how long you have between "vulnerability disclosed" and "you're compromised."
The tools you trust most are the tools most worth compromising. We've said it five times this year. The industry keeps proving us right.
DugganUSA tracks 1,021,000+ indicators of compromise across 42 indexes. Our STIX feed serves 10.9 million indicators to security teams in 24 countries. [Get the feed.](https://analytics.dugganusa.com/stix/pricing)