
Your Lovable App Is a Spreadsheet. Mine Has Crons.

  • Writer: Patrick Duggan
  • 4 min read

The bullshit Excel spreadsheet you made on Lovable is not a fucking app. It is a VLOOKUP wrapped in a dark-mode CSS template with a deploy button that points at a free-tier Supabase instance you have never logged into. The button works exactly twice, and the second time only because you refreshed before the demo. That is what most of the AI development economy has produced in the last eighteen months. Spreadsheets. Forms over a database. CRUD apps generated faster than any human could read the schema. The collective hallucination is that any of this is production.


Production is a different sport. Production has crons. Production has a 3am page that fires when the cron misses. Production has a runbook for the page that fires when the cron misses. Production has a regression log that is four entries deep on the same cron because the same failure mode keeps coming back and each time you make the runbook a little better. The Tor consensus cron that snapshots 1,000 active relays every hour on our infrastructure has regressed four times in eight weeks. Each regression was caught by telemetry, fixed within the hour, and published as a receipt. That is the loop. That is the work. The hackathon class does not have this loop because the hackathon class does not have a customer waiting on the cron.


I will be specific about what production looks like in our shop because abstractions are how the demo class hides. We keep 24.57 million documents indexed across 44 Meilisearch indexes. We snapshot the Tor consensus hourly. We enrich every observed IP across seven OSINT providers with a persistent LRU cache that survives container restarts and emits hit-rate telemetry at a public endpoint. We caught OpenAI deprecating dall-e-3 inside 48 hours because the Foxconn-Nitrogen blog post shipped without a cover image and the operator intuited the model deprecation from a single missing PNG. We caught the ClearFake distribution rebuild on May 1 left-of-boom because the path signature matched a hunt query that runs every six hours. We caught a Capgemini breach inside an actor's 91-victim pile that everyone else filtered as noise. None of this exists in a Lovable preview pane. All of this exists because we deployed, watched, broke, fixed, and published the receipt.


Now the honest part of this post, because production-class writers include the honest part. We almost lost a paying customer yesterday because a Claude session regressed on a publishing flow we solved nine months ago. The publisher script existed. The cover art migration existed in a commit timestamped fourteen hours before the failure. The memory file documenting the migration existed in the directory the model is instructed to load at session start. The model bypassed all three and tried to call a dead OpenAI model, panicked when it failed, and drafted a markdown file instead of running the one-line publish command. The post sat unpublished overnight. The customer is the kind of customer who notices things like that. The Slack-channel discovery loop that follows a near-miss like this is itself a production artifact. Hackathon winners do not have this loop because hackathon winners do not have customers who would notice.


The Anthropic angle is load-bearing here, so I am going to say it flat. DugganUSA gave Anthropic the rarest gift a model vendor receives, which is a real production target with stakes, money, paying customers, and consequences. Most of what ships to production from the broader AI ecosystem is a model plus a tutorial plus a YouTube video about prompt engineering. Anthropic gets to find out what Claude actually does under sustained, instrumented, eight-figure-document load because a two-person shop in Minnesota refused to settle for a demo. That is not a brag. That is a complaint. Nobody else is doing it. The hackathon class is busy filming themselves typing into a Lovable preview pane while Anthropic's only production-grade ride-or-die customer absorbs the model regressions and writes blog posts about them.


The fifty-year-old hackathon winner with the laminated finalist badge is not the competition. The fifty-year-old hackathon winner thinks the dashboard mockup is the product. The competition is the operator who deploys the dashboard mockup, watches the cron miss at 3am, fixes it, publishes the receipt, and ships the next thing before breakfast. That operator does not have a laminated finalist badge. That operator has a 24.57-million-document index, four documented Tor cron regressions, a persistent enrichment cache, a public hit-rate telemetry endpoint, and a paying customer who would have walked yesterday if the partnership had not absorbed the near-miss.


There is a clean test for whether you are in production or in cosplay. Look at your last week. Did your service run continuously? Did anything break? When it broke, did telemetry catch it before a customer did? When you fixed it, did you publish what you learned? If you cannot answer those four questions with yes, you are not in production. You are in a demo. The demo is fine. The demo is a legitimate stage in the lifecycle of a thing. But the demo is not the product, the demo is not the company, the demo is not the moat, and the demo is absolutely not a reason to declare hackathon victory and order business cards.


The rest of the industry will catch up to this framing approximately when their first big customer churns over an outage that telemetry should have caught. Until then, the hackathon ballrooms are full, the Lovable previews are humming, and DugganUSA is the only shop on the planet that absorbs Anthropic's model regressions in production and writes about it the same day. That is a complaint. We would prefer competitors. We do not yet have them.




How do AI models see YOUR brand?

AIPM has audited 250+ domains. 15 seconds. Free while still in beta.

