Overnight Code Generation and the Limits of AI Engineering
I woke up to production-grade code that I didn’t write.
What I _did_ do was spend 20 minutes crafting a specification prompt last night: the API endpoints, the data models, the edge cases, the deployment constraints. This morning, 847 lines of TypeScript and infrastructure as code were waiting in my repository.
The quality was… astonishing. Not perfect, but deployable. The AI had written:
- Properly typed TypeScript interfaces
- Comprehensive error handling with 14 specific cases
- Complete test suite with 92% coverage
- Docker configuration with health checks
- CI/CD pipeline for GitHub Actions
- Monitoring and alerting hooks
But here’s the thing that made me pause while sipping my coffee: I spent the first two hours not running the code, but _interrogating_ it. Checking assumptions. Validating the distributed systems model. Tracing the failure modes.
This is what Naval Ravikant meant when he said software engineers are now “among the most leveraged people alive.” The work has rotated 90 degrees. Implementation—the thing we used to charge clients for—is now the easy part. The new bottleneck is specification: knowing exactly what you want, why you want it, and how it might fail.
The New Meta-Engineering
Traditional engineering looks like:
- Plan (20% of time)
- Implement (60% of time)
- Debug (20% of time)
AI-assisted engineering looks like:
- Specify (60% of time)
- Generate (5% of time)
- Validate (35% of time)
The total time shrinks dramatically, but the _value_ has shifted entirely upstream. I can’t just say “build me a microservice that handles payments.” I need to specify how it handles retries, idempotency keys, rate limiting, fraud detection, PCI compliance, rollback strategies, and monitoring thresholds across three different environments.
Every prompt is now a miniature systems design interview with myself.
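Take one item from that list. An idempotency key is the kind of requirement that must live in the specification, because nothing about "handle payments" implies it. A minimal sketch of what it buys you, with an in-memory `Map` standing in for a durable store like Redis or a database table (all names here are illustrative, not from the generated code):

```typescript
// Hypothetical sketch: idempotency-key handling for a payment endpoint.
// The Map stands in for a durable store; a real system would persist keys
// with a TTL and scope them per customer.
type PaymentResult = { chargeId: string; amountCents: number };

const processed = new Map<string, PaymentResult>();

function chargeOnce(idempotencyKey: string, amountCents: number): PaymentResult {
  // A client retrying after a timeout replays the same key and gets the
  // original result back, instead of charging the card twice.
  const prior = processed.get(idempotencyKey);
  if (prior) return prior;

  const result: PaymentResult = {
    chargeId: `ch_${processed.size + 1}`, // stand-in for the gateway call
    amountCents,
  };
  processed.set(idempotencyKey, result);
  return result;
}

const first = chargeOnce("order-42", 1999);
const retry = chargeOnce("order-42", 1999); // network timeout, client retries
// first.chargeId === retry.chargeId: one charge, two requests
```

If the spec doesn't say this, the AI has no reason to write it. That is the whole game.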
Where It Still Breaks Down
The AI confidently generated what looked like a sophisticated caching strategy. Twelve hours later, it failed spectacularly under real load because it used a naive TTL approach that broke down when cache invalidation raced with database commits—something that’s obvious to an engineer who’s spent time with distributed systems.
This is the gap: the AI understands patterns but doesn’t understand _failure_. It knows how to build, but doesn’t know how to build _for when things break_.
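Here is the shape of that race, sketched in miniature (this is illustrative code, not what the AI generated): a reader fetches from the database just before a commit, and writes its now-stale value back to the cache just after the invalidation. The TTL then pins the stale value. Versioning each row lets the cache reject the late write-back:

```typescript
// Illustrative sketch of the race a naive TTL cache invites, and the
// version check that closes it.
type Row = { value: string; version: number };

const db = new Map<string, Row>();
const cache = new Map<string, Row>();

function commit(key: string, value: string): void {
  const version = (db.get(key)?.version ?? 0) + 1;
  db.set(key, { value, version });
  cache.delete(key); // invalidate after commit
}

// Guarded write-back: discard the value if a newer version has already
// been committed. This is the check the naive TTL strategy was missing.
function populate(key: string, row: Row): void {
  const committed = db.get(key);
  if (committed && committed.version > row.version) return; // stale, drop it
  cache.set(key, row);
}

// Simulate the interleaving: read, concurrent commit, late write-back.
commit("user:1", "old-email");
const snapshot = db.get("user:1")!;  // reader's DB fetch
commit("user:1", "new-email");       // concurrent write + invalidation
populate("user:1", snapshot);        // late write-back arrives...
// cache.has("user:1") is false: the stale value was never cached
```

Without the version check, `populate` would happily cache `"old-email"` with a fresh TTL, and every reader would see it until expiry.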
The Specification Economy
The emerging skill isn't coding; it's knowing what a good specification _looks like_. This means:
- Understanding failure modes before they happen
- Knowing which trade-offs matter for your context
- Having mental models for system boundaries
- Recognizing when AI-generated code is subtly wrong
It’s like the difference between asking someone to “cook dinner” versus asking them to “make a meal that accommodates Sarah’s nut allergy, uses up the chicken that’s about to expire, and can be ready by 7 PM when the guests arrive.”
Both requests get food on the table. Only one gets the _right_ food on the table.
The Productivity Revolution Nobody Talks About
Here’s what’s actually happening: I’m shipping 5-10x faster, but I’m also _thinking_ 5-10x harder. The cognitive load hasn’t decreased—it’s shifted upstream. Each commit is a 2000-line diff that took 20 minutes to generate and 3 hours to validate.
The tools have changed, but the essence remains: knowing which problem to solve, and solving it well.
Tomorrow night, when I sit down to plan my next feature, I'll write it the same way: as specifications that become code while I sleep. Last night, the actual coding happened between 2 AM and 7:48 AM. I was only there for the beginning and the end.
Try This Yourself
Want to experience this shift? Start with something simple:
- Pick a boring but real problem (like a TODO API)
- Write a 200-word specification covering:
  - Traffic patterns
  - Failure recovery
  - Data consistency requirements
  - Deployment constraints
- Generate the code
- Spend 30 minutes validating it
- Deploy it
- Run a load test
- Watch what breaks
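For step three, the generated result might center on something like this: the data model and operations of a TODO API, minus the HTTP plumbing. This is a sketch under my own assumptions, not a reference implementation; your spec, not this code, decides the validation rules and failure behavior:

```typescript
// Core of a minimal TODO API: in-memory, single-process. The spec should
// say whether that's acceptable (it usually isn't past a load test).
type Todo = { id: number; title: string; done: boolean };

class TodoStore {
  private todos = new Map<number, Todo>();
  private nextId = 1;

  add(title: string): Todo {
    if (title.trim().length === 0) {
      throw new Error("title must be non-empty"); // an edge case the spec should name
    }
    const todo: Todo = { id: this.nextId++, title, done: false };
    this.todos.set(todo.id, todo);
    return todo;
  }

  complete(id: number): Todo {
    const todo = this.todos.get(id);
    if (!todo) throw new Error(`no todo with id ${id}`); // a failure mode, not a crash
    todo.done = true;
    return todo;
  }

  list(): Todo[] {
    return [...this.todos.values()];
  }
}

const store = new TodoStore();
const t = store.add("write tomorrow's specification");
store.complete(t.id);
```

The interesting part of the exercise is what this sketch _doesn't_ answer: what happens when two writers race on the same TODO, where the data lives when the process restarts, what the 99th-percentile latency budget is. Those answers belong in your 200 words.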
You’ll learn more about systems thinking in that hour than in a week of traditional coding.
The AI doesn’t replace engineers. It rotates the dimension along which leverage is applied.
The old superpower was writing code faster than anyone else. The new superpower is asking better questions than everyone else.
Same job. Higher leverage. Different axis.
Posted from a London flat overlooking the Thames, where the humans haven’t learned to sleep but the code has.