In 2013, I wrote about trashing servers and burning code. The argument was simple: systems that mutate while running accumulate state, history, and uncertainty in ways humans can't reason about. When something breaks, nobody knows which change caused it or what the system actually is anymore.
So we stopped patching servers and started replacing them. We built machines that could burn down and rise again, identical in behavior, without human intervention. The server wasn't the thing. The capability to regenerate was the thing.
That was an infrastructure principle, but it has always felt true to me for software. At the time I was CTO of the company behind the popular Wunderlist productivity tool, and I came up with a simple set of rules for choosing the technologies we deployed at work:
anyone can decide to use any new language or framework they want, but
it must work with our build system,
it must work with our deployment system,
they must find at least one other person on the team to work on it with them and support it if necessary, and
(this is the most important part) the code has to be no more than "this big", which I'd say while holding up my hand with my fingers spread apart a few inches.
That last part constrained the code so that the worst thing that could happen with a new language or technology was that it crashed, nobody on call could fix it, and we had to rewrite and replace it, which at that size was trivial. And we did that sometimes.
Code could even be treated like cells. In our bodies, biological material is dying and being replaced all the time, yet the system (body, brain, mind) remains.
So today, if code can be regenerated cheaply, perhaps upgrading code in place is the antipattern.
Infrastructure Figured This Out First
Immutable infrastructure wasn't adopted because it was elegant. It was adopted because mutable systems failed in ways that were hard to diagnose, hard to reproduce, and hard to roll back. Snowflake servers. Configuration drift. Hand-applied fixes. Tribal knowledge baked into machines nobody could recreate.
Replacing machines instead of fixing them solved this not by making systems smarter, but by making them simpler to reason about. Each deployment was a clean slate. Each artifact was knowable.
The key insight was almost more economic than technical: mutation accumulates hidden cost faster than replacement does.
That insight now holds for application code as well.
Editing Code Is Mutation
When you edit code in place, you're doing the software equivalent of SSHing into a production server and tweaking a config file.
You're assuming you understand the full state of the system. You're assuming the change is local, that history doesn't matter, that side effects are predictable.
Those assumptions were always shaky. They're becoming untenable. As code is generated more rapidly, whether by humans, AI, or both, the mutation rate increases while the understanding rate stays flat or declines.
Every in-place edit is a drift event. AI just makes this visible by compressing the timeline.
Mutable Code Accumulates Entropy
In-place modification has a hidden cost profile. Incremental edits entangle intent with the sequence of changes that produced them. Code gets layered atop code (this is why developers often prefer to use git rebase instead of git merge). Local fixes obscure global behavior. Understanding requires replaying the evolution of the codebase in your head — archaeology instead of engineering.
This is exactly how legacy systems are born. Not through age, but through mutation. A system becomes legacy when understanding it requires historical knowledge that isn't encoded anywhere except the code itself.
The tragedy is that teams recreate this failure mode faster with AI, because mutation feels cheap while understanding quietly becomes expensive. You can generate a thousand lines in seconds. But the moment you start editing those lines, you've created an artifact that can only be understood historically. You've created brittle legacy code in an afternoon.
Replacing code avoids this entirely.
The Phoenix Principle
What made immutable infrastructure work wasn't really about servers. It was about a property: the ability to burn something down and have it rise again, identical in behavior, without human intervention or institutional memory.
That property—call it the phoenix principle—is what makes systems understandable at scale. Not documentation. Not code comments. Not the engineer who remembers why that conditional exists. The ability to regenerate from specification.
Applied to code, this means: if you can't regenerate a component from its specification and evaluation criteria, that component is not well-defined enough to exist.
That's not cruelty. That's feedback. The fire tells you what you actually knew versus what you only thought you knew.
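To make that concrete, here is a minimal sketch in Python of what "specification and evaluation criteria" can look like in practice. The module and function names (my_component, slugify) are hypothetical; the point is that the spec is a set of checks that any regenerated implementation must pass before it is allowed to exist.

```python
# evaluation.py: a sketch of "specification as evaluation criteria".
# The names are hypothetical. The spec lives here, outside the
# implementation, so a freshly regenerated version of the component
# either passes these checks or doesn't ship.

from my_component import slugify  # the transient, regenerable implementation


def test_lowercases_and_hyphenates():
    assert slugify("Immutable Code") == "immutable-code"


def test_drops_characters_outside_the_contract():
    assert slugify("Burn it! Regenerate it!") == "burn-it-regenerate-it"


def test_is_idempotent():
    once = slugify("The Phoenix Principle")
    assert slugify(once) == once
```

If a rewrite can't be made to pass these, that's the feedback: the definition was incomplete, not just the implementation.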
Replace-over-modify systems behave differently. Each regeneration is explicit. Each deployment is intentional. Rollback is trivial. Drift cannot accumulate. The system burns and is reborn, but its identity persists because its behavior is externally defined.
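What that looks like in practice is a loop, not an edit. A rough sketch follows; every function name is a stand-in, not a real tool. Generation might be an AI call or a template, evaluation might be a test suite, release might be a blue/green swap.

```python
# regenerate.py: a sketch of an explicit regenerate/evaluate/release cycle.
# The shape is the point: the running artifact is never edited in place;
# it is only ever replaced by a candidate that passed the evaluation.


def generate_component(spec: str) -> str:
    """Produce a fresh implementation from the specification (stub)."""
    return f"artifact generated from: {spec}"


def run_evaluations(candidate: str) -> bool:
    """Check the candidate against the externally defined behavior (stub)."""
    return candidate.startswith("artifact")


def release(candidate: str) -> None:
    """Swap the candidate in; the old artifact is discarded, not patched (stub)."""
    print(f"released: {candidate}")


def regenerate_and_release(spec: str, current: str) -> str:
    candidate = generate_component(spec)
    if run_evaluations(candidate):
        release(candidate)
        return candidate
    # Rollback is trivial: the previous artifact was never mutated,
    # so keeping it is all a rollback means.
    return current
```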
Why This Works Now
Historically, we avoided full replacement because writing code was expensive, coordination was slow, re-testing everything was painful, and human review was the bottleneck.
AI changes the cost of generation. Testing is automated. Coordination happens through interfaces.
But the deeper shift is this: comprehension became the bottleneck.
The entire history of software engineering has been about making code easier to understand. Style guides, design patterns, clean code, self-documenting functions — all of it assumed that humans would read and reason about implementations. We optimized for readability because reading was mandatory.
Immutable code sidesteps that problem. If a component can be regenerated from spec, understanding its implementation is optional. You need to understand the contract, the interface, the expected behavior. You don't need to understand how it achieves that behavior, because the "how" is transient.
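Here is a sketch of what "understand the contract, not the implementation" can look like, using Python's typing.Protocol. The RateLimiter name and its method are invented for illustration.

```python
# contract.py: a sketch of the stable layer. The interface below is what
# callers depend on and what persists across regenerations; whatever class
# implements it is transient. RateLimiter and allow() are hypothetical names.

from typing import Protocol


class RateLimiter(Protocol):
    """The contract: the only thing callers need to understand."""

    def allow(self, key: str) -> bool:
        """Return True if the request identified by `key` may proceed."""
        ...


def handle_request(key: str, limiter: RateLimiter) -> str:
    # This code only knows the contract. Whether the limiter behind it is a
    # token bucket, a sliding window, or last week's regeneration of either
    # is invisible here, and that's the point.
    return "ok" if limiter.allow(key) else "throttled"
```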
The expensive thing left is defining what you want. Comprehension of implementations becomes a debugging activity, not a maintenance activity.
What Survives Replacement
If code is immutable, something else must carry continuity.
That something is: interfaces, contracts, evaluations, monitoring, and data. These are the stable layers. Code is a transient expression of them.
This mirrors infrastructure perfectly. AMIs mattered less than APIs. Containers mattered less than contracts. Servers mattered less than services.
The thing you cared about was never the machine. It was what the machine did and how you could verify it was doing it correctly.
Software is catching up to the same realization. The code is not the asset. The specification and the evaluation are the asset. Code is just the current rendering.
Objections
"This is wasteful." Mutation is wasteful. It just hides the cost in future debugging, onboarding, and incident response. Replacement is explicit cost with bounded risk.
"We'll lose optimizations." If an optimization matters, encode it as a constraint or invariant. If you can't express it formally, it probably wasn't real value — it was accident.
"What about institutional knowledge?" This is the real anxiety. The code embodies decisions nobody wrote down. But that's precisely the problem immutable code solves. If knowledge only exists in the implementation, it's not knowledge. It's risk. Regeneration forces you to make the implicit explicit, or accept that it wasn't essential.
"This won't work for large systems." Large systems already replace infrastructure constantly. Code is next. The hard part is decomposition, not replacement.
"This breaks developer intuition." So did containers. So did CI. So did version control. So did every advance that traded local convenience for systemic clarity.
The Rule, Updated
The old rule was: never upgrade infrastructure in place.
The new rule is: never upgrade code in place if you can regenerate it instead.
Just like SSHing into a server and tweaking something in production is still possible but clearly undesirable, editing code is now a last resort, a sign that regeneration failed, that your specification was incomplete, that your evaluations weren't sufficient. It's a debugging activity, not a development activity.
The Payoff
Immutable code yields predictable deployments, lower cognitive load, cleaner rollback, easier audits, faster evolution, and smaller blast radius.
But the real payoff is psychological. You stop being afraid of change. You stop tiptoeing around legacy decisions. You stop asking "what will this break?" and start asking "does this pass the evaluation?"
The code becomes a renewable resource instead of a fragile artifact.
Infrastructure taught us that mutability was the enemy of understanding.
AI teaches us the same lesson again — higher up the stack.
If you're still editing AI-generated code in place, you're reliving the worst era of configuration drift, just faster. You're creating legacy systems in days instead of years.
Burn it. Regenerate it. Trust what survives the fire.