Software is entering a strange new phase.
For most of its history, code was expensive to produce and cheap to keep. We treated it like a durable asset: written carefully, maintained lovingly, upgraded cautiously. Whole professions, identities, and institutions grew around this assumption. Programmers were craftsmen. Codebases were cities. Refactoring was urban renewal.
Generative AI breaks this assumption at the root.
Code is no longer scarce. It is abundant, fast, and increasingly disposable. The limiting factor is no longer writing software, but understanding, evaluating, and governing it. The economics have inverted. The psychology hasn’t caught up.
This publication exists to explore what comes next.
Durability Through Disposability
The central paradox I want to explore is simple:
The most durable systems of the AI era will be built from code that is meant to die.
When code is cheap to generate, preserving it at all costs becomes irrational. Yet we still need systems that are reliable, secure, comprehensible, and long-lived. The solution is not to fight ephemerality but to design around it.
This means shifting what we treat as permanent.
Not implementations, but interfaces
Not code, but behavior
Not files, but evaluations
Not ownership, but stewardship
In other words: the system is the asset. Code is just a consumable input.
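To make that concrete, here is a minimal sketch in Python. Every name in it is hypothetical (the SpellChecker interface, the check_behavior evaluation, the sample cases); the point is only which artifacts are treated as durable.

```python
from typing import Protocol


class SpellChecker(Protocol):
    """The durable boundary: any implementation, human- or AI-written, must fit it."""

    def suggest(self, word: str) -> list[str]:
        """Return candidate corrections, best first."""
        ...


def check_behavior(impl: SpellChecker) -> bool:
    """The durable definition of correctness, independent of any particular code.

    Implementations come and go; this evaluation is what the team actually keeps.
    """
    cases = {
        "teh": "the",
        "recieve": "receive",
    }
    return all(
        expected in impl.suggest(misspelled)
        for misspelled, expected in cases.items()
    )
```

What gets versioned, reviewed, and preserved here is the interface and the evaluation. Whatever sits behind them can be regenerated at will.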
From Maintenance to Regeneration
Most software practices today are optimization strategies for a world where code is expensive to write and dangerous to replace. We edit files in place. We fear rewrites. We celebrate the longevity of codebases as a proxy for quality.
AI changes the cost curve so dramatically that these habits start to look like technical debt generators.
Instead of maintaining code, we can regenerate it.
Instead of upgrading in place, we can replace.
Instead of debugging line by line, we can select between competing implementations.
Instead of trusting authorship, we can trust evaluation.
This is not a call for recklessness. It’s a call for discipline—a different kind than we’re used to.
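As an illustration of selecting between competing implementations, here is a hedged sketch. The candidates list, the suggest method, and the pass threshold are assumptions, not a prescribed workflow; the point is that measured behavior, not authorship, decides what ships.

```python
def evaluate(impl) -> float:
    """Score a candidate implementation against the behavioral suite.

    1.0 means every case passes; the cases here are placeholders.
    """
    cases = {"teh": "the", "recieve": "receive", "adress": "address"}
    passed = sum(
        expected in impl.suggest(misspelled)
        for misspelled, expected in cases.items()
    )
    return passed / len(cases)


def select_implementation(candidates):
    """Keep whichever generated candidate best satisfies the evaluation; discard the rest."""
    best = max(candidates, key=evaluate)
    if evaluate(best) < 1.0:
        # No candidate clears the bar: regenerate rather than hand-patch a failure.
        raise RuntimeError("no candidate meets the behavioral bar")
    return best
```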
The Phoenix Architecture
The metaphor I keep returning to is the phoenix: systems designed to burn and be reborn, continuously, without losing their identity.
A regenerative system has a few defining traits:
Clear, durable boundaries that outlive any implementation
Tests and evaluations that define correctness independently of code
Automation that assumes replacement is normal, not exceptional
Explicit acceptance that code will rot, drift, or become incomprehensible
Cultural comfort with deletion, rewriting, and starting over
In such systems, failure is localized, recovery is fast, and improvement emerges through iteration rather than preservation.
The goal is not immortality of code.
The goal is immortality of intent.
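Mechanically, that intent can live in something as small as a regeneration loop. A minimal sketch, assuming a hypothetical generate_implementation function standing in for whatever model or tool produces code, and an evaluate function like the one sketched above:

```python
import logging

log = logging.getLogger("phoenix")


def regenerate_until_green(spec, evaluate, generate_implementation, max_attempts=5):
    """The phoenix cycle: discard failing implementations and regenerate from the spec.

    Nothing is patched in place. The durable inputs are the spec and the evaluation;
    every generated attempt is disposable.
    """
    for attempt in range(1, max_attempts + 1):
        impl = generate_implementation(spec)   # disposable output
        score = evaluate(impl)                 # durable definition of "correct"
        log.info("attempt %d scored %.2f", attempt, score)
        if score >= 1.0:
            return impl                        # reborn; identity (the spec) intact
    raise RuntimeError("spec or evaluations need attention, not the last failed diff")
```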
Why This Matters Now
Right now, many teams are using generative AI as a productivity multiplier inside old mental models. Faster coding. Bigger diffs. More surface area. Same assumptions.
That works—briefly.
But it also accelerates entropy. Larger codebases. Lower comprehension. More fragile systems. Higher cognitive load per developer. The very speed that AI enables becomes a liability.
We need architectures, workflows, and cultures that treat AI not as a faster typist, but as a fundamentally different substrate for building systems.
That requires rethinking:
What “quality” means when code can be rewritten tomorrow
How we test systems whose implementations are transient
How we organize teams when one person can ship what used to require many
How we manage cost when context windows and tokens replace headcount
How we remain accountable when authorship dissolves
These are not tooling questions. They are architectural and philosophical ones.
What This Blog Will Explore
This will be a slow, cumulative exploration—one post at a time.
Topics I plan to cover include:
Pace layers and why different parts of systems should regenerate at different rates
The idea of n=1 development and what it means for teams and organizations
Evaluations as the true source code
Why rewriting beats refactoring in the AI era
How to design interfaces that survive constant replacement
The economics of code as cost, not capital
Cultural shifts required to celebrate deletion instead of preservation
Patterns for building systems that expect to be rewritten
Some posts will be conceptual. Some will be practical. Some will be uncomfortable.
That’s intentional.
An Invitation
This is not about hype, nor about dismissing decades of hard-won engineering wisdom. Many of the ideas here build directly on that wisdom—immutable infrastructure, test-first design, separation of concerns, automation, evolutionary architectures.
What’s changing is the environment.
When the substrate shifts, the architecture must follow.
If you’re building software with AI—or expect to be—you’re already living inside this transition, whether you’ve named it or not. My goal here is to help articulate a vocabulary, a set of patterns, and a philosophy that make the transition legible.
Software that lasts will not be frozen in amber.
It will be continuously reborn.
Welcome to Regenerative Software.