The industry is still trying to generate applications.
A React app. A Django service. A Rails API. A FastAPI backend.
That instinct made sense when writing software was the expensive part. But in a world where code can be generated quickly and cheaply, the real constraint has shifted. The problem is no longer producing code. The problem is replacing it safely.
Regenerative software does not work if the unit of generation is an application. Regeneration only works if the unit of generation is a component that compiles into a system architecture.
Languages and frameworks matter, but they are implementation choices. Architecture is the thing that actually determines whether a system can evolve without breaking itself.
A System That Already Worked
Years ago at Wunderlist we ended up with a system structure that, in hindsight, explains why replacing pieces of the system was easier than we had any right to expect.
The architecture settled into several layers: clients across multiple platforms, a WebSocket actor server acting as a synchronization proxy, a REST domain layer containing business logic, a data service layer in which each service owned its datastore, and a mutation bus that propagated changes through the system.
The frameworks we used inside those services varied over time. But the structure of the system stayed consistent.
Each layer had clear responsibilities. Every datastore had a single owner. Mutations moved through predictable paths instead of appearing unpredictably across the codebase.
Once that structure existed, every new component had to fit within it.
Services could be rewritten. Implementations could change. But the structure of the system remained stable.
That stability is what made replacement safe.
Frameworks Hide the Real Problem
Frameworks are valuable because they provide structure. They define directory layouts, lifecycle hooks, routing conventions, and dependency patterns. Those conventions help teams move faster and reduce accidental complexity.
The problem is that frameworks provide just enough structure to make teams feel like they have architecture.
But frameworks do not answer the questions that actually determine whether a system can evolve safely: how components communicate, who owns data mutation, how behavior is verified, how parts of the system can be replaced, and where the natural boundaries between components should live.
Those are architectural decisions.
If they remain implicit, regeneration becomes dangerous. Two services may share a database table and both work perfectly in production. But neither of them can be replaced safely. The coupling is already there, invisibly; the moment one of them changes, it surfaces as breakage.
The same thing happens with ad-hoc HTTP endpoints. They often look clean at first glance. But they quietly embed assumptions about state, ordering, and behavior that the architecture captures nowhere.
Ad-hoc HTTP endpoints are not an architecture. They are simply networked function calls.
The Missing Compilation Target
Most development pipelines still follow the same conceptual structure:
spec → code → framework → deployable application
AI tools accelerate the middle step. They produce code faster. They scaffold projects more efficiently. But the overall structure remains unchanged.
Regenerative systems require a different model:
spec → architecture → regenerable components → implementations
In this model the architecture becomes the compilation target. Code becomes a build artifact produced to satisfy architectural constraints.
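To make this concrete, here is a minimal sketch of what treating architecture as a compilation target could look like in code. The architecture declares the contract every component must satisfy, and "linking" a regenerated component means checking it against that contract before it enters the system. All names here (`link_component`, `REQUIRED_METHODS`, `handle`, `owned_datasets`) are hypothetical, invented for illustration.

```python
# The architectural contract: every component must expose these capabilities.
# In a real system this would cover messaging, data ownership, and evaluation
# hooks; here it is deliberately tiny.
REQUIRED_METHODS = {"handle", "owned_datasets"}


def link_component(component):
    """Accept a component into the system only if it satisfies the
    architecture's rules. A regenerated implementation that violates the
    contract is rejected at 'link time', not discovered in production."""
    missing = {
        m for m in REQUIRED_METHODS
        if not callable(getattr(component, m, None))
    }
    if missing:
        raise TypeError(
            f"component violates architecture: missing {sorted(missing)}"
        )
    return component
```

Under this framing, any number of implementations can be generated and regenerated, as long as each one passes the same linking step.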
Instruction Sets for Systems
A useful analogy comes from the way compilers interact with hardware.
Modern programming languages rarely target a specific processor. Instead they compile to an instruction set architecture (ISA).
A C program can compile to ARM. So can Rust. So can Swift. The languages differ, but they all target the same architecture.
The ISA defines the rules that make execution possible: how functions are called, how memory is addressed, and how instructions interact with the machine.
Because those rules are stable, implementations can evolve while the underlying machine remains usable.
Regenerative software needs a similar layer.
A system architecture defines the rules every component must satisfy. Without those rules, components are just programs running next to each other. With them, components become replaceable parts of a larger system.
The Constraints That Make Replacement Safe
A regenerative architecture works because it imposes constraints that make replacement possible.
One of those constraints concerns how components communicate. Systems tend to settle around a small number of interaction models—RPC calls, events, actor messages, command buses, or mutation logs. The specific transport is less important than the consistency. When interaction patterns are predictable across the system, replacing a component does not require rediscovering hidden assumptions about how messages flow.
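As a sketch of what "one consistent interaction model" means in practice, here is a tiny in-process event bus. The envelope shape (a topic string plus a payload) is an assumption for illustration; the point is not this particular API but that every component in the system interacts through the same path, so replacing a publisher or a subscriber never requires rediscovering how messages flow.

```python
from collections import defaultdict


class Bus:
    """A minimal publish/subscribe bus: the single interaction
    pattern shared by every component in this sketch."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        """Register a handler for a topic."""
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        """Deliver a payload to every subscriber of the topic.
        All inter-component communication flows through here."""
        for handler in self._subscribers[topic]:
            handler(payload)
```

Any component wired to this bus can be deleted and regenerated, as long as its replacement publishes and subscribes to the same topics.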
Another constraint concerns data ownership. Regeneration requires clear authority over state. If multiple components mutate the same data directly, none of them can be replaced safely. A regenerative system therefore assigns exclusive mutation authority for each dataset to a single component. Other parts of the system interact with that data through well-defined interfaces. Variations of this rule appear repeatedly in resilient designs—from service-oriented architectures to event-sourced systems and data mesh. Data ownership ends up defining architectural seams far more reliably than file boundaries ever will.
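A sketch of exclusive mutation authority, using a hypothetical task service (the names `TaskService` and `apply` are illustrative, not drawn from any real Wunderlist code). The datastore is private; the only way any other component can change task state is through this one entry point, and changes propagate outward through a mutation log rather than through shared state.

```python
class TaskService:
    """Sole owner of the task datastore: every mutation passes through here."""

    def __init__(self):
        self._tasks = {}   # private datastore; no other component touches it
        self._log = []     # mutation log other components may observe

    def apply(self, mutation):
        """The single entry point that may change task state."""
        op, task_id, payload = mutation
        if op == "upsert":
            self._tasks[task_id] = payload
        elif op == "delete":
            self._tasks.pop(task_id, None)
        else:
            raise ValueError(f"unknown mutation: {op!r}")
        self._log.append(mutation)  # propagate the change, don't share state

    def get(self, task_id):
        """Reads also go through an interface, never the raw store."""
        return self._tasks.get(task_id)
```

Because no other component can reach `_tasks` directly, this service can be rewritten or regenerated without auditing the rest of the system for hidden writers.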
A third constraint determines the natural grain of components. When components grow too large, regeneration becomes risky because deleting or rewriting them introduces widespread uncertainty. But making components too small produces a different problem: coordination overhead. Context scatters across too many fragments, and network chatter begins to dominate behavior.
The right grain usually reveals itself through two forces. One is data ownership boundaries, which define where mutation authority lives. The other is evaluation surfaces—places where behavior can be verified independently. When a unit both owns its mutation authority and can be verified without booting the entire system, a natural seam appears. Architecture’s job is to stabilize those seams.
Finally, regeneration depends on the presence of clear evaluation surfaces. Behavior must be verifiable independently of implementation. Systems accomplish this through mechanisms such as API contract tests, domain invariants, event consistency rules, and property tests that assert system behavior. Without these surfaces, regenerated components cannot be trusted. Verification becomes guesswork.
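An evaluation surface can be as simple as a contract test written against behavior rather than implementation: the same assertions run against any candidate, so a regenerated component either passes or fails on identical terms. The interface below (`create`, `get`, `delete`) is hypothetical, chosen only to demonstrate the pattern.

```python
def check_task_service_contract(make_service):
    """Run the same behavioral assertions against any implementation.
    A regenerated service is trusted only if it passes this surface."""
    svc = make_service()

    # Invariant: a created task is readable back unchanged.
    svc.create(1, {"title": "a"})
    assert svc.get(1) == {"title": "a"}

    # Invariant: deletion is idempotent.
    svc.delete(1)
    svc.delete(1)
    assert svc.get(1) is None

    return True


class DictTaskService:
    """A reference implementation, present only to exercise the contract."""
    def __init__(self):
        self._d = {}
    def create(self, task_id, task):
        self._d[task_id] = task
    def get(self, task_id):
        return self._d.get(task_id)
    def delete(self, task_id):
        self._d.pop(task_id, None)
```

The contract lives in the architecture; implementations come and go underneath it.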
What Made the Wunderlist System Work
Looking back, the Wunderlist architecture satisfied all of these constraints.
The mutation bus established predictable calling conventions for how changes flowed through the system. Each data service held exclusive authority over its datastore, ensuring that mutation ownership was clear. Services were sized around domain responsibilities, which produced a workable grain for independent evolution. And contract and integration tests provided evaluation surfaces that verified behavior at the boundaries.
None of those properties came from frameworks.
They came from architectural constraints.
Once those constraints existed, implementations became flexible. The system could evolve without collapsing under its own complexity.
Architecture as Runtime
In a regenerative system, architecture begins to resemble a kind of runtime environment—not in the sense that it executes code directly, but in the sense that it actively constrains how components exist within the system.
The architecture defines how components communicate, how state may be mutated, where correctness is evaluated, and how components can be replaced. Components plug into this environment. Their implementations can change, but the structure that governs their interaction remains stable.
Just as many languages can target the same CPU architecture, many implementations can target the same system architecture.
That stability is what makes regeneration possible.
The Shift
For decades we learned an important lesson about machines.
Software does not compile directly to hardware. It compiles to an instruction set that makes hardware replaceable.
Regenerative systems need the same insight.
Software should not compile directly to frameworks.
It should compile to architecture.
Not applications.
Not templates.
Systems.