Thirty source files organized into layers. Controller, service, repository, data mapper. That's a typical microservice, and every one of those layers exists so a developer can navigate. You put the database logic in one place and the HTTP handling in another because a person needs to hold the shape of the system in their head. The file is a cognitive container. The module boundary reflects what a developer can reason about in a single sitting, not some deeper truth about the problem domain.
When code is regenerated rather than maintained, those constraints lose their force. A regenerative system doesn't need navigation. It needs a specification precise enough to produce correct behavior. Those thirty files might be regenerated from a single behavioral spec, structured completely differently each time. The file layout is an output, not an input.
This goes deeper than files. Branches, pull requests, code review: these are rituals organized around the assumption that a human authored each change and another human needs to verify it. They're valuable rituals. But they're contingent on a model of software production that's shifting underneath us.
The file stops being a primitive. The specification becomes one.
The architecture of a regenerative system is defined entirely by what you can't delete.
That's the claim this essay is built around. Every architecture has primitives: irreducible building blocks you compose everything else from. Ours were shaped by decades of humans writing code by hand. When implementation becomes disposable, a different set of primitives emerges. The interesting question isn't "what can AI generate?" but "what are the critical artifacts in a system designed to be continuously reborn?"
The Primitives
Start with something concrete. A payment processing module in a traditional system is defined by its source code, its test suite, and its git history. Delete those, and the module is gone.
In a regenerative architecture, the same module is defined by four different artifacts. Its behavioral specification: "charge a card, handle declines, emit events with these schemas." Its evaluations: runnable contracts that any implementation must pass. Its context boundary: the API contract and event schemas neighboring services depend on. Its provenance record: "this module was regenerated Tuesday because the fraud detection spec changed."
Delete the implementation, keep those four artifacts, and you can regenerate a working system. Delete any one of the four, and you can't.
That's what makes them primitives. Specification, evaluation, context boundary, and provenance are the minimum set from which everything else can be derived. Together, a specification and its evaluations define what I call a regenerative grain: the natural unit you can safely delete and recreate. A context boundary defines where one grain ends and another begins. Provenance gives you the audit trail that git log used to provide, but at the level of intent rather than diff.
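To make the four primitives concrete, here is a minimal sketch in Python. Every name in it (Specification, GeneratedPaymentImpl, the card-number convention) is hypothetical, invented for illustration, not drawn from any real framework:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Specification:
    """Behavioral spec: the generative input, not a description of code."""
    name: str
    version: str
    behaviors: tuple  # e.g. ("charge a card", "handle declines", ...)


@dataclass(frozen=True)
class ContextBoundary:
    """What neighboring services depend on; survives regeneration."""
    api_contract: str
    event_schemas: tuple


@dataclass(frozen=True)
class Provenance:
    """Why this implementation exists: causation, not sequence."""
    spec_version: str
    eval_suite_version: str
    trigger: str


@dataclass(frozen=True)
class ChargeResult:
    status: str


class GeneratedPaymentImpl:
    """Stand-in for a regenerated implementation (purely illustrative)."""

    def charge(self, amount_cents: int, card: str) -> ChargeResult:
        # A disposable interior: only the behavior below is contractual.
        return ChargeResult("declined" if card.endswith("0002") else "captured")


def evaluation_suite(impl) -> bool:
    """Runnable contract: any regenerated implementation must pass."""
    captured = impl.charge(amount_cents=1000, card="4242").status == "captured"
    declined = impl.charge(amount_cents=1000, card="0002").status == "declined"
    return captured and declined
```

Delete GeneratedPaymentImpl and the evaluation suite can validate whatever replaces it; delete the specification or the suite, and there is nothing left to regenerate against.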
Specifications Are Not Documentation
This is the hardest conceptual shift, and most teams get it wrong on the first try.
In a regenerative system, the specification isn't a description of code that exists. It's the source of truth from which code is derived. This inverts the traditional relationship where documentation describes implementation. Here, implementation expresses specification.
Think about an API spec today. It lives in a Swagger file that developers keep "in sync" with the real code. It drifts constantly. Everyone knows it drifts. Nobody fixes it until a customer complains. The spec is decorative: a polite fiction maintained for onboarding and external consumers.
In a regenerative system, that specification is the generative input. If the spec says the endpoint returns a 404 on missing resources, the regenerated code does that. Not because a developer remembered to, but because the evaluation derived from the spec enforces it. The spec can't drift from the implementation because the implementation is derived from the spec on every regeneration cycle. Contract-first development taken to its logical conclusion.
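That enforcement loop can be sketched in a few lines, assuming a hypothetical spec clause and handler (nothing here is a real API):

```python
from dataclasses import dataclass


@dataclass
class Response:
    status: int
    body: dict


# A single clause of the behavioral spec, as data rather than prose.
SPEC_CLAUSE = {"on_missing_resource": {"status": 404}}


def eval_missing_resource(handler) -> bool:
    """Evaluation derived from SPEC_CLAUSE: regenerated code must satisfy it."""
    resp = handler("/widgets/does-not-exist")
    return resp.status == SPEC_CLAUSE["on_missing_resource"]["status"]


def generated_handler(path: str) -> Response:
    """A generated implementation: it either passes the evaluation or is
    rejected and regenerated. Its internal structure is incidental."""
    store = {"/widgets/1": {"id": 1}}
    if path not in store:
        return Response(404, {"error": "not found"})
    return Response(200, store[path])
```

The point is the direction of derivation: the evaluation comes from the spec clause, so the implementation cannot quietly diverge from it.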
The tradeoff is real: writing good specifications is harder than writing good documentation. Documentation can be vague and still useful. A specification that's vague produces implementations that are wrong in unpredictable ways. The discipline required is closer to writing a contract than writing a README.
Context Boundaries Are the New Architecture
If interiors are disposable, boundaries are everything.
The most consequential design decisions in a regenerative system aren't about how code is structured internally. Internal structure is regenerated and therefore fluid. The decisions that matter are where you draw context boundaries: the integration contracts, event schemas, and shared data formats that multiple regeneration units depend on. These boundaries form what I think of as the conservation layer: the part of the system that resists change because changing it means coordinating across multiple independent regeneration cycles.
Two teams share a protobuf schema for inter-service communication. The internal implementation of each service can be regenerated freely. Different languages, different frameworks, different internal structure every time. But the schema itself is a conservation-layer artifact. Changing it requires coordinating both teams, both regeneration cycles, both evaluation suites. That schema is more architecturally significant than any line of code in either service, because it's the one thing that can't be regenerated in isolation.
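The same idea holds outside protobuf. Here is a hypothetical event schema expressed as plain Python data, with the compatibility rule that makes it a conservation-layer artifact (all names and field types are invented for illustration):

```python
# Conservation-layer artifact: a shared event schema. Either service's
# interior can be regenerated freely; this contract cannot change
# without coordinating both sides.
SCHEMA_V1 = {
    "event": "payment.captured",
    "fields": {"payment_id": "string", "amount_cents": "int"},
}


def is_backward_compatible(old: dict, new: dict) -> bool:
    """A new schema version may add fields, but never drop or retype
    a field that existing consumers depend on."""
    if new["event"] != old["event"]:
        return False
    return all(
        new["fields"].get(name) == ftype
        for name, ftype in old["fields"].items()
    )


SCHEMA_V2 = {
    "event": "payment.captured",
    "fields": {
        "payment_id": "string",
        "amount_cents": "int",
        "currency": "string",  # additive change: safe
    },
}
```

An additive change like SCHEMA_V2 lets each side regenerate on its own schedule; dropping or retyping a field forces both regeneration cycles to move in lockstep.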
This shifts what architectural skill means. It moves from "how do I structure this service internally" to "where do I draw the lines between things that regenerate independently." Get the boundaries wrong and you've coupled regeneration units that should be independent. Now changing one means regenerating three, and you've lost the whole point. Get them right and each unit can be reborn on its own schedule without coordination.
Boundary design requires more upfront thought than internal structure ever did. You can't refactor a context boundary the way you refactor a class. Every service on the other side of that boundary has baked your contract into its evaluations. The conservation layer resists change at the points where change is most expensive. The skill is knowing which boundaries to draw tight and which to leave loose, and that judgment still comes from experience, not from any specification language.
Provenance Replaces Narrative
Traditional version control tells a story: who changed what, when, and (if you're lucky) why. In a regenerative system, most of that story is meaningless. The code was generated, not authored line by line. A diff between two generated implementations tells you almost nothing about what actually changed in the system's behavior or intent.
But the need for traceability doesn't disappear. It shifts.
Provenance tracks which specification version produced which implementation, which evaluation suite validated it, and what triggered the regeneration. This is richer than a commit log because it captures causation, not sequence.
Picture an incident investigation. In a traditional system, you're reading git blame and commit messages, trying to reconstruct what a developer was thinking three weeks ago at 11 PM. In a regenerative system, provenance tells you: "This implementation was generated from specification v2.3, validated against evaluation suite v2.1. Mismatch. The eval suite hadn't been updated for the new spec. Triggered by a dependency update in the auth module." That's a causal chain. It points you at the root cause directly: the evaluation suite was stale, so a spec change slipped through without proper validation.
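A provenance record for that incident might look like the following sketch. The record shape, the registry, and the version numbers are hypothetical illustrations, not an existing tool:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ProvenanceRecord:
    implementation_id: str
    spec_version: str
    eval_suite_version: str
    trigger: str


# Hypothetical registry: which eval suite version covers which spec version.
EVAL_FOR_SPEC = {"v2.2": "v2.0", "v2.3": "v2.2"}


def eval_suite_is_stale(rec: ProvenanceRecord) -> bool:
    """The incident pattern from the text: an implementation validated by
    an eval suite that doesn't cover its spec version."""
    return EVAL_FOR_SPEC.get(rec.spec_version) != rec.eval_suite_version


rec = ProvenanceRecord(
    implementation_id="payments-7f3a",
    spec_version="v2.3",
    eval_suite_version="v2.1",  # behind the spec: the root cause
    trigger="dependency update in auth module",
)
```

A query over records like this surfaces the stale-eval mismatch directly, where a commit log would only show a sequence of generated diffs.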
The gap today is tooling. Domain-specific provenance systems exist: MLflow tracks model lineage, SLSA addresses build provenance in supply chains. But nobody has built the general-purpose provenance layer for regenerative software. Teams building these systems now are stitching together metadata stores, generation logs, and spec version tags. It works. It's also baling wire.
We spent decades refining primitives that optimize for human authorship: readability, navigability, reviewability. Those aren't wrong. They're contingent on a world where humans write and maintain every line. When implementation becomes disposable, the primitives that matter are the ones that survive deletion: the specification, the evaluation, the boundary, the provenance record.
The developers who master this won't be the ones who generate code fastest. They'll be the ones who draw the best boundaries.