Not all software should change at the same speed.

This has always been true, but it's easy to forget when tools make change frictionless. Generative AI dramatically lowers the cost of modification, which creates a dangerous illusion: that everything can change quickly, therefore everything should.

That's how systems accumulate the kind of damage that only becomes visible in production, at 2am, when the person who understood the original design left two years ago.

To build durable software in the AI era, we need a way to reason about where change belongs and where it doesn't. Pace layers give us that lens.

Pace Layers, Briefly

The idea comes from Stewart Brand's work on long-lived systems. In any complex system—cities, organizations, civilizations—different layers evolve at different rates:

  • Fast layers experiment

  • Slow layers stabilize

  • Tension between them is healthy

  • Confusing them is destructive

Software systems are no different. We just forgot this fact during decades of abstraction and refactoring.

AI is reminding us, sometimes painfully.

Where AI Thrives

Generative AI excels in environments with three properties:

  • High change frequency — the layer already expects regular modification

  • Low blast radius — failures are contained and recoverable

  • Verifiable outcomes — you can tell whether the output is correct

That third property deserves attention. "Verifiable" doesn't mean trivial to evaluate—it means the feedback loop closes. A UI component either renders correctly or it doesn't. A data transformation either produces the expected output or it doesn't. The verification might require tests, visual inspection, or user feedback, but there's a path to knowing.
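To make "the feedback loop closes" concrete, here's a minimal sketch of a verifiable transformation. The function and its expected outputs are hypothetical, but the point stands: either the assertion passes or it doesn't, and either way you know.

```python
def normalize_prices(rows):
    """Convert price strings like '$1,299.00' to integer cents."""
    return [int(round(float(r.replace("$", "").replace(",", "")) * 100))
            for r in rows]

# The feedback loop closes: the output matches or it doesn't.
assert normalize_prices(["$1,299.00", "$0.99"]) == [129900, 99]
```

A regenerated version of `normalize_prices` is trivially safe to accept or reject, because verification is this cheap.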

These properties tend to cluster at the top of software systems:

  • UI components

  • Presentation logic

  • Content generation

  • Workflow glue

  • One-off integrations

These layers benefit from rapid regeneration. Daily rewrites are not only acceptable—they're often desirable. Fresh code adapts faster to shifting requirements, libraries, and user expectations.

Here, disposability is a feature.

Trying to "harden" these layers prematurely wastes effort and slows learning. AI should move fast where the cost of being wrong is low and the cost of being slow is high.

Where AI Struggles

At the bottom of systems, the rules change.

  • Infrastructure

  • Protocols

  • Data models

  • Security boundaries

  • Governance logic

These layers change slowly because mistakes are expensive and recovery is hard. The feedback loops are longer—sometimes months or years before a design flaw surfaces. Verification is difficult because correctness often depends on properties that only emerge under load, over time, or at the edges of the input space.

AI can help here, but only under strict constraints: human review, formal verification, extensive property testing, staged rollouts.

Blind regeneration at deep layers is reckless. The failure modes are subtle, compounding, and often invisible until too late.

The mistake many teams make is applying AI uniformly—letting fast-layer tools leak into slow-layer responsibilities.

That's not acceleration. It's erosion.

The Hard Problem: Finding the Layers

Here's what the clean diagrams don't show: figuring out which layer something belongs in is where most of the intellectual work happens.

Your authentication system—is it infrastructure or application logic? Your feature flag service—fast layer or slow? The ML model that powers recommendations—how often should it regenerate, and what happens when the new version behaves differently from the old?

There's no universal answer. Layer placement depends on your specific system's failure modes, your team's capacity for review, and your users' tolerance for inconsistency.

A few heuristics help:

  • Follow the blast radius. If changing this component could break things you don't own, it's slower than you think.

  • Follow the recovery time. If fixing a mistake takes days instead of minutes, the layer is deeper than it appears.

  • Follow the dependencies. If many things depend on it and it depends on few things, you're looking at infrastructure whether you named it that or not.
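The dependency heuristic can be made mechanical. This sketch (the component names and graph are hypothetical) ranks components by fan-in minus fan-out; high scores suggest de facto infrastructure, whatever the team calls it:

```python
from collections import defaultdict

# component -> components it depends on (hypothetical example graph)
deps = {
    "catalog_ui": {"pricing", "inventory"},
    "pricing":    {"ledger"},
    "inventory":  {"ledger"},
    "ledger":     set(),
}

# count how many components depend on each target
fan_in = defaultdict(int)
for component, targets in deps.items():
    for t in targets:
        fan_in[t] += 1

# infrastructure score: many dependents, few dependencies
scores = {c: fan_in[c] - len(deps[c]) for c in deps}
ranked = sorted(scores, key=scores.get, reverse=True)
```

Here `ranked` puts the ledger first: nothing it depends on, everything depends on it. That's infrastructure by structure, not by naming.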

The exercise of layer identification is itself valuable. Teams that argue about where boundaries belong are teams that understand their system's actual structure—not just its intended structure.

AI Reveals False Layers

Here's a harder truth: AI-assisted regeneration will expose layers that were never real.

What teams call "core infrastructure" is often just code that's hard to change because it's poorly factored, not because it's genuinely foundational. The difficulty of modification got confused with importance.

When AI makes modification cheap, these false bottoms become visible. You discover that the "critical" service everyone was afraid to touch was actually a tangle of accidental complexity that a fresh implementation handles in a tenth of the code.

This is both opportunity and danger.

The opportunity: you can finally replace calcified code that was only preserved by fear.

The danger: you might mistake actual foundational code for the merely calcified kind. The difference is whether the complexity is essential or accidental—and that distinction requires judgment that AI doesn't have.

Pace layer thinking helps here. Ask: if we regenerated this component, what invariants must the new version preserve? If the answer is "we're not sure," you've found a slow layer masquerading as a fast one. (Or you've found something that should be a slow layer but its interfaces are poorly defined. More on that in a future post.)
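One way to force the invariant question is to write the invariants down as executable checks that any regenerated implementation must pass. A minimal sketch, using a hypothetical `slugify` component:

```python
def check_invariants(slugify):
    """Contract that any regenerated slugify implementation must satisfy."""
    assert slugify("Hello World") == "hello-world"    # known-good case
    assert slugify("") == ""                          # edge case preserved
    assert slugify(slugify("A B")) == slugify("A B")  # idempotent

# Current implementation; a regenerated version runs the same checks.
def slugify(text):
    return "-".join(text.lower().split())

check_invariants(slugify)
```

If the team can't write `check_invariants`, that's the signal: the component's contract was never explicit, and regeneration should wait until it is.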

The Gradient of Disposability

Between the fastest and slowest layers is a gradient.

Some code should be rewritten daily. Some monthly. Some yearly. Some almost never.

The key insight: regeneration frequency should match layer pace.

When regeneration outpaces a layer's ability to absorb change, instability increases. When it lags, entropy accumulates. The art is alignment.

AI doesn't remove this gradient. It makes ignoring it more dangerous—because now you can regenerate fast enough to outrun your own understanding.

Layer Separation Is an Architectural Act

Pace layers are not merely conceptual abstractions. They must be encoded into the architecture.

This means:

  • Clear boundaries between layers, enforced by module structure, not just convention

  • Explicit interfaces that slow layers expose to fast ones

  • Tests that enforce contracts across regeneration cycles

  • Deployment pipelines that move at different speeds for different layers
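One way to encode the second item in code, sketched in Python with hypothetical names: a `Protocol` defines the interface a slow layer exposes, and fast-layer code reaches it only through that contract.

```python
from typing import Protocol

class InventoryReader(Protocol):
    """Stable interface the slow inventory layer exposes upward."""
    def stock_level(self, sku: str) -> int: ...

def render_availability(inv: InventoryReader, sku: str) -> str:
    # Fast-layer code: regenerate freely, but only through the interface.
    return "In stock" if inv.stock_level(sku) > 0 else "Out of stock"

class FakeInventory:
    """Test double satisfying the protocol structurally."""
    def stock_level(self, sku: str) -> int:
        return {"ABC-1": 3}.get(sku, 0)

assert render_availability(FakeInventory(), "ABC-1") == "In stock"
assert render_availability(FakeInventory(), "XYZ-9") == "Out of stock"
```

The fast layer above can be rewritten daily; as long as it types against `InventoryReader`, the slow layer underneath never notices.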

When layers are blurred, AI accelerates the wrong things. When layers are explicit, AI becomes a force multiplier rather than a destabilizer.

This is why "clean architecture" suddenly matters again—not as dogma, but as survival strategy.

A Realistic Case Study

Consider an e-commerce system with these components:

  • Product catalog UI — displays products, handles search, shows recommendations

  • Pricing engine — calculates prices, applies discounts, handles currency conversion

  • Inventory service — tracks stock levels, manages reservations, coordinates with warehouses

  • Order ledger — records transactions, maintains audit trail, handles compliance

The catalog UI regenerates aggressively. AI rewrites components weekly based on A/B test results and design iterations. Failures are visible immediately and recoverable by rollback. The blast radius is one user's session.

The pricing engine regenerates monthly, with extensive property-based testing. Every regeneration must preserve invariants: a discount can't increase the price, currency conversion must be reversible within tolerance, promotional rules must compose correctly. AI proposes changes; humans verify the invariant preservation.
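Those pricing invariants lend themselves to property-based checks. A stdlib-only sketch (in practice you'd reach for a library like Hypothesis; `apply_discount` is a hypothetical stand-in for the engine):

```python
import random

def apply_discount(price_cents: int, pct: float) -> int:
    """Apply a percentage discount, clamping pct to [0, 100]."""
    pct = max(0.0, min(100.0, pct))
    return round(price_cents * (1 - pct / 100))

random.seed(0)
for _ in range(1000):
    price = random.randint(0, 1_000_000)
    pct = random.uniform(-50, 150)  # deliberately out-of-range inputs
    discounted = apply_discount(price, pct)
    assert discounted <= price  # a discount can't increase the price
    assert discounted >= 0      # and can't go negative
```

A regenerated pricing engine is accepted only if the same properties hold across thousands of random inputs, not just a handful of hand-picked cases.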

The inventory service regenerates quarterly at most. Coordination bugs create real-world problems—oversold products, angry customers, warehouse confusion. Changes go through staged rollouts with manual checkpoints. AI helps with implementation but doesn't drive the regeneration schedule.

The order ledger almost never regenerates. It's the system of record. Compliance requirements dictate its structure. Changes require legal review, audit trail preservation proofs, and migration plans that span months. AI might help write the migration scripts, but a human architects every change.

Now here's where it gets messy:

The recommendation model that powers the catalog UI—where does it live? It affects what users see (fast layer concern) but it's trained on historical order data (slow layer dependency). The team decides: the model itself regenerates fast, but it can only read from a stable snapshot of order data that updates weekly. The boundary is explicit.
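That boundary can be made literal in code. A sketch, all names hypothetical: the snapshot is an immutable object that fast-layer model code reads but cannot mutate.

```python
from collections import Counter
from datetime import date

class OrderSnapshot:
    """Read-only weekly snapshot of order data (the slow-layer boundary)."""
    def __init__(self, as_of: date, orders: list):
        self.as_of = as_of
        self._orders = tuple(orders)  # immutable: fast layer can't mutate it

    def orders(self):
        return self._orders

def top_products(snapshot: OrderSnapshot, n: int = 3):
    # Fast-layer model code: regenerates freely, reads only the snapshot.
    counts = Counter(o["sku"] for o in snapshot.orders())
    return [sku for sku, _ in counts.most_common(n)]

snap = OrderSnapshot(date(2025, 1, 6),
                     [{"sku": "A"}, {"sku": "B"}, {"sku": "A"}])
```

However often `top_products` is regenerated, the order data it sees changes only when the snapshot does.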

The feature flag system that controls pricing experiments—fast or slow? It changes frequently (new experiments daily) but a bug could apply wrong prices to real orders (high blast radius). The team decides: the flag evaluation logic is slow layer, heavily tested, rarely changed. The flag configuration is fast layer, AI-assisted, easy to roll back.
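That split can be mirrored directly in code (names hypothetical): the evaluation logic is a small, frozen function; the configuration is data that changes daily.

```python
# Slow layer: flag evaluation logic. Heavily tested, rarely changed.
def flag_enabled(config: dict, flag: str, user_id: int) -> bool:
    rollout = config.get(flag, {}).get("rollout_pct", 0)
    return (user_id % 100) < rollout

# Fast layer: configuration. AI-assisted, edited daily, easy to roll back.
config = {
    "new_pricing_banner": {"rollout_pct": 25},
    "bulk_discounts":     {"rollout_pct": 0},
}

assert flag_enabled(config, "new_pricing_banner", user_id=7) is True
assert flag_enabled(config, "bulk_discounts", user_id=7) is False
assert flag_enabled(config, "unknown_flag", user_id=7) is False
```

Note the default: an unknown flag evaluates to off. The slow layer fails safe even when the fast layer ships a bad configuration.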

These boundary decisions are where the real architectural work happens. The layers aren't given. They're chosen.

Designing for Productive Tension

Healthy systems preserve tension between fast and slow layers.

Fast layers want freedom to experiment. Slow layers want stability to build on.

AI strengthens both impulses. It makes experimentation cheaper and makes stability violations more consequential. The job of architecture is not to resolve this tension but to channel it.

When fast layers are over-constrained by slow ones, innovation dies. Every UI change requires a committee.

When slow layers are eroded by fast ones, trust dies. The system becomes a house of cards that looks fine until it doesn't.

Pace layers are how you keep both alive: clear boundaries that let each layer move at its natural speed without destabilizing its neighbors.

What Comes Next

In posts that follow, I'll explore how pace layers shape evaluation strategies (how do you verify regenerated code at different layer speeds?) and the emerging pattern of n=1 development (what happens when AI makes bespoke software economically viable?).

But the core idea starts here:

AI doesn't flatten software. It sharpens its layers.

Build with that in mind, and regeneration becomes a source of durability—not decay.