As I mentioned in a previous post, at Wunderlist, we had a rule: any new service had to be "this big", a constraint I'd demonstrate by holding my fingers a few inches apart. The metric wasn't about lines of code. It was about replaceability.

If a service was small enough to rewrite in a day, it couldn't accumulate the kind of complexity that makes systems brittle. That rule was about resisting growth. Not preventing change but resisting mass.

Every software system naturally grows. When change is easy and addition is cheap, structure accumulates unless something pushes back. For most of software history, that counterforce was human effort. Writing code was slow. Adding complexity hurt. Growth had friction.

Generative AI removes that friction.

Without an opposing discipline, AI doesn't just accelerate development. It accelerates bloat. This post is about the discipline that prevents success from turning into system weight.


Accumulation Is the Default Failure Mode

In AI-accelerated systems, expansion is the path of least resistance. Generation is cheap. Preservation is emotionally easy. Deletion requires justification. Think about how many times you've seen commented-out code in a legacy codebase where someone couldn't bring themselves to delete it outright, even though it's no longer used. That's the psychology we're dealing with here.

Modern LLM-driven workflows strongly favor addition: new features appear instantly, glue code materializes, abstractions proliferate because the model has seen them before. Edge cases get special handling instead of root-cause fixes. "Temporary" code survives because it works.

None of this requires bad engineers. It barely requires engineers at all.

If you do nothing, your system will grow until it becomes unmanageable. This was true before AI, but the timeline has collapsed. What used to take years of drift now happens in months of "high-velocity" shipping.


Conceptual Mass

Lines of code are a distraction. What actually matters is conceptual mass—the weight of ideas a system asks you to hold in your head.

Conceptual mass is the sum of distinct concepts, invariants, public interfaces, dependencies, and exception paths. It is the number of things a human, or an AI, must understand to make a safe change.

AI is exceptionally good at increasing conceptual mass silently. Every generated abstraction, every "clean" separation of concerns, every helper function adds weight. The code passes the linter. The tests pass. The system gets heavier.

The Compaction Discipline exists to reduce conceptual mass relentlessly.


Compaction Is Not Cleanup

Most teams think about size reduction as hygiene: occasional refactors, technical-debt sprints, cleanup tickets that sit in the backlog. That framing is wrong.

In theory, refactoring can reduce conceptual mass. In practice, it rarely does. Most refactoring reorganizes existing structure without challenging whether that structure should exist at all.

Refactoring is reorganizing the closet.

Compaction is realizing you don't need the closet.

Compaction is not maintenance. It is structural pressure. It is the deliberate, continuous application of force to keep a system's conceptual mass proportional to its purpose.

If your system gets more complex every time it gets more capable, you are losing.


What Compaction Looks Like

Removing code often accompanies compaction, but deletion is incidental. The goal is not fewer lines. The goal is less surface area.

AI loves to hallucinate architecture. It will suggest a Strategy pattern, a Factory, and an Interface for a feature that could be a single if statement.

Expansion is keeping those files because "it's best practice."

Compaction is deleting them because the distinction doesn't pay rent.
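To make the contrast concrete, here is a minimal sketch in Python. The scenario (a shipping-rate calculation) and all the names are hypothetical, but the shape is the familiar one: an AI-suggested Strategy-plus-Factory arrangement on top, and the single conditional that carries the same behavior below it.

```python
# Hypothetical example: an AI-suggested Strategy pattern for picking a
# shipping rate, versus the single conditional it replaces.

# --- Before compaction: three concepts (strategy, two impls, factory) ---
from abc import ABC, abstractmethod

class RateStrategy(ABC):
    @abstractmethod
    def rate(self, weight_kg: float) -> float: ...

class StandardRate(RateStrategy):
    def rate(self, weight_kg: float) -> float:
        return 5.0 + 1.2 * weight_kg

class HeavyRate(RateStrategy):
    def rate(self, weight_kg: float) -> float:
        return 12.0 + 0.9 * weight_kg

class RateFactory:
    @staticmethod
    def for_weight(weight_kg: float) -> RateStrategy:
        return HeavyRate() if weight_kg > 20 else StandardRate()

# --- After compaction: one function, zero new concepts ---
def shipping_rate(weight_kg: float) -> float:
    if weight_kg > 20:
        return 12.0 + 0.9 * weight_kg
    return 5.0 + 1.2 * weight_kg

# Both versions compute identical numbers; only one adds conceptual mass.
assert RateFactory.for_weight(5.0).rate(5.0) == shipping_rate(5.0)
assert RateFactory.for_weight(30.0).rate(30.0) == shipping_rate(30.0)
```

The line count barely changes, but the concept count drops from four (interface, two strategies, factory) to one function. That is the difference between deleting lines and deleting surface area.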

Successful compaction looks like fewer abstractions doing more work. Collapsed layers. Eliminated special cases. Simpler dependency graphs. Clearer boundaries. Smaller interfaces.

Code disappears because it no longer earns its keep. Sometimes the code stays, but the conceptual mass drops, because two ideas become one and the mental model shrinks.

The question is not "can we delete this?" It's "does this concept justify its existence?"


Architecture as Compaction

At Wunderlist, we built what people would now call a microservices architecture, but we thought of it as a deliberately dumb architecture.

The industry focuses too much on "microservices" and not enough on "architecture." That's why microservices get a bad rap. Our system worked because it was simple to the point of boredom.

We organized around nouns, not verbs. Users, lists, tasks, comments, each owned by exactly one service. Operations were almost entirely CRUD. Communication happened through exactly two mechanisms: a standardized REST/JSON convention that every service spoke natively and exclusively, and a message bus that broadcast every mutation. That was it. No service-to-service RPC. No custom protocols. No internal APIs that only two services knew about.
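The two-mechanism rule can be sketched in a few lines. This is a toy illustration, not Wunderlist's actual code: the names (TaskService, Bus) and the in-memory storage are hypothetical, but it shows the constraint, where a service owns exactly one noun, exposes only CRUD in a shared JSON convention, and broadcasts every mutation on the bus.

```python
# Toy sketch of the two-mechanism constraint: CRUD over a shared JSON
# convention, plus a bus that broadcasts every mutation. All names here
# are illustrative, not real Wunderlist code.
import json
from typing import Callable

class Bus:
    """A minimal message bus: every published event reaches all subscribers."""
    def __init__(self) -> None:
        self.subscribers: list[Callable[[str, dict], None]] = []

    def publish(self, event: str, payload: dict) -> None:
        for handler in self.subscribers:
            handler(event, payload)

class TaskService:
    """Owns exactly one noun (tasks). Speaks only CRUD; no custom RPC."""
    def __init__(self, bus: Bus) -> None:
        self.bus = bus
        self.tasks: dict[int, dict] = {}
        self.next_id = 1

    def create(self, body: str) -> str:          # POST /tasks
        task = {"id": self.next_id, "title": json.loads(body)["title"]}
        self.tasks[task["id"]] = task
        self.next_id += 1
        self.bus.publish("task.created", task)   # every mutation is broadcast
        return json.dumps(task)

    def read(self, task_id: int) -> str:         # GET /tasks/{id}
        return json.dumps(self.tasks[task_id])
```

Because the service's entire contract is CRUD plus broadcast events, any replacement that speaks the same convention can be dropped in without the rest of the system noticing, which is exactly what makes rewriting cheaper than preserving.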

We didn't choose this approach because we loved distributed systems. We chose it because it enforced replaceability. When a service became too heavy—too much conceptual mass—we didn't refactor it. We deleted it and replaced it with something simpler. Or faster. Or cheaper to run. Because the architecture was dumb, rewriting was cheaper than preserving complexity.

The architecture gave everything exactly one place to go. Duplication was obvious. Special cases had nowhere to hide.

The specifics don't matter. The constraint does. You don't need microservices to do this. You can practice compaction in a monolith by enforcing modular boundaries that are ruthless about dependency direction and ownership. The technology is incidental (though in my own experience, separation by process boundary makes the modularity more explicit). What matters is designing systems where bloat has no natural home.
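Enforcing dependency direction in a monolith can be as simple as a table of allowed edges checked in a test. The module names and the allowed-edges table below are hypothetical; in practice you would extract the observed imports from the codebase (tools like import-linter do this for Python), but the core check is just this:

```python
# A minimal sketch of enforcing dependency direction inside a monolith.
# Module names and the ALLOWED table are hypothetical.

ALLOWED = {
    "web":     {"lists", "tasks"},   # the web layer may call domain modules
    "lists":   {"storage"},
    "tasks":   {"storage"},
    "storage": set(),                # storage depends on nothing internal
}

def violations(observed_imports: dict[str, set[str]]) -> list[tuple[str, str]]:
    """Return (module, bad_import) pairs that break the dependency rules."""
    bad = []
    for module, imports in observed_imports.items():
        for imported in imports:
            if imported not in ALLOWED.get(module, set()):
                bad.append((module, imported))
    return bad

# A storage module importing the web layer is an inverted dependency:
assert violations({"storage": {"web"}}) == [("storage", "web")]
assert violations({"web": {"tasks"}}) == []
```

The point is not the tool but the posture: the allowed-edges table is small, explicit, and hostile to new dependencies by default, so every new edge has to justify its existence.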


Optionality

Compaction buys you more than cleanliness. It buys you options.

A compact system is cheaper to regenerate. It fits inside bounded reasoning contexts. It adapts to new languages and frameworks because there's less to port. It is easier to audit. It has a smaller blast radius when it fails.

This is why the most durable legacy systems are often boring. They didn't grow clever. They resisted the urge to solve tomorrow's problems today.


The Discipline, Stated Plainly

Any system that does not actively compress will inevitably bloat. AI does not change this law. It just accelerates it.

We are moving from an era where code seemed like an asset to an era where code is more clearly a liability, and only functionality (and arguably its architecture) is the asset.

The Compaction Discipline is the counterforce: continuous structural pressure to keep conceptual mass proportional to purpose.

Generation is cheap. Compression is leverage.