When we talk about software economics in the age of generative AI, it can feel like we’re inventing something entirely new. But the truth is that many of the economic pressures emerging now simply reveal dynamics that have always existed beneath the surface of software development.
The Myth of Code as Capital
For decades, most engineering cultures treated code like a durable asset. The prevailing mindset was:
Code should be long-lived
“Technical debt” must be paid down
Rewrite is failure
Maintenance is virtue
This made sense when the dominant cost of software was writing it: hiring, training, manual coding, test cycles, and team coordination absorbed most of the budget. But that interpretation created a myth: that code itself is valuable capital.
It isn’t.
Legacy systems, the decades-old codebases still running mission-critical functions, are often costly not because their lines of code are inherently valuable, but because they are expensive to understand and maintain. According to industry definitions, a legacy system is code that continues to serve a purpose but has become burdensome to evolve because of outdated technology or missing automated tests and documentation.
Even the term “legacy code” in software engineering — code without tests — implies maintenance risk, not long-term capital value.
Systems Already Reveal the Hidden Economics
Look at how successful evolutionary modernization of legacy systems happens in practice. Thoughtworks and other practitioners favor incremental, evolutionary strategies over “big bang” rewrites because they reduce risk and cost.
What do these approaches have in common?
They minimize the amount of old code that is kept, even as the system’s functional surface area grows
They replace functionality in iterations
They rely on incremental modernization patterns
These are not new economic insights. They’re how humans have long coped with complexity when preserving old code becomes more expensive than replacing it.
Even before AI, many approaches to evolutionary architecture (including my own) took shape as responses to the high cost of maintaining monolithic legacy codebases. They accept that restating, reshaping, and replacing parts of a system can be cheaper than preserving and patching the old ones.
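One of the best-known incremental patterns is the strangler fig: new implementations take over one capability at a time while everything else keeps falling through to the legacy system. The sketch below is a minimal, hypothetical Python version; `legacy_app`, `modern_app`, and the `MIGRATED_ROUTES` set are invented stand-ins, not part of any particular framework.

```python
# Minimal sketch of strangler-fig routing (illustrative only).
# `legacy_app` and `modern_app` are hypothetical stand-ins for the
# old and new request handlers.

MIGRATED_ROUTES = {"/invoices", "/customers"}  # grows with each iteration

def legacy_app(path: str) -> str:
    return f"legacy handled {path}"

def modern_app(path: str) -> str:
    return f"modern handled {path}"

def route(path: str) -> str:
    """Send migrated paths to the new system; everything else stays legacy."""
    handler = modern_app if path in MIGRATED_ROUTES else legacy_app
    return handler(path)

print(route("/invoices"))  # modern handled /invoices
print(route("/reports"))   # legacy handled /reports
```

Each iteration moves another path into the migrated set and retires the corresponding legacy code, which is the replace-in-iterations dynamic described above.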
Pace Layers: How Software Already Had Multiple Cost Regimes
As discussed in a previous post, a useful lens for understanding why not all code should be treated the same is pace layering — a model first articulated by Stewart Brand to explain how complex systems adapt and endure.
In Brand’s original framing, different layers of a system evolve at different speeds:
Fast layers innovate and experiment
Slow layers stabilize and provide continuity
The tension between them produces resilience
Applied to software, this predicts that some parts of a system ought to change rapidly, and others slowly — because the economic cost and impact of change differ by layer.
This insight was true long before AI. In practice:
UI frameworks cycle quickly
Core business rules change occasionally
Deep infrastructure and protocol logic rarely change
Traditional engineering already valued some code as more durable because its replacement was expensive. This matches Brand’s argument that “fast learns, slow remembers”. Fast layers respond quickly to shocks while slow layers retain memory and continuity.
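One way to make this lens operational is to record the pace layer each module is supposed to live in and compare it with how often the module actually changes. The sketch below is purely hypothetical: the layer budgets, module names, and churn counts are invented for illustration.

```python
# Hypothetical sketch: flag modules whose actual churn does not match
# the pace layer we claim they belong to. All numbers are invented.

CHANGE_BUDGET_PER_QUARTER = {
    "fast": 100,   # UI, experiments: churn freely
    "medium": 20,  # business rules: change occasionally
    "slow": 3,     # infrastructure, protocols: change rarely
}

modules = [
    {"name": "checkout_ui",      "layer": "fast",   "changes": 64},
    {"name": "pricing_rules",    "layer": "medium", "changes": 11},
    {"name": "payment_protocol", "layer": "slow",   "changes": 9},
]

for m in modules:
    budget = CHANGE_BUDGET_PER_QUARTER[m["layer"]]
    if m["changes"] > budget:
        # A "slow" module churning like a "fast" one is an economic signal:
        # either the layer assignment is wrong or change is leaking downward.
        print(f'{m["name"]}: {m["changes"]} changes exceeds the '
              f'{m["layer"]}-layer budget of {budget}')
```

A check like this does not make code more durable by itself, but it surfaces mismatches between the cost regime a layer is assumed to have and the one it actually has.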
AI Reveals What We Already Ignored
Generative AI collapses the cost of producing code, but not the cost of understanding it. Writing is cheap; comprehension is expensive. This exposes a core truth that legacy practitioners already knew instinctively:
Software isn’t valuable merely because it exists and serves a purpose. Its value also depends on our ability to reason about it, evolve it safely, and trust its behavior.
That’s why legacy systems become expensive: the burden of understanding and maintaining code outweighs its utility. Traditional approaches like software archaeology — reverse-engineering undocumented code — are symptomatic of organizations trying to carry cost forward because retiring code was harder than preserving it.
AI accelerates this pressure.
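Characterization tests are one of the standard software-archaeology moves: before changing code you do not fully understand, you pin down what it currently does so you can evolve it without silently breaking it. The sketch below assumes a hypothetical inherited function, `legacy_discount`; the technique, not the specific rule, is the point.

```python
# Sketch of a characterization test: record the observed behavior of an
# undocumented legacy function before changing it. `legacy_discount` is
# a hypothetical stand-in for real inherited code.

def legacy_discount(total_cents: int) -> int:
    # Imagine this arrived with no tests and no documentation.
    if total_cents > 10_000:
        return total_cents - total_cents // 10
    if total_cents > 5_000:
        return total_cents - 250
    return total_cents

def test_legacy_discount_characterization():
    # These assertions capture what the code *does* today,
    # not what anyone believes it *should* do.
    assert legacy_discount(20_000) == 18_000  # 10% off large orders
    assert legacy_discount(10_000) == 9_750   # boundary falls into the flat tier
    assert legacy_discount(6_000) == 5_750    # flat 250 off mid-size orders
    assert legacy_discount(5_000) == 5_000    # this boundary gets nothing

test_legacy_discount_characterization()
print("current behavior pinned")
```

Tests like these are cheap to write compared with the comprehension they stand in for, which is precisely the cost that cheap code generation does nothing to reduce.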
Why This Matters Now
Today, as tools can generate massive amounts of code with little human effort, the economic question is no longer “How do we write code efficiently?” It’s “How do we reduce the long-term cost of what we write?”
That question demands we rethink what we treat as persistent. Underneath the shiny façade of AI productivity, the real economic driver will be systems that minimize the cost of comprehension, evaluation, and replacement.
More on this in the following posts.