Aaron Mills

“Technical Debt” is a euphemism for Incompetence.

Some will take offense at this, likely because they've employed the term so much in their career that they've become married to it. Their egos are standing firmly on the fuzzy idea it represents; to pull the shag carpet away would prove too disastrous.

The good news for the rest of us is that Incompetence, when looked at squarely and calmly, can be overcome.

I've never heard a Master invoke “Technical Debt”.

Where there is no structural flaw in your technology architecture or code, there is no “Technical Debt.” Master practitioners build technology without structural flaw. Therefore, “Technical Debt” is a term for non-Masters who have no Master to consult.

How do the Masters build technology without structural flaw? They have, through a gauntlet of experience (and, importantly, the right value system), learned to build their systems from decoupled parts.

We are all imperfect, including the Masters. But the Masters, using their hard-earned one-trick pony, Decoupling, factor the consequences of their imperfections & ignorances apart, draining the risk that any imperfect component or false assumption could bring to their system.¹²

¹ Where there is no constant tax, there is no “Technical Debt”: a flawed component, when properly factored, cannot continue to charge you. You simply delete it or replace it.
² This works well because, while we are all imperfect, no one is perfectly imperfect all the time.

“Technical Debt” is orthogonal to flexibility.

Some solutions are heavily generalized and highly flexible, often over-engineered, sometimes not. Other solutions are specific and concrete, hard-coded. The cost (in time) tends to be greater for the generalized solution. But “Technical Debt” is orthogonal to this dimension of cost.

Time and “Technical Debt”

Mastery is a long journey: decoupling is a muscle skill; you cannot transfer it by book or online tutorial. It takes years of practice and experience to develop. You have to practice correctly. You must hold the racket the right way. Otherwise you will only become proficient in bad habits.

The necessary time is in the personal growth: the Master knows that there is no necessary tradeoff to be made while steering the ship between some amorphous “Technical Debt” and time-to-market. For the Master, optimal speed and decoupled technology go hand in hand; the muscle skill is there. For the Master, it takes longer to couple.

“Technical Debt” never gets paid off anyway.

“Technical Debt” is the existence of coupling: non-minimal dependencies. The practitioner, by invoking “Technical Debt”, implies that s/he could have created fewer dependencies had s/he had more time. This is akin to saying, “I didn't know how to create fewer dependencies.”

This is a failure of competence and will repeat itself the very next week. Dependencies will stack upon dependencies. Last week's “Technical Debt” will be buried under more “Technical Debt”.

The compound interest here is untameable; it always runs amok. This is not debt, it's a life sentence.

Masters are not Masters.

A Master is a Master only if she grows. But a Master who thinks she's a Master does not grow: she has already reached her pinnacle. The fault of the failures around her lies elsewhere. Likely, as “Technical Debt”.

There are few Masters.

There are so few Masters. There could be more. “Technical Debt” needs to be seen for what it is: Incompetence, overcomeable Incompetence. Then the journey to Mastery can begin, and continue.

This should be self-evident: any endorsement of a process or tool, whether in life or in software, should have the burden of explaining its ground purpose.

This purpose could be anything but without making it explicit we cannot say we are behaving rationally. We have no basis for our hopeful outcomes. Without explicit purpose, we're just talking air.

Eat cheese.

Here is a recommendation with no purpose.

Eat cheese because it tastes good.

Now we have a reason, now we can have confidence in our behavior.

When it comes to software construction, the only worthy purpose is throughput. Throughput now and tomorrow.

Another word for this is productivity.

I code too much. I love to code. In my off hours, I play with logic systems and UI frameworks and data platforms. I cut my fingers. I get my hands dirty. This play is fun and edifying.

Fun is a good purpose. Learning is a worthy purpose.

But we're talking about software construction. And this is not that.

That is play. Software construction is work.

We must be very clear about the distinction.

What about correctness? Aren't throughput and correctness at odds? Shouldn't correctness be a purpose?

No. You are muddying the waters.

Incorrectness is feature incompleteness.

If your feature or your system is insufficiently correct, you have more work to do. It's that simple.

This is not mere semantics, by the way. Our perspective is skewed if we muddy our domain and our plan of attack will be confused.

So our problem, the real problem, the root problem, is throughput. How can we optimize our productivity? How can we deliver sufficiently correct capability at the highest rate?

In software construction this is the sole unit of optimization that should matter.

Any tool or process we endorse or espouse is valuable only inasmuch as it increases our productivity.

This is not a trivial optimization problem. But it's also not rocket science.

Whole-hog static typing¹?

—It helps correctness!

Nope, you're muddying things, try again. How will static typing make me more productive?

—Okay, fine, how's this: static typing slows things down today, yes, but speeds things up tomorrow—during, for example, next week's refactoring.

Great, you've got it now. Of course we have to make sure that tomorrow's speedup is greater than today's slowdown, amortized over time and all of that. And, furthermore, we have to ensure that this is the best way to achieve next week's refactoring.

But at least we're speaking a shared language now. At least we're optimizing for the right thing.

Every advocate for this or that software tool or language or process should be held to this standard.

How are you going to increase my throughput?


Footnotes

  1. Whole-hog static typing is the form static typing most commonly takes. Most if not all mainstream statically typed languages do not offer first-class opt-out/opt-in to the type system. They force you to leverage the type prover across the board or use clumsy, second-class syntax to opt out of it.
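For contrast, gradual typing shows what a first-class opt-in looks like. Python's optional annotations are one concrete instance (the function names here are illustrative): the annotated code is checked by an external prover like mypy, while the unannotated code opts out with no special syntax at all.

```python
# Annotated: a checker such as mypy proves every call site
# against this signature before the code ever runs.
def total(prices: list[float], tax_rate: float) -> float:
    return sum(prices) * (1.0 + tax_rate)


# Unannotated (implicitly untyped): the checker stands aside.
# No clumsy, second-class escape-hatch syntax is required.
def legacy_total(prices, tax_rate):
    return sum(prices) * (1.0 + tax_rate)


checked = total([10.0, 20.0], 0.1)
unchecked = legacy_total([10.0, 20.0], 0.1)
```

Both run identically; the difference is only in what the type prover is asked to guarantee, and the opt-out costs nothing syntactically.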

The master programmer continuously partitions code into that which is stable and that which is volatile.

Volatile components stay at the top where dependents are minimized.

So the master programmer creates a software component architecture that looks like this (directed edges are dependencies¹):

An Ideal Architecture

It may take some time to find the stable parts, to carve them out from the volatile.

So the master waits for the stable parts to prove themselves. She lets volatile code accumulate. At the top, the volatile components grow a little:

An Optimal Architecture

But she is vigilant. In short order she will discover the stable parts and factor them out, shrinking the volatile components back to healthy size:

An Optimal Architecture Iterated

Her goal is to keep the volatile parts minimized. This is the part that costs.
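One way to make the invariant concrete (the component names here are hypothetical): model the graph with edges pointing from each component to what it depends on, then check that nothing depends on a volatile component, so volatility never propagates downward.

```python
# Directed edges are dependencies: component -> what it depends on.
# Volatile components sit at the top; stable ones below.
graph = {
    "checkout_ui":  {"pricing", "cart"},  # volatile, freshly written
    "promo_banner": {"pricing"},          # volatile
    "pricing":      {"money"},            # stable, proven
    "cart":         {"money"},            # stable
    "money":        set(),                # stable, depended on by many
}
volatile = {"checkout_ui", "promo_banner"}


def dependents_of(name: str) -> set[str]:
    """Who is infected if `name` changes?"""
    return {c for c, deps in graph.items() if name in deps}


# The master's invariant: no component depends on a volatile one,
# so a change at the top infects nothing below it.
violations = {c for v in volatile for c in dependents_of(v)}
```

If `violations` is non-empty, a volatile component has acquired dependents, which is exactly the expansion the next paragraphs describe.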

The non-master cannot do this. Unable, she makes one of two mistakes.

The first mistake is declaring stability where it does not exist, creating a dependency on a volatile component. This is known as premature abstraction.

The choice is devastating to productivity. All dependents are infected, and volatility and cost increase all around:

An Optimal Architecture Iterated
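A sketch of that first mistake, with hypothetical names: an abstraction is extracted from a single use case and declared stable, so every dependent binds to a shape that is still in flux.

```python
# Premature abstraction: after exactly one use case (email), a
# "general" notifier seam is declared stable.
class Notifier:
    def send(self, address: str, subject: str, body: str) -> bool:
        raise NotImplementedError


class EmailNotifier(Notifier):
    def send(self, address: str, subject: str, body: str) -> bool:
        return "@" in address  # stand-in for a real send


# Dependents pile onto the "stable" seam...
def alert_on_failure(notifier: Notifier, address: str) -> bool:
    return notifier.send(address, "Job failed", "See logs.")


def weekly_digest(notifier: Notifier, address: str) -> bool:
    return notifier.send(address, "Your digest", "...")


# ...then SMS arrives: no subject line, and the address is a phone
# number. The seam was volatile all along, and every dependent bound
# to its signature is now infected by the change.
```

The seam itself was the volatile part; waiting for a second, different use case would have revealed its true shape before anything depended on it.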

The other mistake is more forgivable. This programmer understands her ineptitude. She knows she doesn't understand the game and so refuses to play. She simply lets volatility increase at the top:

An Optimal Architecture Iterated

But for too long. And the end result is the same: too much volatility, too much cost.

The natural gravity is toward chaos. To volatility.

Without mastery, without vigilance, the progression is expansion.

The center becomes increasingly difficult to hold. The event horizon is always reached much sooner than expected:

The Event Horizon

The event horizon is the point of no return. All that can be done is to start over. This is called The Rewrite. The Rewrite is horribly expensive, often fatal.

Sadly, this is the industry norm.

Why?

Because there are so few masters.

And why is that?

Because mastery comes by long, strenuous effort. One needs to fail and try and fail again at separating stability from volatility in the way we are talking about.

It is an earned skill requiring much focused and repeated practice, and much failure.

It's like swinging a tennis racket. It's not enough to know the form; you have to internalize it. Mere knowing and true ability are miles apart.

Very, very few go through the regimen.


Footnotes

  1. This diagram and our story apply to every level of your system composition: to functions within source files, to files or classes within packages, to packages within modules, to modules within sub-systems, and to sub-systems within systems.