Beyond Feature/Bug Dualism

I don’t trust computers. I work with them, I use them, I program them, and I can generally understand how they operate. But I never trust them fully, no matter how much I’m supposed to.

These days, more and more, I find I’m expected to trust computers — at least to some degree. All the marketing tells me so, and the product design backs it up. I have to trust I’ll receive critical messages in my inbox. I have to trust I can access my inbox. I have to trust my GPS will take me to my destination. I have to trust that my thermostat won’t get hacked and run up my utility bill. I have to trust that the sensors in my car won’t fail and cause an accident (because I relied on them). I have to trust the elevator doors will open back up when they start to close on my arm. I have to trust my router will keep giving me internet service (and even then, I still have to unplug it and plug it back in on occasion).

As computers have evolved, we’ve been quick to delegate so many human activities to them. As multi-purpose computers have proliferated, we’ve been even quicker to delegate to the software these computers run. And as a commercial software developer who’s built products for other companies and my own, I know how easily bad code can make its way into the real world. So ultimately, I never fully trust it, no matter where it lives.

It’s true that some sectors have much more stringent processes around software development than, say, your average startup. There are checks and double-checks, automated tests, internal code review, open-source review, QA teams, and so on. There are ninja guru all-star programmers. None of it prevents bugs from creeping in and living in a piece of software for 15 years. It doesn’t stop “autonomous” vehicles from crashing. It doesn’t prevent data breaches or data loss; it doesn’t keep security vulnerabilities away.

One might say that bugs are as much a part of software as its features are — that as much as we try to avoid all bugs and only build features, “bugs are normal” (to riff on CJ’s recent blog post).

This observation, or belief, is what drives how I build my own software. So I value stable languages over trendy hotness; I prefer strongly typed languages and relational databases. When a part of my software works, and keeps doing what it’s supposed to over time, there has to be a damn good reason to ever touch it. This may mean slower product development, but I can sleep at night, knowing the software that exists today is very stable overall (though I remain forever prepared for future changes).

Of course, software occasionally needs to change, and as features grow, I assume bugs can grow too. So despite a productive week of changes on Write.as, a small bug manifested today when I moved parts of the application around: I hadn’t actually told the computer where those pieces went. And things got a little wonky for users.
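To illustrate the class of bug I mean (a sketch only; the names here are hypothetical, not actual Write.as code): if an application locates one of its pieces through an environment variable, moving that piece without updating the variable leaves the program pointing nowhere.

```go
package main

import (
	"log"
	"os"
)

func main() {
	// Hypothetical: the app finds its static assets via an
	// environment variable. Move the assets without setting it,
	// and os.Getenv quietly returns an empty string; the program
	// keeps running, but users see things get wonky.
	assetDir := os.Getenv("APP_ASSET_DIR")
	if assetDir == "" {
		// Failing loudly at startup beats silently serving broken pages.
		log.Fatal("APP_ASSET_DIR not set; did the app move without being told?")
	}
	log.Printf("serving assets from %s", assetDir)
}
```

The fix in my case was simply telling the computer where the pieces went; a check like the one above is just one way to make that kind of omission loud instead of wonky.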

Whenever this happens, I calmly fix it — bugs are as normal as features. I patch the software, publicly explain what happened, and am humbled. But really, humbling myself is the best I can do in this situation, once the software has been fixed.

On a wider scale, humbling oneself is the best anyone in this industry can do when they fail to meet expectations. I don’t have to mention that this industry is the one setting its own unrealistic high-tech expectations — “Military-grade security!” “Self-driving cars by 2017!” “We’re saving the publishing industry!” — but that’s for another time.

For now, I don’t think we’d be worse off for recognizing that bugs are as much a part of our high-tech world as features. Whether it’s a technical bug like my missing variable, or a human bug like Google’s vast data gathering, software does some things it’s designed for, and plenty that it’s not. Best to set our expectations accordingly.