“Open Source” Does Not Always Equal “Safe”

On this website, and on many other privacy and security websites, you will find people espousing the gospel of open source technology, and for good reason. This year, Switzerland suffered two separate scandals in which the US Central Intelligence Agency was found to be operating shell corporations within the country that sold tech equipment – fitted with encryption backdoors – to foreign governments and armies, giving the American intelligence community easy, front-row access to the sensitive communications of other nations. Open source could’ve prevented this: it would’ve allowed anyone to look at the programming and operating system on the device and say “hey, something’s not right here.” However, I think that sometimes the privacy community oversells open source.
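To make that “something’s not right here” moment concrete, here’s a deliberately contrived Python sketch – not the actual backdoored firmware, which was never published – of the kind of flaw that’s invisible in a sealed device but jumps out in source code: an encryption routine that quietly uses a fixed nonce. The function names and key handling are invented for illustration.

```python
# Contrived illustration: a backdoor a source review would catch.
# Requires the third-party "cryptography" package (pip install cryptography).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

KEY = AESGCM.generate_key(bit_length=256)

def encrypt_honest(plaintext: bytes) -> bytes:
    # Correct: a fresh random nonce for every message, prepended to the
    # ciphertext so the recipient can decrypt.
    nonce = os.urandom(12)
    return nonce + AESGCM(KEY).encrypt(nonce, plaintext, None)

def encrypt_backdoored(plaintext: bytes) -> bytes:
    # "Something's not right here": the nonce is a constant. With AES-GCM,
    # reusing a nonce under the same key leaks the XOR of plaintexts and
    # enables forgeries -- yet every message still encrypts and decrypts
    # normally, so a user of the finished product would never notice.
    nonce = b"\x00" * 12
    return nonce + AESGCM(KEY).encrypt(nonce, plaintext, None)
```

Anyone reading the source can spot the constant nonce in seconds; no amount of poking at the finished hardware would reveal it nearly as fast.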

I often see privacy newbies espousing open source without knowing why. I see people say things like “I heard [X Service] is bad because it’s not open source,” but they don’t actually know why that matters. The answer is that open source software – as a general rule – tends to respect your privacy more than the average closed-source alternative. Because the code is open, anyone can examine it to ensure that it does what it says. Additionally, because anyone can examine it, people are more likely to find bugs and offer fixes that can be quickly implemented. However, the operative word in there was “can.”

A recent study from GitHub found that, on average, vulnerabilities exist in open source software for over four years before being patched. Now it’s important to understand the context of this study: GitHub examined 56 million developers and over 60 million repositories. Out of those 60 million repositories, I’m certain that many are just hobby projects – uploaded by the creator as a backup, abandoned, or offered in an “I made this for myself, but if anyone else wants it, here it is” spirit. Those all probably came with “buyer beware” terms. But even that can only account for a few tens of thousands, at most. Most of these repositories were probably uploaded with the intention of being shared and spread around.

Here is where we run into an interesting issue. I believe in supporting the little guy. Everyone was once a little guy. Walmart, Starbucks, Microsoft, everyone. You can believe that those big guys have since lost their way, and maybe that’s true, but the point is that they were once little guys. Even in the open source communities, the rockstars – Ubuntu, Bitwarden, Signal – were all once nobodies. The little guys need our support to become sustainable and successful. I firmly believe and respect that. But the little guys come with risks that need to be recognized. Security researchers are people, too. They have day jobs (usually – some of them are lucky enough to be full-time researchers), they have personal lives, and they only have so much time they can devote to examining code. The smaller the developer and the less popular the code, the fewer eyes there are examining it for weaknesses. In big, well-known projects like Signal and Mastodon, there are thousands or even millions of people using them and laying eyes on them – not to mention that many of those projects can afford to pay for proper security audits. In smaller, less popular projects, not so much.

So no, open source doesn’t automatically mean privacy-respecting or secure. Arguably, most malware eventually becomes open source: once a piece of malware is discovered, there are websites where researchers can share it so that other researchers can examine it, pick it apart, update their own virus definitions, and otherwise study it. Malware is literally “malicious software.” It’s a perfect example of how open source does not automatically mean private, secure, or safe. So does it still matter? Yes! All things being equal, open source is always better. The potential still exists for the code to be reviewed by someone who understands this stuff and to be improved upon. The potential also exists for someone else to come along and say “hey, this is a great project, but this particular thing could be better – here’s my fork of it.” This is why there are a billion web browsers out there: someone saw an open source project like Firefox or Chromium and said “could be better.”

Is it actually better? That’s a tough question, and that’s where threat modeling comes in. But it’s important that you be educated when building your threat model. Open source is better, unarguably, but that doesn’t mean you should blindly trust it any more than you should blindly trust the word “encrypted.” It’s how the encryption is implemented that matters, and it’s how the open source nature of the software is used to better the software that determines whether it can be trusted. You still need to consider what information you’re planning to entrust to that software and what could go wrong, as well as a host of other considerations like update frequency, reputation, and more. As a fellow little guy, I’m not saying don’t trust the little guys. But I am saying to exercise caution.
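As a small, hedged example of what “exercise caution” can look like in practice, here’s a Python sketch that pulls a couple of those signals – update frequency and rough popularity – from GitHub’s public REST API. The repository named below is just an example; the point is that these checks take minutes, not a full security audit.

```python
# A minimal sketch of the vetting signals mentioned above: when was the
# project last updated, and how many eyes are plausibly on it?
# Uses only the standard library and GitHub's public (rate-limited) API.
import json
import urllib.request

def repo_signals(owner: str, repo: str) -> dict:
    url = f"https://api.github.com/repos/{owner}/{repo}"
    with urllib.request.urlopen(url) as response:
        data = json.load(response)
    return {
        "last_push": data["pushed_at"],           # staleness / update frequency
        "stars": data["stargazers_count"],        # rough proxy for "eyes on it"
        "open_issues": data["open_issues_count"], # unaddressed problem reports
    }

if __name__ == "__main__":
    # Example: a large, well-known project mentioned earlier in this post.
    print(repo_signals("signalapp", "Signal-Android"))
```

None of these numbers proves a project is safe – they’re just cheap signals that, combined with reputation and your threat model, help you decide how much trust to extend.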

You can find more recommended services and programs at TheNewOil.org.