The mortgage dilemma, or why open source is more secure than closed source

Aug 02 '17

In my role as salesman for Third & Grove I frequently explain the benefits of open source software to the leadership of other organizations with whom we are hoping to work. A frequent question I enjoy answering is about security: “Is open source secure? I’ve heard it isn’t.”

The answer, of course, is that open source software can be as secure or insecure as any closed source solution, but (here comes the controversial bit) a properly managed open source platform is far more secure than a closed source application. Why? Simple: the mortgage dilemma applies to closed source software but not to open source. Let me explain.

When a security vulnerability is found in a well-maintained open source project, it is typically reported to a security committee whose members all work voluntarily. That committee follows a documented process of verification, resolution, patch release, and disclosure. The primary human emotion at play is likely obligation: people had to volunteer for the committee and thus publicly agreed to help.

What about closed source software? Well, closed source software is owned and managed by for-profit companies. When a security vulnerability is reported, a standard process may likewise be followed. The problem, however, is that every single person involved in the life of that security vulnerability -- the person who reports the issue, the person who processes security reports, the person who circulates the issue internally, and the variety of managers, directors, and VPs who manage these people -- all have one really important thing in common: they all have a damn mortgage.

Their job, of course, is the means by which they pay for that pesky mortgage. Mortgage bills keep coming (until they don't), month after month, an ever-hungry beast that must constantly be fed. The problem is that humans aren't rational actors making purely logical decisions; they make somewhat rational decisions limited by a whole host of factors, like how easily they can solve the problem, the time they have to make the decision, and the boundaries of what they know. The need to protect your job to keep paying a mortgage presents a huge opportunity for biased thinking.

This isn’t to say people are bad by default. They aren’t. But how systems respond under pressure matters a great deal. Any number of issues can compromise the responsible management of a security vulnerability when mortgages are at play: an upcoming earnings call, recent stock volatility, fear of the very real tendency for people to “shoot the messenger”, a recent negative performance review, badly designed bonus incentive programs (for better or for worse, you get what you measure; just ask John Stumpf), or having recently been responsible for another security vulnerability. Social proof -- the tendency for humans to use the behaviour they see around them as a guide for their own behaviour -- creates further opportunities for the negative culture of a business to weaken the security of a system. Of course social proof is at play in open source communities too, but an open source community is a very different thing from an office culture where you see the same people every day.

A common maxim in information security is that you don’t need to worry about the security vulnerabilities you know about; you need to worry about the ones you don’t. It’s wise advice, and it serves as a reminder that there is a whole host of reasons why a piece of software might be more or less secure. As with so much else in life, the security of software is shaped far more by human factors than we might care to admit, or even realize.

Photo (cropped) credit: Matthew Henry
