The impact of the open source model on security is an oft-studied topic. Rather than attempt to add anything new, I wish to point out some helpful references and address a couple of common misconceptions I have encountered in the clinical trial software and enterprise IT communities.

First of all, using an open source application, database, or server DOES NOT mean that your data is accessible to anyone who tries to access it! Astonishingly, this is a fairly common misconception among people who have only a casual understanding of what open source software is. In fact, open source software powers some of the most secure systems on the planet, including highly regulated systems that require very fine-grained control over which users can access which data.
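To make the distinction concrete, here is a toy sketch (all names hypothetical) of the kind of role-based privilege model that open source databases such as PostgreSQL enforce with their GRANT system. The point is that the openness of the code says nothing about who may read the data; access is decided at runtime by explicitly granted privileges.

```python
from dataclasses import dataclass, field

@dataclass
class Table:
    """A hypothetical table with PostgreSQL-style grants: role -> privileges."""
    name: str
    grants: dict = field(default_factory=dict)

    def grant(self, role: str, privilege: str) -> None:
        # Privileges must be granted explicitly; nothing is open by default.
        self.grants.setdefault(role, set()).add(privilege)

    def check(self, role: str, privilege: str) -> bool:
        # Anyone not explicitly granted a privilege is denied.
        return privilege in self.grants.get(role, set())

trials = Table("trial_results")
trials.grant("investigator", "SELECT")

assert trials.check("investigator", "SELECT")   # explicitly granted
assert not trials.check("anonymous", "SELECT")  # everyone else is denied
```

Even though this access-control code is fully visible, reading it gives an outsider no way past it: the decision depends on the grant table, not on the code being secret.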

Security in open source software is based on a rejection of the “security through obscurity” concept; instead it relies on an approach far closer to the centuries-old scientific model of peer review. From the beginning of a project, open source developers understand that their code will be subject to review and critique by a potentially large community. Rather than simply making vulnerabilities hard to find, open source developers have to rely on sound architecture and design principles. System access privileges, user authentication, and encryption features cannot easily hide “back doors”, and each programmer is motivated not only to produce code that will pass a test suite, but to produce code that will stand up to the (often severe) scrutiny of their peers and users. In this fundamental way, the open source model has repeatedly been shown to produce more secure solutions than a closed-source, proprietary development model.
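This rejection of obscurity is sometimes summarized as Kerckhoffs's principle: a system should remain secure even when everything about it except the key is public. A minimal sketch, using Python's standard `hmac` module (the key and message names are illustrative assumptions, not from the original text):

```python
import hmac
import hashlib
import secrets

# Kerckhoffs's principle in miniature: the algorithm (HMAC-SHA256) is
# completely public, yet the scheme remains secure because the only
# secret in the system is the key itself.
key = secrets.token_bytes(32)

def sign(message: bytes) -> str:
    """Produce an authentication tag; publishing this code weakens nothing."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Constant-time comparison avoids leaking information via timing."""
    return hmac.compare_digest(sign(message), tag)

tag = sign(b"patient record 1138")
assert verify(b"patient record 1138", tag)   # genuine message accepted
assert not verify(b"tampered record", tag)   # any modification is detected
```

An attacker who reads every line of this code, and even the HMAC specification itself, still cannot forge a valid tag without the key; this is exactly the design discipline that open review forces on a project.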

This is not to say you cannot find counter-examples! Of course, any specific technology, open source or not, should be evaluated against your own security criteria. For further information on this topic, see David Wheeler’s reasoned, in-depth discussion of open source software and security.