Awesome (open) source: Why testing, testing and more testing is the key to improvement
Open source is awesome. Have you ever stopped to think how much of the technology you use today actually originates from open source? The Internet, Android phones, Mozilla Firefox, social media sites and Wikipedia have all been developed using open source. It’s all around you and it’s so ubiquitous that in March this year, a single programmer accidentally broke the Internet by deleting some of the open source code he’d created.
Open source is being credited as the solution to some of our age-old problems. It’s keeping costs low for small and medium enterprises. A 2016 survey found that 78% of companies use open source, saving enterprises around £30,000 on IT programmes. It has applications across all sorts of technology - security, development, big data… you name it, there’s open source software for it. Recently, when Adobe announced it would discontinue Flash, developers called for it to be open sourced so the community could save it.
The open source paradigm is often described as a collaborative effort, with firms and enthusiasts coming together in a non-competitive climate. Its wide use means it is often referred to as a public good. That’s a lot of awesomeness right there.
The trouble is that this awesome perfection is not all it seems. There’s so much trust surrounding the community that software developers often erroneously assume open source components are reliable, patched and up to date. Unfortunately, assumptions like that allow for vulnerabilities like those behind the Heartbleed bug. In fact, more than 50% of the Global 500 use vulnerable open source components.
Flaws exist in open source software for a variety of reasons: components may never have been audited or adequately tested, and they are often assumed to be secure simply because they have made it into a widely used application. In practice, these flaws can be far more serious than we assume.
Open source, much like any other IT system, is potentially vulnerable to people looking to exploit the software’s marginal weaknesses. Moreover, open source frequently relies on volunteers to address problems - if they’re tired, busy or underfunded, whole projects may die out. Just look at the guy who has been keeping the time for the internet over the past thirty years. What if something happened to him?
There is also no official responsibility chain, so information is often spread across many sources, making it difficult to monitor for vulnerabilities. The flaws of this approach were recently highlighted by research from Germany, which showed that developers who copy and paste code from flawed online tutorials directly into their open source software can introduce security vulnerabilities along with it.
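To make the copy-paste risk concrete, here is a minimal, hypothetical sketch of the kind of flaw the German research describes: a database lookup built by string interpolation - a pattern that still appears in many tutorials - next to the parameterised version that avoids it. The table and data are invented purely for illustration.

```python
import sqlite3

# Throwaway in-memory database for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

# INSECURE: building SQL by string interpolation, as many tutorials do.
# Crafted input can change the meaning of the query itself.
def find_user_insecure(name):
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'"
    ).fetchall()

# An attacker-supplied value turns the lookup into "return everything".
print(find_user_insecure("x' OR '1'='1"))  # leaks every row

# SAFER: a parameterised query keeps data separate from SQL, so the
# same input is treated as a literal string and matches nothing.
def find_user_safe(name):
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user_safe("x' OR '1'='1"))  # []
```

Copying the first pattern into an open source component quietly imports the tutorial’s flaw along with its convenience.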
Another piece of research, this time from Black Duck Software Inc., reported the results of security audits showing "widespread weakness in addressing open source security vulnerability risks".
This is why testing open source is so important. With many companies lacking an open source policy, the first step is to analyse and regularly track each open source component. Mapping components to known vulnerabilities mitigates risk, and the same inventory helps to identify other potential problems and threats.
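The tracking step above can be sketched in a few lines: keep an inventory of component names and versions, and check each one against a list of known advisories. The component names and advisory entries below are invented for illustration; in practice the advisory data would come from a public vulnerability feed.

```python
# Hypothetical advisory data - in reality this would be pulled from a
# vulnerability feed, not hardcoded.
KNOWN_VULNERABILITIES = {
    ("libexample", "1.0.2"): ["EXAMPLE-ADVISORY-0001"],
    ("parsekit", "2.3.0"): ["EXAMPLE-ADVISORY-0002"],
}

def audit(inventory):
    """Return the components in `inventory` that have known advisories."""
    findings = {}
    for name, version in inventory:
        advisories = KNOWN_VULNERABILITIES.get((name, version))
        if advisories:
            findings[(name, version)] = advisories
    return findings

# An example inventory: one outdated component, one patched one.
inventory = [("libexample", "1.0.2"), ("parsekit", "2.4.1")]
print(audit(inventory))  # only the outdated component is flagged
```

The point is not the code itself but the discipline: the check is only as good as the inventory, which is why tracking every component, not just the obvious ones, matters.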
However, companies often use outdated methods or lack the capacity to take these steps. A broad variety of scanning tools is available that can identify the open source components a company uses which are known to be vulnerable.
It’s not that open source is inherently less secure - commercial software is just as likely to be vulnerable. It’s that with open source, we too often hope somebody else is worrying about its security. Companies such as Diffblue have recently started to address that challenge, recruiting a team dedicated to analysing open source code. Because if open source is rigorously tested and made safer, its benefits certainly outweigh its risks. And that’s just awesome.