Anyone who has ever managed a project has probably had to weigh delivering at high speed against high quality and low cost: as the saying goes, you can only pick two. This is usually as true for the delivery of software as it is for anything else, but mounting pressure to digitally transform and continuously deliver updates has made speed a default requirement for most organisations. This leaves a choice between quality and cost, which often comes down to a decision about testing.
Testing—especially unit testing—has been an underappreciated stage in the software delivery lifecycle (SDLC) for decades. It’s historically been slow, resource-intensive, and less interesting than the development of new features, which may be why the primary motivation for many developers to write unit tests is external pressure, e.g. management or customer demands, rather than their own conviction that it’s worth doing. Within organisations that enforce code coverage targets, writing mandated tests can feel a lot like being told to eat your vegetables because they’re good for you.
While testing has gained some ground as organisations see its positive impact on quality, and consequently place a higher value on it, costs remain high and appear to be on the rise. In the Capgemini World Quality Report 2017–2018, the senior IT executives who responded to the survey said they expect to allocate 32% of the total IT budget to testing by 2020, up from the current 26%.
With an increase in both the cost of testing and its importance, will the decision between budget and quality become even harder to make? Maybe not: New, disruptive AI technologies might have finally made it possible to question whether we have to make this choice in the first place.
Meet shorter timescales with agile software
Since the agile manifesto was written in 2001, agile software development methods have encouraged increasingly shorter development cycles and the faster delivery of application updates, and shifting consumer expectations have set this new standard in stone.
Cloud-native companies with flexible software architectures can often meet ambitious time-to-market goals. In contrast, large enterprises are frequently weighed down by legacy code written before unit testing was common practice, and so lack the test suites that would facilitate refactoring and make it possible to release frequent updates without breaking old code. Among IT leaders in financial services who responded to the 2019 Digital Realty survey, 31% said their company’s legacy infrastructure constrained its ability to adopt new technology and experience the full benefits of the technological revolution.
Staying competitive requires finding a way to meet the tight deadlines imposed by rivals and regulators while still delivering a product of sufficiently high quality.
Don’t overlook the importance of testing
For those unconvinced by the benefits of testing, the costs of not testing can perhaps provide a stronger argument. Unit testing, as the most basic and foundational form of testing, makes it possible to catch bugs in the earliest stages of the SDLC, which can save businesses thousands, if not millions of pounds (or even more, depending on the bug) compared to implementing fixes in later stages. Besides the immediate financial cost to repair regression bugs, downtime of a feature users depend on can cause sometimes irreparable damage to user trust and your company’s reputation.
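To make the early-detection point concrete, here is a minimal sketch of a unit test suite, assuming an invented `apply_discount` function: the boundary checks below would flag a pricing regression the moment it is committed, long before it reaches users.

```python
import unittest

def apply_discount(price, percent):
    """Return price reduced by percent (0-100). Invented example function."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_discount_is_identity(self):
        # A regression that rounds or fees this case breaks the suite at once.
        self.assertEqual(apply_discount(19.99, 0), 19.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```

A few minutes spent on tests like these is cheap insurance against the far larger cost of fixing the same bug in production.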
There’s also the long-term cost of skimping on testing in the short term to keep in mind. Any new code written without clear, documented tests is only a few years away from becoming legacy code itself; this is how technical debt is incurred. When releasing a new feature just days ahead of a competitor can make all the difference, this technical debt can quickly take you from leading the pack to lagging behind it.
Embrace continuous integration and development
Automation has been a boon for the organisations that have adopted it, but the technology making it possible isn’t new. Jenkins, the open source automation server that facilitates continuous integration and continuous delivery (CI/CD), was released with its current core features back in 2011; not much has changed in terms of its technological offerings in the past eight years.
Instead, a cultural shift at both the organisational and individual levels has increased the uptake of existing automation tools: attitudes are moving towards an awareness of the need for automation and a higher valuation of testing. A 2018 survey on developer trends by DigitalOcean found that 58% of the developers surveyed are already using continuous integration solutions. Among those not yet using a continuous integration or delivery solution, 43% indicated that they plan to move to CI/CD.
The move towards continuous delivery in particular required a new mindset among potential users: accepting that committing code is followed by running tests, that those tests should exist, and that any failing test will be acted on immediately. Compare this with past attitudes, where developers wrote software without much thought for testing and QA was done later, often by someone else.
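The commit-then-test loop can be captured in a few lines of pipeline configuration. Below is a minimal sketch of a declarative Jenkinsfile; the stage names, build commands, and email address are illustrative assumptions, not taken from any particular project.

```groovy
// Minimal declarative Jenkins pipeline: every commit is built and tested,
// and a failing test suite is reported to the team immediately.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh './gradlew assemble' }
        }
        stage('Unit tests') {
            steps { sh './gradlew test' }
        }
    }
    post {
        failure {
            // Immediate action on any tests that fail.
            mail to: 'team@example.com',
                 subject: "Build ${env.BUILD_NUMBER} failed",
                 body: "See ${env.BUILD_URL}"
        }
    }
}
```

The point is less the specific tool than the workflow it encodes: tests exist, they run on every commit, and failures surface before anyone builds on top of broken code.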
Automation technology is already benefiting business
This adoption of automation has already helped ease the trade-off between cost, speed, and quality. The authors of the book Accelerate: Building and Scaling High Performing Technology Organizations found that teams that adopt automation do tend to deliver higher quality code more quickly.
While undeniably helpful for speed, automation also tends to be cost effective, and automation for testing specifically is a good way to quickly identify and resolve code quality issues. In the same Capgemini World Quality Report 2017–2018 cited above, 60% of respondents reported that test automation improves their ability to detect defects, 57% saw an increase in the reuse of test cases, and 54% saw shorter test cycle times since implementing automation.
So with automation already improving speed, quality, and cost, the next big advancement targets the biggest remaining bottleneck in the testing process: automating the creation of the unit tests themselves.
Look towards the future of AI in coding
In the past three years, AI has advanced to a point where it can develop code with real business applications. Using a mathematical reasoning and learning engine, this type of technology can crawl every path in an existing codebase and automatically generate unit tests for various outcomes, including edge and corner cases. Until now, even organisations that valued unit tests had to sacrifice developer time to write them, or outsource the work at high cost.
AI for code can help developers prevent bugs and address a number of challenges, including shortened software development lifecycles, by automatically generating unit tests alongside the code they are developing. This provides immediate feedback that outsourced tests cannot: the tests are there to consult at the same time the developer writes the source code. For developers, this simultaneous approach means no loss of momentum and no need to shift focus or reconstruct what they were thinking when they wrote the code.
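To illustrate what "edge and corner cases" means in practice, here is a sketch of the kind of tests such a generator might emit for a small, invented `clamp` function (the function, test names, and cases are hypothetical, not the output of any specific tool).

```python
def clamp(value, lo, hi):
    """Clamp value into the closed interval [lo, hi]. Invented example."""
    if lo > hi:
        raise ValueError("lo must not exceed hi")
    return max(lo, min(value, hi))

# Tests probing the edges and corners of the input space:
def test_inside_range():
    assert clamp(5, 0, 10) == 5

def test_at_lower_boundary():  # edge case: value exactly on the boundary
    assert clamp(0, 0, 10) == 0

def test_below_range():
    assert clamp(-3, 0, 10) == 0

def test_degenerate_interval():  # corner case: lo == hi
    assert clamp(7, 4, 4) == 4

def test_inverted_bounds_raise():  # corner case: lo > hi
    try:
        clamp(1, 10, 0)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")

if __name__ == "__main__":
    for test in (test_inside_range, test_at_lower_boundary, test_below_range,
                 test_degenerate_interval, test_inverted_bounds_raise):
        test()
    print("all tests passed")
```

Boundary and degenerate cases like these are exactly the ones human authors tend to skip, which is where automated generation adds the most value.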
It can also allow software architects, developers and IT managers to understand the impact of changes or migrations to unknown legacy code. Automated unit tests enable them to make more informed decisions about the development process.
For CIOs, CTOs, managers, and team leaders, AI for code can identify which areas of your product carry more or less risk, and which are likely to be of higher or lower quality. This is particularly helpful for showing where existing code is not covered and for automatically generating tests to increase that coverage.
The cost versus quality trade-off between testing and budget has influenced boardroom discussions and IT strategies for too long. For any organisation to compete today, both are too important to have to decide between them. Fortunately, things are looking up: with existing CI/CD tools and the automatic creation of unit tests with AI for code, it’s getting easier for businesses to have it all.