Why test software? It is surprising how many corporations today, large and small, still ask this question. Testing is often misunderstood and therefore undervalued.
At its most basic level, software testing is intended to reveal bugs: inconsistencies or deviations from the expected behaviour. However, “inconsistency or deviation from the expected behaviour” is a very subjective concept. It is not uncommon to hear that a tester had to fight with the product manager or developer over the validity of a bug they'd found. And that's the story you hear from professional testers. In many cases, a test team is still made up primarily of support staff with little to no field experience or knowledge of software testing strategy. While such a team may consist of very intelligent people with vast knowledge of the particular software they are testing, without a broader understanding of why and how software should be tested they are likely to meet with frustration and miss the real benefits of a sound testing approach. To reap those benefits, it is important to understand how software testing has traditionally been approached, and the skills necessary to accomplish basic to advanced levels of testing today.
Businesses, by their nature, measure the value of a task by the dollars it brings versus the dollars it costs: the so-called ROI – Return On Investment. But calculating the cost of a theoretical defect released into production is only a guessing game. A simple Internet search will show you many studies on the costs of fixing a defect late in the development cycle, or even post-production. A look at news headlines will show the amount of effort large corporations have to invest in restoring their public image after confidential customer data was accidentally released on the Internet. But if buggy software is released, how do you put a price on loss of customer confidence, loss of market share, and in extreme cases perhaps loss of life?
Testing will never reveal 100% of all defects. According to Cem Kaner, “Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test.” The information is compared against specifications, business requirements, competitive products, past versions of this product, user expectations, industry standards, applicable laws, and other criteria.
Each of these criteria may be given different weight. For example, banking software has to abide by applicable laws and industry standards first, whereas a video game has to satisfy user expectations above all else.
Checking vs. Testing
In the classic (old school) development life-cycle, developers did just enough to prove the software works – they only checked that their software did what they thought it was supposed to do. That was all the testing that was done, and there are companies even today that still believe this is all that is needed. This is not testing; this is only checking your assumptions.
Even in high school we were encouraged to have someone else proofread our papers before submitting the final draft to the teacher. With the complexities of today's software, it is imperative to have testers who provide such a check for software. “Given enough eyeballs, all bugs are shallow” is the principle Eric S. Raymond dubbed Linus's Law, after Linus Torvalds: the more people who review the code, the more defects come to light. Yet even when using a peer code review tool, like Collaborator or GitColony, defects will still make it into the testing phase. Positive testing – testing which does not exercise any error conditions in the software – is affectionately called testing the happy path. You are only looking at the perfect-world scenarios, where nothing ever goes wrong and everything goes according to plan.
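As a sketch of what a happy-path check looks like in practice, consider the following Python snippet. The `parse_age` helper is hypothetical, invented purely for illustration; the point is that the test feeds the function only the input the developer expected all along.

```python
# Hypothetical example of a "happy path" check: only well-formed
# input is exercised, mirroring the developer's own assumptions.
def parse_age(text):
    """Convert a user-supplied string to an integer age."""
    return int(text)

def test_parse_age_happy_path():
    # Perfect-world scenario: the user types a valid number.
    assert parse_age("42") == 42

test_parse_age_happy_path()
```

Note what this check never asks: what happens when the user types "forty-two", an empty string, or a negative number? Those questions are exactly where checking ends and testing begins.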
Happy-path testing is important, and often the results of these tests are what management and stakeholders are most interested in, because they prove that the initial business requirements were met. However, the real world is filled with users who are computer illiterate, users who make mistakes, and of course users who have malicious intentions. All these (often unstated) business requirements have to be tested as well, and to avoid the catastrophes mentioned above, they have to be tested more thoroughly than the happy-path check done by development.
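Negative tests put those real-world users front and centre. The sketch below, again using a hypothetical `parse_age` helper, shows what testing mistaken and malicious input might look like; the specific validation rules are assumptions made up for this example.

```python
# Hypothetical negative tests: exercising mistakes and hostile
# input rather than the happy path.
def parse_age(text):
    value = int(text)          # raises ValueError on non-numeric input
    if not 0 <= value <= 150:  # reject implausible ages
        raise ValueError(f"age out of range: {value}")
    return value

def test_rejects_garbage():
    # A confused user types words instead of a number.
    try:
        parse_age("forty-two")
    except ValueError:
        return
    raise AssertionError("garbage input was accepted")

def test_rejects_out_of_range():
    # A malicious or mistaken user submits an absurd value.
    try:
        parse_age("99999")
    except ValueError:
        return
    raise AssertionError("out-of-range input was accepted")

test_rejects_garbage()
test_rejects_out_of_range()
```

Notice that writing these tests forced a decision the happy path never raised: what *should* the software do with bad input? Unstated requirements like this are precisely what a tester surfaces.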
Further, even if developers check their assumptions to their satisfaction, the tester can provide an alternate point of view on the initial assumptions (the interpretation of business requirements or a standard). It is not uncommon for an industry-accepted standard to have a very broad range of interpretations. A perfect example is the need for cross-browser testing: supposedly all browsers adhere to one HTML standard, yet it is now common knowledge, and an accepted reality of developers' lives, that they have to go to great lengths to make their applications work in all browsers. The point of testing is not just to find defects; it is to be an end-user advocate. It is not enough for software to behave correctly – it should actually make the end-user's job easier (and life better?).