7 basic principles in software testing

Software development has evolved into a fairly consistent model across organizations, one in which developers are increasingly freed from testing tasks. While it is true that developers often have their own tools to test their code, normally with a unit-testing approach, nowadays it is more common for companies to look for more than just testing the code by adding QA processes.

Such QA processes improve not only bug detection but also bug prevention. This is why, in my opinion, a QA engineer must go one step beyond testing and implement processes and systems aimed at finding the root cause of defects. In this context, experts have proposed a series of guidelines over the last decades that can be summarized in the following 7 testing principles:

1. Subjecting the code to a set of tests can show the presence of its defects but not their absence

A test set, if well designed, is a tool for detecting defects in the product. According to the International Software Testing Qualifications Board (ISTQB), test sets reduce the probability of releasing the software to the market with undiscovered defects. But even if our test plans find no defects, we cannot guarantee the software is bug-free.
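
As a minimal sketch of this idea (the `is_leap_year` function and its tests are hypothetical, not taken from any real project), the suite below passes and therefore reveals nothing, yet the defect is still there:

```python
def is_leap_year(year: int) -> bool:
    # Hypothetical buggy implementation: it ignores the century rule.
    return year % 4 == 0


# Both tests pass, so the test set shows no defects...
def test_common_leap_year():
    assert is_leap_year(2020)

def test_common_year():
    assert not is_leap_year(2019)

# ...but it says nothing about inputs it never exercises:
# is_leap_year(1900) returns True, although 1900 was not a leap year.
# Passing tests show that the inputs we tried behave as expected,
# not that no defects exist.
```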

2. Achieving full testing coverage is impossible

Testing a software product completely is not feasible. It is not realistic to create test sets that cover every permutation and combination of inputs, every value that each variable can take, every precondition, and so on. This is only possible in specific cases where the range of possible combinations is very constrained. Instead of aiming at absolute test coverage, test design should aim at prioritising and mitigating risks.
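
A rough back-of-the-envelope count shows why. The numbers below are purely illustrative (a hypothetical form with 10 fields), but the combinatorial explosion they show is the general case:

```python
# Hypothetical example: a form with 10 independent fields, where each
# field has only 5 representative values worth testing.
fields = 10
values_per_field = 5

total_combinations = values_per_field ** fields  # 5^10
print(total_combinations)  # 9,765,625 cases for one small form

# Even at one automated check per second, exhausting them all would
# take roughly 113 days:
print(total_combinations / 86_400)  # ≈ 113 (days)
```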

3. Testing the software at an early stage of its production saves time and money

Ideally, test execution should take place as soon as possible, since it speeds up the correction of errors and the detection of hidden defects whose fixes would entail a high cost if they were only discovered once the product is on the market.

4. Software flaws tend to cluster together

Evidence gathered in the testing industry in recent years suggests that a limited number of modules contain most of the defects that cause functional failures. Detecting where most defects are usually located produces very valuable information for risk analysis.
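
A simple way to surface that information is to count defects per module from the bug tracker's records. The sketch below assumes a hypothetical export where each defect is tagged with the module it was found in:

```python
from collections import Counter

# Hypothetical bug-tracker export: one entry per defect, tagged with
# the module where it was found.
defects = [
    "checkout", "checkout", "payments", "checkout", "search",
    "payments", "checkout", "checkout", "profile", "payments",
]

by_module = Counter(defects)

# Modules sorted by defect count: the top entries are the natural
# candidates for deeper testing in the next risk analysis.
for module, count in by_module.most_common():
    print(f"{module}: {count}")
# checkout: 5
# payments: 3
# search: 1
# profile: 1
```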

5. Using the same tests over time will not prevent the occurrence of new failures

If the same test sets are run over and over again, we will not find new failures, since we would only be targeting already known defects. Preventing new or hidden bugs, however, requires QA engineers to renew the tests as well as the data used to run them.
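
One lightweight way to keep a suite from going stale is to keep its data in one place and extend it regularly. The example below is a hypothetical pytest sketch (the `normalize_phone` function is invented for illustration): adding a new case is a one-line change, and here the renewed data exposes a defect the original cases never hit:

```python
import pytest

def normalize_phone(raw: str) -> str:
    # Hypothetical function under test: strip formatting characters.
    return raw.replace(" ", "").replace("-", "")

CASES = [
    # The original data set: running only these forever finds nothing new.
    ("600 123 456", "600123456"),
    ("600-123-456", "600123456"),
    # Renewed data added later: parentheses are not stripped,
    # so this case fails and reveals a previously hidden defect.
    ("(34) 600123456", "34600123456"),
]

@pytest.mark.parametrize("raw,expected", CASES)
def test_normalize_phone(raw, expected):
    assert normalize_phone(raw) == expected
```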

6. Testing is context-dependent

The strategies used to design test cases change according to the type of product or the market where it operates. A program that monitors the vital signs of hospital patients necessarily requires highly exhaustive testing, while an online shopping application will instead require a quick response to user demands.

7. Producing bug-free software is a fallacy

Some companies may expect their QA engineers to somehow guarantee that all possible tests are executed, thereby producing a flawless product. However, as we saw in the second principle, absolute coverage is unattainable, and aiming for it is inefficient.

Apart from this, it is fallacious to think that producing failure-free code will guarantee the product's success. We can try to exhaust every permutation we can think of in our tests, and even if we achieve it, the software can ultimately fail to meet customer needs. It is therefore important not to confuse exhaustive testing with the product's success in the market.

QA engineers should design tests as exhaustively as possible, but focus on product usability, user experience and, above all, on making sure the product meets the user's needs.