parties, all test cases were thrown out for the day and the testers (and anyone else that wanted to "play") were urged to look for bugs. Prizes were awarded for the most bugs found, the biggest bug, and the most creative testing technique. The whole thing was a huge morale booster and resulted in finding many bugs, some of which were significant.
But finding bugs, as important as it was, was not the purpose of the party. You see, they then wrote the test case that would have found the bug, which improved the coverage of their existing test set. But that wasn't the real reason they had the bug parties either. What they were really looking for were entire categories or lists (inventories) of things that they forgot to test. How interesting, they were using ad hoc testing techniques to validate the effectiveness of their systematic testing. Who says testers are not creative!
— Rick Craig
Step 9: Maintain the Testing Matrix
As the system matures and changes, so too should the testing matrix. The testing matrix is a reusable artifact that is particularly valuable in determining what regression tests to maintain and execute for any given release (at least which ones to begin with). The testing matrix is also a valuable tool to help in the configuration management of the test cases, since it helps relate the tests to the system itself. The maintenance of the matrix is a huge undertaking, but without it, the testers must virtually start over with the development of their tests for each new release.
Not only is that a waste of time, but there's always the risk that some great test that was created for a previous release will not be remembered for this one.
Commercial tools are available to help document and maintain the inventories and test cases, but the effort required to maintain the testing matrix is still significant.
[Testing matrix: rows list the inventory items — Requirements 1–3, Features 1–4, and Design elements 1–3 — with X marks in the test-case columns showing which test cases cover each item. The test-case column headings are not reproduced here.]
Black-Box vs. White-Box
Black-box testing, or behavioral testing, is testing based upon the requirements and, just as the name implies, the system is treated as a "black box." That is, the internal workings of the system are unknown, as illustrated in Figure 5-3. In black-box testing, the system is given a stimulus (input), and if the result (output) is what was expected, the test passes. No consideration is given to how the process was completed.
Figure 5-3: Black-Box versus White-Box Testing
Key Point
White-box or black-box testing improves quality by 40%. Together, they improve quality by 60%.
– Oliver E. Cole, Looking Under the Covers to Test Web Applications, STAR East Conference, 2001
In white-box testing, an input must still produce the correct result in order to pass, but now we're also concerned with whether or not the process worked correctly. White-box testing is important for at least two reasons. First, without peering inside the box, it's impossible to test all of the ways the system works: while both black-box and white-box testing can determine if the system is doing what it's supposed to do, only white-box testing is effective at determining if the "how" part of the equation is correct. Second, although we generally assume that a correct result means the process completed successfully, this is not always true. In some cases it is possible to get the correct output from a test for the wrong reason. This phenomenon is known as coincidental correctness, and it is not necessarily discovered using black-box techniques.
Key Point
White-box testing is also called structural testing because it's based upon the object's structure.
Let's say that we have a system that's supposed to estimate hours based upon the complexity of the task being performed. As estimating experts (at least in this fictitious system), we know that the correct algorithm to predict the hours required to complete a certain task might be y=2x, where y is the time estimate and x is the complexity of the task.
So, we know that if the complexity of a task has a value of 2, the task should take 4 hours to complete.
Key Point
Coincidental correctness describes a situation where the expected result of a test case is realized in spite of incorrect processing of the data.
For example, if we input a value of 2 into the system and get an answer of 4, the system must be correct, right? It may be, or it may not. Suppose your programmer, for whatever reason, miscoded the algorithm and put in the formula y=x² (instead of y=2x). If the poor tester is unfortunate enough to put in a test value of 2, the system will give the correct answer in spite of the bad code. However, this is only coincidental. If we run another test with a value of x=3, we would find that our system gives a result of 9 instead of 6!
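The estimating example above can be sketched directly in code. This is a minimal illustration of the scenario described in the text, not production code; the function names are made up for the example.

```python
# The specification: estimated hours y = 2x, where x is task complexity.
def estimate_correct(x):
    return 2 * x

# The buggy implementation: the programmer miscoded y = x^2 instead of y = 2x.
def estimate_buggy(x):
    return x ** 2

# With x = 2, the buggy code is coincidentally correct: both give 4.
print(estimate_buggy(2), estimate_correct(2))  # 4 4

# With x = 3, the defect is exposed: 9 instead of the expected 6.
print(estimate_buggy(3), estimate_correct(3))  # 9 6
```

A black-box test that happens to use x=2 would pass; only a test value where the two formulas diverge (or a look inside the box) reveals the bug.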
To find bugs like these, we need to look inside the box. White-box testing would have been more effective in finding the sample bug than black-box testing (although probably the most effective way to have found the bug in the example would have been using code inspection).
Another important point about white-box testing is that it allows the testers to use their knowledge of the system to create test cases based on the design or the structure of the code. However, in order to conduct white-box tests, the testers must know how to read and use software design documents and/or the code.
Key Point
White-box is also known as clear-box, glass-box, translucent-box, or just about any other non-opaque box.