4.3 THE TWO-DIMENSIONAL TESTING STRATEGY CHESS BOARD
Now we have four chess pieces (static testing, white box testing, black box testing, and performance testing) that we can use to plan our winning moves. What does the chess board look like? Figure 4.1 shows you the empty test planning “chess board” as a starting point. Figure 4.1 crosses the Table 2.1 development stages as columns with the software layers to be tested as rows. The software testing techniques (chess board “pieces”) we just discussed appear with their icons in the legend at the upper right and will be placed on the appropriate squares of the chess board as the testing strategy develops.
Figure 4.1 The testing strategy chess board (columns: Preliminary investigation + Analysis, Preliminary construction, Final construction, Ship or Install, Post-implementation; rows: application, connectivity, data resources, security, operating system; legend: static, white box, black box, and performance testing icons; X = too late to test)
At the bottom along the horizontal axis, or x-axis, are the phases of the phased development methodology (PDM) discussed in Chapter 2. The PDM starts with the left-most column and proceeds left to right until implementation is complete. Each subsequent column to the right represents the next phase of development.
At the left side along the vertical axis, or y-axis, are the software platform layers necessary to operate a typical software application. The bottom-most row represents the most basic software, the operating system, the software that “talks” directly to the hardware. The next layer up is security software that restricts/allows access to all the layer activities above it. The next layer up is data resources that provides file or database management of data stored on fixed or removable media. The next layer up is connectivity that provides interchange of tasks and data among software components located on different computers via networks. These connected computers can be located physically in the same room, the next room, the next building, the next city, the next state, the next country, or the next continent. Finally, the topmost row represents the application under development that is the primary focus of the test plan. The existence of layers below the application under development gives us a strong indication that planning testing just for the application under development may be insufficient.
So far, software testing may appear to be a simple choice among a number of testing techniques whenever a tester sees the need. In fact, the strategy for using these techniques is much more complex. The key to successful testing is to develop the appropriate testing strategy for the application under development and the supporting layers beneath the application before any actual testing is begun. The testing strategy involves using an intelligent combination of testing techniques.

Figure 4.2 Testing strategy for the application under test (top row filled in: static testing in Preliminary investigation/Analysis/Design; static, white box, and black box in Preliminary construction; static, black box, and performance in Final construction; X = too late to test at Ship or Install; static and performance in Post-implementation)
The testing strategy discussion will first focus on just the top row, the application under development. Assume for the current discussion that the application under development will be custom written. Off-the-shelf software package testing will be discussed later in this chapter. Figure 4.2 shows you an optimal placing of chess pieces on the test planning “chess board” for your custom-written application under development (top row).
For the Preliminary investigation, Analysis, and Design phases (leftmost column), there is only a magnifying glass strategy in the application under development (top row), indicating only static testing to be planned for these phases. At this stage in the life cycle, the application exists only as design documents: requirements, specifications, data structures, and so forth. No program code has been written yet. So the only development artifacts that can be tested are the documents. This kind of testing done at this stage in the life cycle is concerned with two issues:

1. identifying incomplete, incorrect, or conflicting information within each document and across all documents that describe all aspects of the application to be developed (a sketch of this kind of check follows the list), and
2. confirming that the document objectives are testable when they have been translated into software.
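To make the first issue concrete, here is a minimal sketch in Python, assuming two hypothetical plain-text documents, requirements.txt and design_spec.txt, whose requirements carry identifiers in an assumed REQ-nnn convention: the script flags any requirement ID that appears in one document but not the other, one small instance of cross-document static checking.

```python
import re
from pathlib import Path

REQ_ID = re.compile(r"REQ-\d+")  # assumed ID convention, e.g. REQ-004

def req_ids(path):
    """Collect every requirement ID mentioned in a document."""
    return set(REQ_ID.findall(Path(path).read_text()))

def cross_check(doc_a, doc_b):
    """Report IDs that appear in one document but not the other."""
    ids_a, ids_b = req_ids(doc_a), req_ids(doc_b)
    for missing in sorted(ids_a - ids_b):
        print(f"{missing} appears in {doc_a} but is never addressed in {doc_b}")
    for missing in sorted(ids_b - ids_a):
        print(f"{missing} appears in {doc_b} but has no source in {doc_a}")

cross_check("requirements.txt", "design_spec.txt")
```

A real static test would of course cover far more than identifiers, but even this cursory cross-check can surface incomplete or conflicting documents before any code is written.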
For the Preliminary construction phase (second column from the left), there is a magnifying glass, a white box, and a black box strategy piece in the application under development row. At this phase in the life cycle, there is a rich set of artifacts to test: environment setup documentation, program source code, data, and program code that can be executed. Besides source code walkthroughs (magnifying glass), there is testing of the newly written code paths (white box) and code input/output behavior (black box) as the written code becomes complete.
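A small sketch can illustrate the difference between the two construction-phase techniques, using a hypothetical discount function as the code under test: the white box tests are chosen by reading the source so that each branch is exercised, while the black box test uses only documented inputs and expected outputs.

```python
def discount(order_total):
    """Hypothetical code under test: 10% off orders of $100 or more."""
    if order_total >= 100:
        return order_total * 0.90
    return order_total

# White box: one test per code path, chosen by reading the source.
def test_paths():
    assert discount(100) == 90.0   # drives the True branch (boundary value)
    assert discount(99) == 99      # drives the False branch

# Black box: inputs and expected outputs taken from the specification,
# with no knowledge of the branches inside.
def test_io_behavior():
    for order_total, expected in [(0, 0), (50, 50), (200, 180.0)]:
        assert discount(order_total) == expected

test_paths()
test_io_behavior()
print("all construction-phase checks passed")
```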
For the Final construction phase (third column from the left), there is a magnifying glass, a black box, and a hammer strategy piece in the application under development row. Some of the later-produced documentation, such as user’s guides, training guides, installation guides, and operating manuals, needs to be tested here.
Testing “inside” the code (white box) is no longer practical because all the code components have been “packaged” or integrated together via compilation, linking, or bindings. Testing of the packaged, more complex code component inputs and outputs (black box) is continued during this phase. The final testing that remains at the end of this phase is verification that the new application installs correctly and operates properly (both black box testing) in its documented production environment.
The hammer represents two different kinds of performance testing strategies:

1. performance baseline, and
2. workload.

In traditional performance baseline testing, response times for single transactions or activities in an empty system are verified against performance requirements as an extension of black box testing. This performance baseline testing is a wakeup call for the developers, knowing that a slow transaction in an empty system will get no faster as more transactions are added to the system. As the performance baseline results begin to fall in line with requirements, load testing of large numbers of transactions is planned and performed. The load testing decisions about the mix of transactions and how many of each transaction to test come from a business workload analysis that will be discussed in Chapter 9.
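Here is a minimal sketch of the baseline half of that strategy, assuming a hypothetical submit_order transaction and an assumed 0.5 second response-time requirement: a single transaction is timed in an otherwise empty system and verified against the requirement, as an extension of black box testing.

```python
import time

RESPONSE_TIME_REQUIREMENT = 0.5  # seconds; assumed requirement

def submit_order():
    """Stand-in for a single business transaction in an empty system."""
    time.sleep(0.1)  # simulated work
    return "order accepted"

def baseline_test(transaction, repetitions=5):
    """Time one transaction several times; report the worst case."""
    worst = 0.0
    for _ in range(repetitions):
        start = time.perf_counter()
        result = transaction()
        elapsed = time.perf_counter() - start
        assert result == "order accepted"   # the black box check still applies
        worst = max(worst, elapsed)
    return worst

worst = baseline_test(submit_order)
assert worst <= RESPONSE_TIME_REQUIREMENT, (
    f"baseline {worst:.3f}s exceeds requirement {RESPONSE_TIME_REQUIREMENT}s")
print(f"worst observed response time: {worst:.3f}s")
```

Only after single-transaction baselines like this one meet their requirements is it worth investing in the heavier workload testing of many concurrent transactions.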
For the Ship or Install phase (fourth column from the left), we suggest that it is too late to test because the application is no longer available to the development team. Another way to say it is, “when the application is ready to ship, by definition the testing is done.”
For the Post-implementation phase (last column to the right), there are magnifying glass and hammer strategies in the application under development row. The static testing (magnifying glass) of implementation checklists and first use of operational manuals are done after the new installation is verified correct. Lessons learned documents are also static tested for thoroughness, completeness, and accuracy. The first few days and weeks of new application operation are monitored to compare business workload and application performance test results with actual business workload and actual application performance under that workload in production. Comparison discrepancies found in either workload or performance testing become issues either for short-term solutions, for example, faster hardware, or longer-term solutions, for example, redesigning the next release for better performance.
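As a minimal sketch of that comparison, with all figures hypothetical: the workload and response times predicted by testing are compared to early production observations, and any discrepancy beyond an assumed tolerance is raised as an issue.

```python
# Hypothetical test-time predictions vs. early production observations.
predicted = {"orders/hour": 1200, "avg response s": 0.8}
observed  = {"orders/hour": 1950, "avg response s": 1.4}

TOLERANCE = 0.20  # assumed: flag anything more than 20% off prediction

for metric, expected in predicted.items():
    actual = observed[metric]
    drift = (actual - expected) / expected
    if abs(drift) > TOLERANCE:
        print(f"discrepancy in {metric}: predicted {expected}, "
              f"observed {actual} ({drift:+.0%}) - raise as an issue")
```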
When a company purchases a software package, the development and testing situation is similar to the Final construction phase of custom-written software. The only application artifacts to test are the documentation and executable code. No requirements or specifications or source code are provided with purchased software. So you test what is available, namely the documentation (magnifying glass) and the input/output behavior (black box) against your company’s purchase evaluation criteria. Performance (hammer) testing is done in the intended production environment with samples of real business data to validate the software package performance against your company’s performance criteria. Companies that do not insist on testing a purchased package as a prerequisite to the purchase will always be disappointed with the products they buy.
Next release testing
Changes, corrections, and additional features are an inevitable part of the software development life cycle regardless of whether it is custom code or a purchased package. Just consider how many “versions” of your word processor you have installed in the last 5 years. For a next release, the development and testing activities typically follow an abbreviated version of the PDM. Many of the critical design decisions have already been made. The next release probably represents additional functionality within the context of the current design. So both the development and testing follow similar plans from the previous release and invest effort most heavily in the new or updated code required for the next release. If good test planning is done during the original development project, most of the black box test scripts are designed to be reusable in subsequent release tests. The reuse of these tests is called “regression testing.” The primary purpose of regression testing is to verify that all the changes, corrections, and additional features included in the next release do not inadvertently introduce errors in the previously tested code. Regression testing will be discussed further in Chapter 7.
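A minimal sketch of that reuse, assuming the previous release’s black box scripts were written as plain Python functions and collected into a suite: the same scripts are rerun unchanged against the next release’s build, and any failure flags an inadvertently introduced error.

```python
def release_2_login(username, password):
    """Next-release build of a hypothetical login function."""
    return username == "admin" and password == "secret"

# Black box scripts designed during the original project, reused verbatim.
def test_valid_login(login):
    assert login("admin", "secret") is True

def test_wrong_password(login):
    assert login("admin", "guess") is False

REGRESSION_SUITE = [test_valid_login, test_wrong_password]

def run_regression(login, suite):
    """Rerun every previously passing script against the new release."""
    failures = []
    for script in suite:
        try:
            script(login)
        except AssertionError:
            failures.append(script.__name__)
    return failures

failures = run_regression(release_2_login, REGRESSION_SUITE)
print("regression failures:", failures or "none")
```

The design choice that makes this possible is writing the original black box scripts against stable, documented behavior rather than against implementation details that the next release is free to change.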
Figure 4.3 shows the updated testing strategy chess board. The top row representing the application under development test strategy is now complete and ready to drive the test plan. The remaining four rows representing the supporting software layers have their test strategies copied down from the first row as the first draft of their test strategy. The question mark to the right of each row indicates the need to validate or modify the draft test strategy at each subsequent layer.
Figure 4.3 Testing strategy for the supporting software layers (the completed top-row strategy is copied down to the connectivity, data resources, security, and operating system rows as a planned first draft; a question mark to the right of each supporting row marks a draft strategy still to be validated; X = too late to test)
If all of the support layers for the application under development have been used successfully many times by the developers, then the support layers are considered “trusted,” and only cursory test planning is necessary to reverify their “trustedness.” If any of the support layers are new to development (and, by implication, to production), then you need to seriously consider a full test plan for that support layer and all support layers above it. Strongly consider testing the support layers as far in advance of the application under development coding as possible. If the new support software layer does not pass all verification tests, the developers could be forced to redesign their application in mid-stream to use different support software … that should likewise be tested before redevelopment is too far along.
Popular approaches for designing e-business software have presented the support layer testing strategist with a new dilemma. The design of the new e-business software may rely on trusted support layers, but the combination of trusted components is new to development and production. A cautious, conservative testing strategy is recommended. Consider testing the new support combinations more thoroughly than if they were truly trusted but less thoroughly than if they were completely new components. Test key features and functionality that the application under development must rely on. If no issues arise, complete the testing at a cursory level. If issues arise, deepen the testing effort to help the developers quickly formulate a go/no go decision with this support combination.
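One way a test plan might express that cautious strategy is the tiered sketch below, with all check names hypothetical: a cursory smoke tier covers the key features the application relies on, and a deeper tier runs only if the smoke tier surfaces an issue.

```python
def smoke_checks():
    """Cursory tier: key features the application must rely on."""
    # Hypothetical checks; replace with the real support-layer calls.
    yield "connect", lambda: True           # e.g. open a database session
    yield "round_trip", lambda: True        # e.g. write then read one row

def deep_checks():
    """Deeper tier, run only when the cursory tier raises issues."""
    yield "concurrent_access", lambda: True
    yield "failover", lambda: True

def run_tier(checks):
    """Return the names of every check that fails in this tier."""
    return [name for name, check in checks if not check()]

issues = run_tier(smoke_checks())
if not issues:
    print("trusted combination reverified at a cursory level")
else:
    print("smoke issues:", issues, "- deepening the testing effort")
    issues += run_tier(deep_checks())
    print("go/no go input:", "no go" if issues else "go")
```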
4.4 THE THREE-DIMENSIONAL TESTING STRATEGY