Overview of Structured Testing
3.3 SPRAE—A GENERIC STRUCTURED TESTING APPROACH
Dr. Edward L. Jones, Florida A&M University, developed and published a simple, five-step checklist for software testing. His intended audience is software developers
with the expectation that software developers who know how to test make better software developers. We agree with Dr. Jones’ expectation and subscribe to his statement:
“… SPRAE is general enough to provide guidance for most testing situations.” [17]
We also believe that Dr. Jones’ statement intimates that the greater value of SPRAE is in the hands of experienced software testers.
Our professional examination of SPRAE has revealed SPRAE to be as robust as the checklists available from the three common sources of testing methods previously mentioned. This implies a double benefit for those software professionals and students who invest the time and effort to learn SPRAE. The first benefit is core testing concepts and a checklist that does deliver good testing. The second benefit is substantially transferable skills from testing job to testing job, regardless of the customized testing checklists used by the new employer. Recall that most of the differences among commercial testing methods are found in the development cycle terminology and milestone positioning. Learn the new development life cycle and terminology and prior testing checklist experience can be leveraged in the new development organization.
The SPRAE checklist has five items. The acronym "SPRAE" is derived from the first letter of each checklist item.

Specification
Premeditation
Repeatability
Accountability
Economy
3.3.1 Specification
The specification is a written statement of expected software behavior. This software behavior may be visible to the end user or the system administrator or someone in between. The intent of the testing specification is to give focus to all subsequent test planning and execution. Dr. Jones states the corollary of this principle to be "no specifications, no test." This is the pivotal concept most often misunderstood about software testing. Dr. James A. Whittaker at the Florida Institute of Technology reinforces Dr. Jones' insistence on specifications as a prerequisite for successful testing by continuously asking his testing students, "Why are you testing that?" [18]
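To make the "no specifications, no test" principle concrete, here is a minimal sketch in Python; the `withdraw` function and its specification are hypothetical illustrations, not taken from the text. The point is that every assertion in the test traces directly back to a sentence in the written specification, answering Dr. Whittaker's question, "Why are you testing that?"

```python
# Hypothetical specification: "Given a non-negative account balance and a
# withdrawal amount, withdraw() returns the new balance; it rejects any
# withdrawal that would drive the balance below zero."

def withdraw(balance, amount):
    """Implementation under test (hypothetical)."""
    if amount < 0 or amount > balance:
        raise ValueError("withdrawal rejected by specification")
    return balance - amount

def test_withdraw_follows_specification():
    # Each assertion maps to a clause of the written specification.
    assert withdraw(100, 30) == 70        # normal withdrawal
    assert withdraw(100, 100) == 0        # boundary: balance reaches zero
    try:
        withdraw(100, 101)                # violates the "below zero" clause
        assert False, "expected the withdrawal to be rejected"
    except ValueError:
        pass

test_withdraw_follows_specification()
```

With no specification, there would be no basis for choosing these assertions over any others, which is exactly the corollary's point.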
3.3.2 Premeditation
Premeditation is normally expressed as written test plans, test environments, test data, test scripts, testing schedules, and other documents that directly support the testing effort. The actual quantity of documentation varies widely with the size and duration of the testing project. Small, quick testing projects need only a few, concise premeditation documents. Large, extended duration testing projects can produce stacks of premeditation documents. One criticism publicly leveled at most commercial testing methods is that their required premeditation documentation is often overkill, wasting valuable tester resources and testing schedule to produce documentation that does not add commensurate value to the testing effort.
The message to the new software tester is clear. Too little premeditation places the testing project at risk to fail because of inadequate planning. Too much premeditation places the testing project at risk to fail because the extra time consumed in planning cannot be recovered during test execution.
3.3.3 Repeatability
This item arises from a software process dimension called "maturity." The Software Engineering Institute at Carnegie-Mellon has established an industry-wide yardstick for measuring the relative success that a company can expect when attempting software development. [19] This yardstick is called the Capability Maturity Model Integration (CMMi). Based on the CMMi, successful development and testing of software for wide ranges of applications requires the testing process to be institutionalized. In other words, once a test has been executed successfully, any member of the test team should be able to repeat all the tests and get the same results again. Repeatability of tests is a mature approach for test results confirmation. A testing technique called "regression test," described in a later chapter, relies heavily on the repeatability of tests to succeed.
3.3.4 Accountability
Accountability is the third set of written documentation in SPRAE. This item discharges the tester's responsibility for proving he or she followed the test plan (premeditation) and executed all scheduled tests to validate the specifications. Contrary to many development managers' expectations, testing accountability does not include the correction of major defects discovered by testing. Defect correction lies squarely in development accountability. Supporting test completion documentation normally comes from two sources. The first source is the executed tests themselves in the form of execution logs. The more automated the testing process, the more voluminous the log files and reports tend to be. The second source is the tester's analysis and interpretation of the test results relative to the test plan objectives.
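The execution-log source of accountability can be sketched as follows; the harness and test names are hypothetical, kept deliberately minimal. Note that the harness only records a FAIL, in keeping with the point above that correcting the defect belongs to development, not testing.

```python
import json
from datetime import datetime, timezone

def execute_test(name, test_fn, log):
    """Run one planned test and append a log record, so the tester can
    later prove which scheduled tests were executed and with what result."""
    try:
        test_fn()
        status = "PASS"
    except AssertionError:
        status = "FAIL"   # recorded only; the defect itself is development's to fix
    log.append({
        "test": name,
        "status": status,
        "executed_at": datetime.now(timezone.utc).isoformat(),
    })
    return status

def arithmetic_sanity():
    assert 2 + 2 == 4

def deliberately_broken():
    assert 2 + 2 == 5

log = []
execute_test("arithmetic sanity", arithmetic_sanity, log)
execute_test("deliberately broken", deliberately_broken, log)

# The accumulated log is the first accountability source: raw execution
# evidence. The tester's written analysis of it is the second.
print(json.dumps(log, indent=2))
```

A real, more automated harness would produce far more voluminous logs, as the text notes, but the accountability principle is the same.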
One significant implication of the accountability item is that the tester can determine when testing is complete. Although a clear understanding of test completion criteria appears to be a common sense milestone, you will be amazed by how many test teams simply plan to exhaust their available testing time and declare "testing is completed."
There exists a philosophy of software testing called "exploratory testing" that is emerging in the literature. [20] This philosophy advocates concurrent test design and
test execution. Although some interesting results have been obtained by experienced testers using the “exploratory testing” approach, its premise seems to preclude accountability in the SPRAE context and appears to contradict prudent testing practices for the inexperienced tester.
3.3.5 Economy
The economy item is more representative of a kind of thinking and planning like repeatability than a kind of documentation like specifications and premeditation.
The crux of the economy item is testing cost effectiveness, which can be measured in many ways. The introductory chapter examined some of the high-level cost issues around software testing, namely the total cost of testing compared to the total cost of the business risk reduced by testing. This SPRAE item requires the technical teams to develop a detailed testing budget from which the total cost of testing can be computed.
Because software testing is basically another kind of technology project, expected testing personnel and equipment costs are included in the testing budget. Budget items fairly unique to testing include test data preparation, testing environment setup and teardown (not just a desktop computer per tester), and possibly automated testing tools. These unique budget items will be examined in depth in a later chapter.
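The budget arithmetic the economy item requires can be sketched in a few lines. All of the figures below are hypothetical placeholders; a real project would substitute its own estimates for personnel, equipment, and the testing-specific line items named above.

```python
# Hypothetical testing budget (all figures illustrative only).
testing_budget = {
    "testing_personnel": 60000,
    "testing_equipment": 10000,
    "test_data_preparation": 8000,
    "environment_setup_teardown": 12000,   # more than a desktop per tester
    "automated_testing_tools": 15000,
}

total_cost_of_testing = sum(testing_budget.values())

# Hypothetical estimate of the business risk the testing reduces.
business_risk_reduced = 250000

# The high-level economy check from the introductory chapter: testing is
# cost effective when its total cost is well below the risk it removes.
assert total_cost_of_testing < business_risk_reduced
print(f"Total cost of testing: ${total_cost_of_testing:,}")
```

The comparison is deliberately coarse; the point is that economy requires a computed total, not a guess.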
Finally, the testing schedule can be considered a contributor to the economy of a test project. Because testing is often considered (incorrectly) to be a necessary evil between development completion and deployment, the development manager may consider relegating test execution to the third shift, where it can be done on schedule without interfering with daily development and routine business. This "night owl" approach to testing will actually increase the time necessary to complete testing, causing both testing and development schedule overruns.
To understand the reasons for this reverse economy, place yourself in a tester's shoes executing tests on schedule at 2 A.M. One of your new test scripts blows up. Under daytime testing circumstances, you might contact one of the senior end users in your team to determine if the test script is attempting to validate the specifications incorrectly. Another possibility might be contacting the developer to determine if the program code and application specifications are in conflict. A third possibility might be contacting the system administrator to determine if there is a problem with the recent program build or data load for testing. None of these courses of action is available to help you resolve the test script problem. Everybody you need to contact is at home in bed fast asleep. The best you can do is leave notes around the office or on voice mail, close down your testing activity for the night (this problem is a testing showstopper), and go home to bed yourself. What could have been resolved in an hour or two during the day shift will now stretch over 8–10 hours while everybody finishes their night's sleep, finds your notes, and begins to respond to your problem. Your testing schedule just went out the window with the first major testing problem encountered.
In summary, SPRAE gives the experienced software tester a simple and effective checklist of five items that can lead to successful testing. Subsequent chapters use SPRAE to examine and demonstrate the breadth of software testing techniques that represent foundation testing skills.
3.4 PUTTING THE OVERVIEW OF STRUCTURED