Because most software platform components come prepackaged (no source code available), the white box techniques cannot be applied to the software platform. Because the software platform behavior is seldom observed directly by the end user, most of the black box testing techniques except intuition and experience cannot be applied to the software platform either. This is an area of testing that relies on the tester's own experience in a system administrator role or on the tester's collaboration with system administrators. Several structural testing techniques are described here.
8.2 INTERFACE TESTING
Interface testing focuses on data transferred between the application under test and different software platform components. Examples of data transfer mechanisms that need testing include data files, application program interfaces (APIs), database requests, and network data transfers. One helpful way to develop interface testing is to consider a four-step approach.
First, write tests that cause the application to produce data for transfer but have the transfer itself inhibited. That will allow the tester to validate that the application is producing the correct data in the correct format for use by the receiving software platform components.
Second, remove the data transfer inhibitors and observe if the receiving software platform component deals with the incoming data from the application correctly. This will allow the tester to validate that the software platform is correctly processing the already validated application data. If problems are found with this data transfer, then you have isolated the problem to the vendor’s interface component or its data specification.
Third, write tests that cause the application to request data from other software platform components, but manually substitute the requested data in lieu of a
“live” data feed from the involved software platform. This technique is referred to as “stubbing” the inputs. You will need to create and validate the manual data using the software platform component vendor interface specifications. This will allow testers to validate that the application is correctly accepting the data from the software platform.
Fourth, connect the application to the software platform components and rerun the data requests with “live” data feeds. This will allow the tester to validate that the software platform is producing data per its data specifications. If problems are found with these data, you have isolated the problem to the vendor’s interface component (does not work as advertised) or its data specifications.
Here is a pictorial view of these four steps.
1st: Application under test → Data (validate)
2nd: Application under test → Data → Support platform (validate)
3rd: Application under test (validate) ← Data ← Manual substitution
4th: Application under test ← Data (validate) ← Support platform
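As a concrete illustration, here is a minimal, self-contained Python sketch of steps 1 and 3. The invoice and exchange-rate routines, their data formats, and the names used are hypothetical stand-ins for an application and its vendor interface specification, not code from any real product.

# Hypothetical application-side routines and data formats, used only to illustrate
# inhibiting an outbound transfer (step 1) and "stubbing" an inbound feed (step 3).

def export_invoice(invoice, transport):
    """Formats an invoice record and hands it to a data transfer mechanism."""
    record = f"{invoice['id']},{invoice['amount']:.2f}"   # format assumed by the vendor spec
    transport(record)

# Step 1: inhibit the real transfer by capturing the outbound data instead.
captured = []
export_invoice({"id": "INV-001", "amount": 125.5}, captured.append)
assert captured == ["INV-001,125.50"], "application produced data in the wrong format"

def import_exchange_rate(feed):
    """Consumes data that a software platform component would normally supply."""
    currency, rate = feed().split(",")                    # layout taken from the interface spec
    return currency, float(rate)

# Step 3: manually substitute validated data in lieu of a "live" feed (stubbing the input).
stub_feed = lambda: "EUR,1.08"
assert import_exchange_rate(stub_feed) == ("EUR", 1.08), "application mishandled the stubbed data"

print("steps 1 and 3 validated against the interface specification")

Steps 2 and 4 would then repeat the same checks with the inhibitors removed and the live platform components attached.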
8.3 SECURITY TESTING
Consider leveraging equivalence classes for security behavior testing. Most security systems have different types or levels of security relating to end user processing restrictions based on job roles. For example, a typical three-level security system would define (1) clerks and other employees who just need to view data at security level 1, (2) clerk managers who need to view and update data at security level 2, and (3) security administrators at security level 3 who grant and deny permissions to clerks at security level 1 and clerk managers at security level 2.
A brute force approach to testing these security behaviors would be to collect all of the user ID/password pairs in the company (very sensitive corporate information) and test each user ID/password pair to verify that the user ID under test is authorized and has the appropriate security access. In smaller companies, this could require testing hundreds of ID/password pairs. In larger companies, this could require testing thousands of pairs. Applying equivalence class analysis to the situation would allow you to choose maybe 20–50 ID/password pairs for each level of security. Remember that in each equivalence class of ID/passwords by security level, you want to choose some valid ID/password pairs for the level (positive testing) and some invalid ID/password pairs for the level (negative testing).
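A small Python sketch of this sampling idea follows. The credential catalog, level groupings, and sample size are invented for illustration; a real test would draw the pairs from the organization's identity store rather than a hard-coded dictionary, and would submit each case to the application's login facility.

import random

# Hypothetical ID/password pairs grouped by security level (invented data).
credentials_by_level = {
    1: [("clerk01", "pw-a"), ("clerk02", "pw-b"), ("clerk03", "pw-c")],
    2: [("mgr01", "pw-d"), ("mgr02", "pw-e")],
    3: [("secadmin01", "pw-f")],
}

SAMPLE_SIZE = 20  # 20-50 pairs per equivalence class instead of every pair in the company

def build_security_test_cases(catalog, sample_size=SAMPLE_SIZE):
    """Pick representative pairs per level, pairing each positive case with a negative one."""
    cases = []
    for level, pairs in catalog.items():
        for user_id, password in random.sample(pairs, min(sample_size, len(pairs))):
            cases.append((user_id, password, level, True))            # valid pair: expect access granted
            cases.append((user_id, password + "-bad", level, False))  # corrupted pair: expect access denied
    return cases

for user_id, password, level, should_pass in build_security_test_cases(credentials_by_level):
    print(f"level {level}: try {user_id!r} expecting {'grant' if should_pass else 'deny'}")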
Security for the new application may include encryption of passwords, as well as data that may be sent or received by the application. Testing techniques for encryption are beyond the scope of this textbook; however, we have provided a starting point for further reading. [37–41]
Once ID/password combinations, ID/password pair security levels, and data encryption have been tested, there is one remaining area of security concern. As with all software capabilities, security comes with a performance price tag. It takes a finite amount of time (greater than zero) to complete a security activity every time it is needed. Some software designers prefer to do security checking only at the start of a user session. Other software designers prefer to do security checking before each activity a user invokes. Still other software designers use a combination of initial checking and ongoing checking during end-user sessions. Regardless of the approach that the software designer takes to implementing security, the tester needs to measure the application’s performance degradation specifically due to security. We have seen companies decide not to test security performance but to “turn on” security just before the system goes “live” because security was not expected to add noticeable processing overhead. These companies then faced the following question at midday on the first day the new application was live: “How can we disable security until we find out what is making the application run so slow in production?”
Although performance testing is the subject of the next chapter, it is reasonable to raise the security testing concern here and encourage the software development team to turn on full security as early in the application development as practical, from both a regression standpoint and a performance standpoint.
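One way to quantify that overhead is sketched below, under the assumption that the test environment lets you run the same activity with security checking disabled and enabled. The process_order routine and its simulated credential check are stand-ins invented for this example, not real application code.

import hashlib
import time

def process_order(security_enabled):
    """Stand-in business activity; the key-derivation call mimics a per-request security check."""
    if security_enabled:
        hashlib.pbkdf2_hmac("sha256", b"password", b"salt", 10_000)  # simulated credential verification
    sum(range(1_000))                                                 # simulated business work

def average_ms(activity, repetitions=200):
    """Average elapsed time of one activity, in milliseconds."""
    start = time.perf_counter()
    for _ in range(repetitions):
        activity()
    return (time.perf_counter() - start) / repetitions * 1000.0

baseline = average_ms(lambda: process_order(security_enabled=False))
secured = average_ms(lambda: process_order(security_enabled=True))
print(f"performance degradation due to security: {secured - baseline:.3f} ms per activity")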
8.4 INSTALLATION TESTING
Installation testing focuses on the way the new application or system is placed into its production environment. The installation process itself can vary from a simple startup.exe that copies all application files to their proper place to a complex set of files and an instruction manual for an experienced system installer. Regardless of the simplicity or complexity of the installation process, it needs to be tested to ensure that the recipients of the new application or system can be successful at making it ready for use.
The recommended approach is to have a test environment with the hardware platform(s) and software platform set up to look exactly like the intended production environment. Then the test is to execute the installation procedure as written with the files provided to validate successful installation.
During the last 10 years, installation processes have been weak in helping the end-user installer determine whether the installation was successful. There has been a resurgence of vendors that include installation verification aids, both manual and automatic, with their installation packages. Do not forget to test the verification aids too!
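The sketch below shows one way such a verification aid might be written, assuming a hypothetical manifest of files that a successful installation should leave behind; the paths, sizes, and install root are illustrative only.

from pathlib import Path

# Hypothetical manifest: path relative to the install root and expected size in bytes
# (None means "must exist, any size").
EXPECTED_FILES = {
    "bin/app.exe": 1_204_224,
    "conf/app.ini": None,
    "data/master.db": None,
}

def verify_installation(install_root):
    """Return a list of discrepancies between the manifest and what was actually installed."""
    problems = []
    for relative_path, expected_size in EXPECTED_FILES.items():
        target = Path(install_root) / relative_path
        if not target.exists():
            problems.append(f"missing: {relative_path}")
        elif expected_size is not None and target.stat().st_size != expected_size:
            problems.append(f"unexpected size: {relative_path}")
    return problems

print(verify_installation("C:/Program Files/NewApp") or "installation verified")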
8.5 THE SMOKE TEST
With the new, complex software applications, verification of a successful installation is not sufficient to allow the end user to start using the software for routine business. Two more tasks must be completed first: configuration and administration. This section deals with configuration verification. The next section deals with administration verification.
Configuring an installed application means selecting among a list of optional ways the software can be operated to make the software operate more closely to the specific organization’s requirements. Typical configuration tasks include setting startup parameters and choosing process rules. Examples of startup parameters are the location of data files, maximum number of user sessions, maximum user session duration before automatic timeout, ID/password of the system administrator, default date formats, and geography-specific settings for language and culture. Examples of process rules are definitions of security classes, startup/shutdown schedules, backup schedules and destination files, accounting rules, and travel reservation rules.
The smoke test is used to verify that a successfully installed software application can be subsequently configured properly. As you can see by the variety of configuration examples, there are a large number of configuration combinations possible for most applications. The challenge of the smoke test planner is to identify the most likely configuration combination for the 10 most important customer installations.
The tester starts with a successfully installed copy of the software and proceeds to configure/reconfigure the software per the 10 combinations. Each time a different configuration combination is established, the tester executes minimal steps that demonstrate the software is correctly honoring the new configuration.
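A minimal sketch of this configure-then-probe loop follows, assuming two invented configuration combinations and a stand-in configure call; a real smoke test would drive the installed application itself rather than a dictionary.

# Hypothetical configuration combinations drawn from important customer installations.
CONFIGURATIONS = [
    {"max_sessions": 50, "session_timeout_min": 30, "date_format": "MM/DD/YYYY", "language": "en-US"},
    {"max_sessions": 500, "session_timeout_min": 15, "date_format": "DD.MM.YYYY", "language": "de-DE"},
]

def configure(settings):
    """Stand-in for applying startup parameters and process rules to the installed application."""
    return dict(settings)

def minimal_checks(app):
    """A handful of quick probes that the new configuration is honored; not a regression suite."""
    assert app["max_sessions"] > 0
    assert app["session_timeout_min"] > 0
    assert app["language"] in ("en-US", "de-DE")

for combination in CONFIGURATIONS:
    minimal_checks(configure(combination))
print("smoke test passed for every configuration combination")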
The term “smoke test” comes from the hardware engineering practice of plugging a new piece of equipment into an electrical outlet and looking for smoke. If
there is no sign of smoke, the engineer starts using the equipment. The software smoke test is not exhaustive like regression testing. Rather, it is an attempt to verify the usability of the most likely first production configurations independent of the configuration test cases that were executed during software development.
8.6 ADMINISTRATION TESTING
Administration of a new application or system is the next operational step after successful installation and smoke test. Administration can include such technically complex activities as applying updates and fixes to the software. Administration can also include organization-specific activities such as adding users to the system, adding user security to the system, and building master files (customer lists, product lists, sales history, and so forth).
Administration testing is an extension of functional testing of business activities to functional testing of business support activities. If the administrative software components are developed first, then the results of successful administrative tests can be saved as the starting point for business function testing that relies on correct administrative setup. If the administrative components are developed after the business functions, then the manually built system setup files used to successfully test the business functions can be used as the expected results of the administrative component tests.
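A tiny sketch of the second situation follows, in which the administration components arrive after the business functions; the in-memory setup data and the run_admin_components routine are invented for illustration, standing in for real setup files and real administrative software.

# The manually built system setup that was used to test the business functions first
# (invented example data).
manual_setup = {
    "users": ["clerk01", "mgr01"],
    "security": {"clerk01": 1, "mgr01": 2},
}

def run_admin_components():
    """Stand-in for the administrative functions that add users and assign security levels."""
    setup = {"users": [], "security": {}}
    for user, level in (("clerk01", 1), ("mgr01", 2)):
        setup["users"].append(user)
        setup["security"][user] = level
    return setup

# The manually built setup becomes the expected result of the administrative component test.
assert run_admin_components() == manual_setup
print("administrative components reproduce the manually built system setup")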
8.7 BACKUP AND RECOVERY TESTING
Sooner or later, all business software applications fail. The extent of financial damage that occurs with this failure is inversely proportional to the software developer’s effort to minimize that damage. If little thought is given to recovery after failure, the business will not be able to recover. A surprising number of commercial software packages simply instruct you to “start over” when a failure occurs.
If serious thought is given to recovery after failure, a backup strategy emerges that enables that recovery to occur. The accepted approach is that you start your failure defense by periodically making backup copies of critical business files such as master files, transaction files, and before/after update images. Then, when (not if) the software fails, the backup files are used to restore the software close to its pre-failure state.
Depending on what resources you are willing to spend on routine backup activities, the recovery pre-failure state can range from last weekend’s backups (fairly inexpensive) to last night’s backups (more expensive) to all backed up transactions except the one that caused the failure (very expensive but guarantees minimum loss of business). To test backup and recovery processes, you must perform a number of backups, interrupt the application abnormally, and restore the application using just the backups. Recovery data are then validated against the expected pre-failure state.
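The backup-interrupt-restore-validate sequence can be rehearsed in miniature; the sketch below uses a throw-away directory and a single data file to stand in for the production master files and backup media, which is an assumption of this example rather than a real backup procedure.

import shutil
import tempfile
from pathlib import Path

# Throw-away directory standing in for the production environment (illustration only).
work = Path(tempfile.mkdtemp())
master = work / "master.dat"
backup = work / "master.bak"

pre_failure_state = "customer records as of end of day"
master.write_text(pre_failure_state)           # the application runs normally...
shutil.copy(master, backup)                    # ...and the scheduled backup is taken

master.write_text("half-written transaction")  # abnormal interruption corrupts the file

shutil.copy(backup, master)                    # restore the application using just the backups
assert master.read_text() == pre_failure_state, "recovered data does not match the pre-failure state"
print("backup and recovery validated")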
This testing approach seems relatively straightforward and somewhat intuitive.
Be aware that the authors have seen more companies skip restore testing than perform restore testing. For some unexplained reason, these companies concentrate on validating the backup schedule and procedures, never trying to restore business from those backups. More often than not, when the untested but now business-critical restore process is used for the first time on real data, the attempt fails for a variety of preventable reasons. For example, the backup files are empty, or an erroneous backup file rotation caused last weekend’s run to overwrite the very backup files that you now so desperately need. It is truly a career-altering experience.
8.8 PUTTING STRUCTURAL TESTING IN PERSPECTIVE
The obvious focus of test planning for software is the application or system under development. A less obvious but just as important focus is the software that supports the new application. Although the support software cannot compensate for a poor application implementation, it can detract from a good application implementation.
The motivation to plan and execute structural tests is to validate this software application enabler.
8.9 SUMMARY
The objective of structural testing is to validate the behavior of software that supports the software the user touches. This collective support software is often called the software platform. The purpose of the software platform is fundamentally different from that of the application software. The software platform is not written for one specific business application. Rather, a software platform is written as a generic capability that can support many different kinds of business applications at the same time.
Therefore, software platforms are a possible point of failure when newly developed software is run. The risk of software platform failure is reduced by structural testing techniques.
Because most software platform components come prepackaged (no source code available), the white box techniques cannot be applied to the software platform.
Because the software platform behavior is seldom observed directly by the end user, most of the black box testing techniques except intuition and experience cannot be applied to the software platform either. This is an area of testing that relies on the tester’s own experience in a system administrator role or on the tester’s collaboration with system administrators. Structural testing techniques include the following:
1. interface testing
2. security testing
3. installation testing
4. the smoke test
5. administration testing
6. backup and recovery testing

KEY TERMS

Software platform
Interface testing
Security testing
Installation testing
Smoke test
Administration testing
Backup and recovery testing
LEARNING OBJECTIVES
to define the kind of testing that measures the speed of software
to analyze techniques that simplify the intrinsically complex performance testing of software transaction mixes
to assess the potential business liabilities of ignoring performance testing
9.1 INTRODUCTION
We advance from testing techniques that validate the software behavior to testing techniques that validate the software “speed.” Speed in this context means that a tester measures aspects of software response time while the software is laboring under a controlled amount of work, called a “workload.” To make the software reveal its true production speed, the tester must execute the performance tests in a testing environment that approximates the intended production environment as closely as possible. These execution testing techniques are fundamentally different in objective and approach from functional testing, where the objective is validating correct code behavior regardless of speed.