Classes are usually part of a class hierarchy where there are existing inheritance relationships. Subclasses inherit methods from their superclasses. Very often a tester may assume that once a method in a superclass has been tested, it does not need to be retested in a subclass that inherits it.
However, in some cases the method is used in a different context by the subclass and will need to be retested.
In addition, there may be an overriding of methods, where a subclass replaces an inherited method with a locally defined method. Not only will the new locally defined method have to be tested, but designing a new set of test cases may be necessary. This is because the two methods (inherited and new) may be structurally different. The antiextensionality axiom as discussed in Chapter 5 expresses this need [13].
The following is an example of such a case using the shape class in Figure 6.4. Suppose the shape superclass has a subclass, triangle, and triangle has a subclass, equilateral triangle. Also suppose that the method display in shape needs to call the method color for its operation. Equilateral triangle could have a local definition for the method display. That method could in turn use a local definition for color which has been defined in triangle. This local definition of the color method in triangle has been tested to work with the inherited display method in shape, but not with the locally defined display in equilateral triangle. This is a new context that must be retested. A set of new test cases should be developed.
The tester must carefully examine all the relationships between members of a class to detect such occurrences.
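The display/color scenario can be sketched in a few lines of Python. The method bodies here are invented for illustration; only the class and method relationships come from the text.

```python
# Sketch of the retesting scenario from Figure 6.4 (method bodies are
# hypothetical; only the inheritance relationships come from the text).

class Shape:
    def color(self):
        return "unfilled"

    def display(self):
        # The inherited display calls color() for its operation.
        return "shape drawn " + self.color()

class Triangle(Shape):
    def color(self):
        # Local definition of color; tested in combination with the
        # inherited Shape.display, i.e., in the Triangle context.
        return "filled"

class EquilateralTriangle(Triangle):
    def display(self):
        # Local definition of display; it also calls color(), so
        # Triangle.color now runs in a new, untested context.
        return "equilateral drawn " + self.color()
```

Triangle().display() exercises Triangle.color through the inherited Shape.display, while EquilateralTriangle().display() exercises the same color method through the overriding display; this second combination is the new context that needs its own test cases.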
Many authors have written about class testing and object-oriented testing in general. Some have already been referenced in this chapter.
Others include Desouza, Perry, and Rangaraajan who discuss the issue of when the retesting of methods within classes and subclasses is necessary [2,13,14]. Smith, Wilde, Doong, McGregor, and Tsai describe frameworks and tools to assist with class test design [3,6,7,11,15]. Harrold and co-authors have written several papers that describe the application of data flow testing to object-oriented systems [16,17]. The authors use data flow techniques to test individual member functions and also to test interactions among member functions. Outside-of-class interactions are also covered by their approach. Finally, Kung has edited a book that contains
Fig. 6.4 Sample shape class. (The Shape class holds the data for a shape and the member functions create(figure), color(figure, color), rotate(figure, degrees), shrink(figure, percent), enlarge(figure, percent), duplicate(figure), and display(figure).)
key papers devoted to object-oriented testing. The papers discuss many of the above issues in detail [18].
6.6 The Test Harness

In addition to developing the test cases, supporting code must be developed to exercise each unit and to connect it to the outside world. Since the tester is considering a stand-alone function/procedure/class, rather than a complete system, code will be needed to call the target unit, and also to represent modules that are called by the target unit. This code, called the test harness, is developed especially for test and is in addition to the code that composes the system under development. The role of the test harness is shown in Figure 6.5, and it is defined as follows:
The auxiliary code developed to support testing of units and components is called a test harness. The harness consists of drivers that call the target code and stubs that represent modules it calls.
The development of drivers and stubs requires testing resources. The drivers and stubs must be tested themselves to ensure they are working properly and that they are reusable for subsequent releases of the software.

Fig. 6.5 The test harness. (A driver calls the unit under test, passing parameters and receiving results; the unit in turn calls Stub 1 and Stub 2, each of which acknowledges the call.)
Drivers and stubs can be developed at several levels of functionality. For example, a driver could have the following options and combinations of options:
(i) call the target unit;
(ii) do 1, and pass input parameters from a table;
(iii) do 1, 2, and display parameters;
(iv) do 1, 2, 3 and display results (output parameters).
The stubs could also exhibit different levels of functionality. For example a stub could:
(i) display a message that it has been called by the target unit;
(ii) do 1, and display any input parameters passed from the target unit;
(iii) do 1, 2, and pass back a result from a table;
(iv) do 1, 2, 3, and display result from table.
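As a concrete sketch, the driver and stub levels above might look like the following in Python. The unit under test (compute_price) and its collaborator (lookup_tax_rate) are hypothetical names invented for this example, not taken from the text.

```python
# A table-driven stub and driver sketch (all names hypothetical).

def lookup_tax_rate_stub(region):
    """Stub: announces that it was called, displays the input parameter
    passed from the target unit, and passes back a result from a table
    (levels i through iv of stub functionality)."""
    print(f"stub lookup_tax_rate called with region={region!r}")
    result = {"east": 0.07, "west": 0.05}.get(region, 0.0)
    print(f"stub returns {result}")
    return result

def compute_price(base, region, lookup_tax_rate=lookup_tax_rate_stub):
    """The hypothetical unit under test; its collaborator is replaced
    by the stub during unit testing."""
    return round(base * (1 + lookup_tax_rate(region)), 2)

def driver():
    """Driver: calls the target unit, passes input parameters from a
    table, and displays both parameters and results (levels i-iv of
    driver functionality)."""
    input_table = [(100.0, "east"), (100.0, "west"), (50.0, "north")]
    results = []
    for base, region in input_table:
        print(f"calling compute_price(base={base}, region={region!r})")
        result = compute_price(base, region)
        print(f"result: {result}")
        results.append(result)
    return results
```

The table-driven style shown here lets the same driver and stub be reused across releases by editing only the tables, which is one way to keep harness maintenance costs down.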
Drivers and stubs as shown in Figure 6.5 are developed as procedures and functions for traditional imperative-language-based systems. For object-oriented systems, developing drivers and stubs often means the design and implementation of special classes to perform the required testing tasks. The test harness itself may be a hierarchy of classes. For example, in Figure 6.5 the driver for a procedural system may be designed as a single procedure or main module to call the unit under test; however, in an object-oriented system it may consist of several test classes to emulate all the classes that call for services in the class under test. Researchers such as Rangaraajan and Chen have developed tools that generate test cases using several different approaches, and classes of test harness objects to test object-oriented code [12,14].
The test planner must realize that the higher the degree of functionality for the harness, the more resources it will require to design, implement, and test. Developers/testers will have to decide, depending on the nature of the code under test, just how complex the test harness needs to be. Test harnesses for individual classes tend to be more complex than those needed for individual procedures and functions, since the items being tested are more complex and there are more interactions to consider.
6.7 Running the Unit Tests and Recording Results

Unit tests can begin when (i) the units become available from the developers (an estimation of availability is part of the test plan), (ii) the test cases have been designed and reviewed, and (iii) the test harness, and any other supplemental supporting tools, are available. The testers then proceed to run the tests and record results. Chapter 7 will describe documents called test logs that can be used to record the results of specific tests. The status of the test efforts for a unit, and a summary of the test results, could be recorded in a simple format such as shown in Table 6.1. These forms can be included in the test summary report, and are of value at the weekly status meetings that are often used to monitor test progress.
It is very important for the tester at any level of testing to carefully record, review, and check test results. The tester must determine from the results whether the unit has passed or failed the test. If the test is failed, the nature of the problem should be recorded in what is sometimes called
Table 6.1 Summary work sheet for unit test results.

Unit Test Worksheet

Unit Name:
Unit Identifier:
Tester:
Date:

Test Case ID | Status (run/not run) | Summary of Results | Pass/Fail
a test incident report (see Chapter 7). Differences from expected behavior should be described in detail. This gives clues to the developers to help them locate any faults. During testing the tester may determine that additional tests are required. For example, a tester may observe that a particular coverage goal has not been achieved. The test set will have to be augmented and the test plan documents should reflect these changes.
When a unit fails a test there may be several reasons for the failure.
The most likely reason for the failure is a fault in the unit implementation (the code). Other likely causes that need to be carefully investigated by the tester are the following:
• a fault in the test case specification (the input or the output was not specified correctly);
• a fault in test procedure execution (the test should be rerun);
• a fault in the test environment (perhaps a database was not set up properly);
• a fault in the unit design (the code correctly adheres to the design specification, but the latter is incorrect).
The causes of the failure should be recorded in a test summary report, which is a summary of testing activities for all the units covered by the unit test plan.
Ideally, when a unit has been completely tested and finally passes all of the required tests, it is ready for integration. Under some circumstances a unit may be given a conditional acceptance for integration test. This may occur when the unit fails some tests, but the impact of the failure is not significant with respect to its ability to function in a subsystem, and the availability of the unit is critical for integration test to proceed on schedule. This is a risky procedure, and testers should evaluate the risks involved.
Units with a conditional pass must eventually be repaired.
When testing of the units is complete, a test summary report should be prepared. This is a valuable document for the groups responsible for integration and system tests. It is also a valuable component of the project history. Its value lies in the useful data it provides for test process improvement and defect prevention. Finally, the tester should ensure that the test cases, test procedures, and test harnesses are preserved for future reuse.
6.8 Integration Test: Goals

Integration test for procedural code has two major goals:
(i) to detect defects that occur on the interfaces of units;
(ii) to assemble the individual units into working subsystems and finally a complete system that is ready for system test.
In unit test the testers attempt to detect defects that are related to the functionality and structure of the unit. There is some simple testing of unit interfaces when the units interact with drivers and stubs. However, the interfaces are more adequately tested during integration test, when each unit is finally connected to a full and working implementation of those units it calls, and those that call it. As a consequence of this assembly or integration process, software subsystems, and finally a completed system, are put together during integration test. The completed system is then ready for system testing.
With a few minor exceptions, integration test should only be performed on units that have been reviewed and have successfully passed unit testing. A tester might believe erroneously that since a unit has already been tested during unit test with a driver and stubs, it does not need to be retested in combination with other units during integration test. However, a unit tested in isolation may not have been tested adequately for the situation where it is combined with other modules. This is also a consequence of one of the testing axioms found in Chapter 4, called anticomposition [13].
Integration testing works best as an iterative process in procedure-oriented systems. One unit at a time is integrated into a set of previously integrated modules which have passed a set of integration tests. The interfaces and functionality of the new unit in combination with the previously integrated units are tested. When a subsystem is built from units integrated in this stepwise manner, then performance, security, and stress tests can be performed on this subsystem.
Integrating one unit at a time helps the testers in several ways. It keeps the number of new interfaces to be examined small, so tests can focus on these interfaces only. Experienced testers know that many defects occur at module interfaces. Another advantage is that the massive failures that often occur when multiple units are integrated at once is avoided. This approach also helps the developers; it allows defect search and repair to be confined to a small known number of components and interfaces.
Independent subsystems can be integrated in parallel as long as the re-quired units are available.
The integration process in object-oriented systems is driven by assem-bly of the classes into cooperating groups. The cooperating groups of classes are tested as a whole and then combined into higher-level groups.
Usually the simpler groups are tested first, and then combined to form higher-level groups until the system is assembled. This process will be described in the next sections of this chapter.
6.9 Integration Strategies for Procedures and Functions

For conventional procedural/functional-oriented systems there are two major integration strategies: top-down and bottom-up. In both of these strategies only one module at a time is added to the growing subsystem. To plan the order of integration of the modules in such systems, a structure chart such as the one shown in Figure 6.6 is used.
Fig. 6.6 Simple structure chart for integration test examples. (M1 calls M2, M3, M4, and M5; M2 calls M6, M7, and M8; M3 calls M9; M4 calls M10; M5 calls M10 and M11.)
Structure charts, or call graphs as they are otherwise known, are used to guide integration. These charts show hierarchical calling relationships between modules. Each node, or rectangle in a structure chart, represents a module or unit, and the edges or lines between them represent calls between the units. In the simple chart in Figure 6.6 the rectangles M1–
M11 represent all the system modules. Again, a line or edge from an upper-level module to one below it indicates that the upper level module calls the lower module. Some annotated versions of structure charts show the parameters passed between the caller and called modules. Conditional calls and iterative calls may also be represented.
Bottom-up integration of the modules begins with testing the lowest-level modules, those at the bottom of the structure chart. These are modules that do not call other modules. In the structure chart example these are modules M6, M7, M8, M9, M10, and M11. Drivers are needed to test these modules. The next step is to integrate modules on the next upper level of the structure chart whose subordinate modules have already been tested. For example, if we have tested M6, M7, and M8, then we can select M2 and integrate it with M6, M7, and M8. The actual M2 replaces the drivers for these modules.
In the process for bottom-up integration after a module has been tested, its driver can be replaced by an actual module (the next one to be integrated). This next module to be integrated may also need a driver, and this will be the case until we reach the highest level of the structure chart. Accordingly we can integrate M9 with M3 when M9 is tested, and
M4 with M10 when M10 is tested, and finally M5 with M11 and M10 when they are both tested. Integration of the M2 and M3 subsystems can be done in parallel by two testers. The M4 and M5 subsystems have overlapping dependencies on M10. To complete the subsystem represented by M5, both M10 and M11 will have to be tested and integrated.
M4 is only dependent on M10. A third tester could work on the M4 and M5 subsystems.
After this level of integration is completed, we can then move up a level and integrate the subsystem M2, M6, M7, and M8 with M1 when M2 has been completely tested with its subordinates and driver. The same conditions hold for integrating the subsystems represented by M3, M9; M4, M10; and M5, M10, M11 with M1. In this way the system is finally integrated as a whole. In this example a particular sequence of integration has been selected. There are no firm rules for selecting which module to integrate next. However, a rule of thumb for bottom-up integration says that to be eligible for selection as the next candidate for integration, all of a module's subordinate modules (modules it calls) must have been tested previously. Issues such as the complexity, mission, or safety criticality of a module also impact the choices for the integration sequence.
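The bottom-up rule of thumb can be applied mechanically to the Figure 6.6 call graph. The following sketch encodes the chart by hand as a dictionary (an assumption about the chart's structure drawn from the worked example in the text):

```python
# Bottom-up eligibility for the Figure 6.6 structure chart: a module may
# be integrated only after all modules it calls have been tested.
# The CALLS dictionary is hand-encoded from the chart.

CALLS = {
    "M1": ["M2", "M3", "M4", "M5"],
    "M2": ["M6", "M7", "M8"],
    "M3": ["M9"],
    "M4": ["M10"],
    "M5": ["M10", "M11"],
    "M6": [], "M7": [], "M8": [], "M9": [], "M10": [], "M11": [],
}

def bottom_up_levels(calls):
    """Group modules into integration levels: each level holds the
    modules whose subordinates were all tested in earlier levels."""
    tested, levels = set(), []
    while len(tested) < len(calls):
        level = sorted(m for m in calls
                       if m not in tested
                       and all(sub in tested for sub in calls[m]))
        levels.append(level)
        tested.update(level)
    return levels
```

Running bottom_up_levels(CALLS) groups the leaf modules M6 through M11 into the first level, M2 through M5 into the second, and M1 into the last, matching the order the text walks through.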
Bottom-up integration has the advantage that the lower-level modules are usually well tested early in the integration process. This is important if these modules are candidates for reuse. However, the upper-level modules are tested later in the integration process and consequently may not be as well tested. If they are critical decision-makers or handle critical functions, this could be risky. In addition, with bottom-up integration the system as a whole does not exist until the last module, in our example M1, is integrated. It is possible to assemble small subsystems, and when they are shown to work properly the development team often experiences a boost in morale and a sense of achievement.
Top-down integration starts at the top of the module hierarchy. The rule of thumb for selecting candidates for the integration sequence says that when choosing a candidate module to be integrated next, at least one of the module's superordinate (calling) modules must have been previously tested. In our case, M1 is the highest-level module and we start the sequence by developing stubs to test it. In order to get a good upward flow of data into the system, the stubs may have to be fairly complex (see Section 6.2.3). The next modules to be integrated are those whose superordinate modules have been tested. The way to proceed is to replace, one by one, each of the stubs of the superordinate module with a subordinate module. For our example in Figure 6.6, we begin top-down integration with module M1. We create four stubs to represent M2, M3, M4, and M5. When the tests are passed, we then replace the four stubs by the actual modules, one at a time. The actual modules M2–M5, when they are integrated, will have stubs of their own. Figure 6.7 shows the setup for the integration of M1 with M2.
When we have integrated the modules M2–M5, then we can integrate the lowest-level modules. For example, when M2 has been integrated with M1, we can replace its stubs for M6, M7, and M8 with the actual modules, one at a time, and so on for M3, M4, and M5. One can traverse the structure chart and integrate the modules in a depth-first or breadth-first manner. For example, the order of integration for a depth-first approach would be M1, M2, M6, M7, M8, M3, M9, M4, M10, M5, M11.
Breadth-first would proceed as M1, M2, M3, M4, M5, M6, M7, M8, M9, M10, M11. Note that using the depth-first approach gradually forms subsystems as the integration progresses. In many cases these subsystems can be assembled and tested in parallel. For example, when the testing of M1 is completed, there could be parallel integration testing of subsystems M2 and M3. A third tester could work in parallel with these testers on the subsystems M4 and M5. The test planner should look for these opportunities when scheduling testing tasks.
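Both traversal orders can be derived from the structure chart itself. The sketch below hand-encodes the chart (subordinate lists ordered left to right as drawn in Figure 6.6, an assumption about the chart's layout):

```python
# Depth-first and breadth-first top-down integration orders for the
# Figure 6.6 structure chart (CALLS hand-encoded, left-to-right order).

CALLS = {
    "M1": ["M2", "M3", "M4", "M5"],
    "M2": ["M6", "M7", "M8"],
    "M3": ["M9"],
    "M4": ["M10"],
    "M5": ["M10", "M11"],
    "M6": [], "M7": [], "M8": [], "M9": [], "M10": [], "M11": [],
}

def depth_first_order(calls, root="M1"):
    """Integrate a module, then recurse into its subordinates."""
    order = []
    def visit(m):
        if m not in order:
            order.append(m)
            for sub in calls[m]:
                visit(sub)
    visit(root)
    return order

def breadth_first_order(calls, root="M1"):
    """Integrate modules level by level, left to right."""
    order, queue = [], [root]
    while queue:
        m = queue.pop(0)
        if m not in order:
            order.append(m)
            queue.extend(calls[m])
    return order
```

The two functions reproduce exactly the depth-first and breadth-first sequences quoted in the text; note that M10 is visited only once even though both M4 and M5 call it.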
Top-down integration ensures that the upper-level modules are tested early in integration. If they are complex and need to be redesigned there will be more time to do so. This is not the case with bottom-up integration. Top-down integration requires the development of complex stubs to drive significant data upward, but bottom-up integration requires drivers, so there is not a clear-cut advantage with respect to developing test harnesses. In many cases a combination of approaches will be used. One approach is known as sandwich integration, where the higher-level modules are integrated top-down and the lower-level modules integrated bottom-up.
No matter which integration strategy is selected, testers should consider applying relevant coverage criteria to the integration process. Linnenkugel and Mullerburg have suggested several interprocedural control and data flow-based criteria [19]. Example control flow criteria include:
Fig. 6.7 Top-down integration of modules M1 and M2. (M1 calls the actual module M2 and stubs for M3, M4, and M5; M2 calls stubs for M6, M7, and M8.)
all modules in the graph or chart should be executed at least once (all nodes covered), all calls should be executed at least once (all edges covered), and all descending sequences of calls should be executed at least once (all paths covered).
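These criteria can be checked mechanically against a log of the calls exercised during integration testing. A sketch for the all-edges (all calls covered) criterion follows; the call graph is hand-encoded from Figure 6.6 and the executed-call log is invented for illustration:

```python
# All-edges coverage for the Figure 6.6 call graph: every caller/callee
# pair should be exercised at least once during integration testing.

CALLS = {
    "M1": ["M2", "M3", "M4", "M5"],
    "M2": ["M6", "M7", "M8"],
    "M3": ["M9"],
    "M4": ["M10"],
    "M5": ["M10", "M11"],
    "M6": [], "M7": [], "M8": [], "M9": [], "M10": [], "M11": [],
}

def call_edges(calls):
    """All (caller, callee) pairs in the structure chart."""
    return {(m, sub) for m, subs in calls.items() for sub in subs}

def edge_coverage(calls, executed):
    """Fraction of call edges covered by an executed (caller, callee) log."""
    edges = call_edges(calls)
    return len(edges & set(executed)) / len(edges)
```

The all-nodes criterion is the analogous check over the set of modules reached; the all-paths criterion, being over all descending call sequences, grows much faster and is correspondingly more expensive to satisfy.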
The smart test planner takes into account risk factors associated with each of the modules, and plans the order of integration accordingly. Some modules and subsystems may be handling mission/safety/business critical functions; some might be complex. The test planner will want to be sure that these are assembled and tested early in the integration process to insure they are tested adequately. For example, in the sample structure chart shown in Figure 6.6, if modules M6 and M10 are complex and/or safety critical modules, a tester would consider bottom-up integration as a good choice since these modules would be integrated and tested early in the integration process.
Another area of concern for the planner is the availability of the modules. The test planner should consult the project plan to determine availability dates. Availability may also be affected during the testing process depending on the number and nature of the defects found in each module.
A subsystem may be assembled earlier/later than planned depending on the amount of time it takes to test/repair its component modules. For example, we may be planning to integrate branch M2 before branch M3; however, M2 and its components may contain more defects than M3 and its components, so there will be a delay while repairs are made.