Figure 5-2: Process for Creating an Inventory
Step 1: Gather Reference Materials
The first step in creating an inventory is to gather all of the relevant documentation that you can find about the system. These may include:
requirements documentation
design documentation
user's manuals
product specifications
functional specifications
government regulations
training manuals
customer feedback
Step 2: Form a Brainstorming Team
The brainstorming team should ideally be made up of three or four subject-matter experts (but probably not more than seven or eight). Systems and business expertise are the two most sought-after areas of experience. Good brainstormers might include developers, testers, users, customers, business analysts, user representatives, system architects, and marketing representatives. The key is to get the people with the most knowledge of the business application and, for existing systems, the systemic nature of the application. Our team might look like this:
Cheryl, Test Manager
Crissy, Systems Architect
Stefan, Senior Developer
Rayanne, Business Analyst
Erika, Marketing Representative
Step 3: Determine Test Objectives
The idea behind the brainstorming session is to create lists of things to test. It's important not to scrutinize the list too closely up front and equally important not to get too detailed. In fact, we recommend that the team first just brainstorm the inventory topics (i.e., objectives).
Examples of common requirements objectives include:
Functions or methods
Constraints or limits
System configurations
Interfaces with other systems
Conditions on input and output attributes
Conditions of system/object memory (i.e., states that affect processing)
Behavior rules linking input and memory conditions (i.e., object states) to resultant functions
Critical usage and operational scenarios
Anything else to worry about, based on an external analysis of the system

Many other test objectives exist, and there will always be some that are unique to a particular system, but the above list gives you an idea of the kinds of things we're looking for. Another word of caution: don't be concerned about the overlap in the objectives or inventories. Remember, we're trying to determine what's possible to test by looking at the system from many viewpoints. We'll worry about eliminating redundancy in future steps.
The following list shows some of the test objectives that we compiled for an insurance company:
Requirements
Features
Screens
Error Messages
Transaction Types
Customers
States (geographical)
Type of Policy
Type of Vehicle
States (Effective)
Step 4: Prioritize Objectives
Once the high-level objectives have been determined, it's time to prioritize them. Normally we prioritize the objectives based on scope (i.e., breadth) and risk. It's always desirable to choose, as the highest priority, an objective (and its associated inventory) that has broad coverage of the system. Often, that will turn out to be the inventory of features, customer types, the requirements specification itself, or some other similar broad category. In our insurance
company example, we took the requirements document and features as our two highest-priority objectives since they had the broadest scope.
Step 5: Parse Objectives into Lists
The next step in creating an inventory is to parse the objectives into lists (inventories). You should start with the highest-priority objectives and parse them into more detailed components.
Lower-priority objectives will be parsed into more detail when, and if, time allows. The features objective, for example, can be parsed into the following inventory:
Write a policy
Add a driver
Add a car
Submit a claim
Change address (same locale)
Change address (different locale)
Submit a bill
Amend a policy
Amend a bill
Later, if more time permits, the inventory could be expanded to a finer level of granularity:
Write a policy
    Commercial
    Individual
    High-risk
    Stated Value
Add a driver
    Under 16
    Over 16, under 65
    Male
    Female
    Driving School
    Record
        Good
        Bad
Add a car
    Type
        SUV
        Sports
        Pickup
    Security Devices
        Club
        Alarm
        Tracking Device
    Garaged?
    Etc.
Obviously, this inventory could be broken down even further. We recommend that you initially not try to make them too detailed, because creating the test cases can be overwhelming. If time allows, additional detail can always be added later.
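If the inventories are kept in a script rather than on paper, the parsed hierarchy can be held as a simple nested structure. Here is a minimal sketch in Python; the item names mirror the example above, but the `flatten` helper is hypothetical, not part of any particular testing tool:

```python
# Sketch: a parsed objective held as a nested structure. Each entry maps a
# top-level inventory item to its finer-grained details (if any were parsed).
features = {
    "Write a policy": ["Commercial", "Individual", "High-risk", "Stated Value"],
    "Add a driver": ["Under 16", "Over 16, under 65", "Male", "Female"],
    "Submit a claim": [],  # not yet parsed into more detail
}

def flatten(inventory):
    """Expand the nested inventory into flat 'parent / detail' items."""
    items = []
    for parent, details in inventory.items():
        if not details:
            items.append(parent)        # unparsed item stays as-is
        for detail in details:
            items.append(f"{parent} / {detail}")
    return items

for item in flatten(features):
    print(item)
```

Each flattened item then becomes one row of the tracking matrix described in the next step, and adding detail later is just a matter of extending the nested lists.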
Step 6: Create an Inventory Tracking Matrix
To create the matrix, list the objectives and their corresponding inventories down the left column of Table 5-1, starting with the number 1 priority objective, and then the number 2 objective, and so forth. Then, place any existing test cases from previous releases and testing efforts
horizontally across the top of the table. This process of mapping existing test cases to the inventories is known as calibration because we are calibrating the test cases against a "known"
entity (i.e., the inventories). If you think you have a pretty good set of test cases, but have never calibrated them, we believe most of you will be surprised to find that the coverage of your tests is not nearly as great as you might have imagined.
Table 5-1: Inventory Tracking Matrix
Notice that the first objective on our list is the requirements specification. The mapping of this inventory to the test cases is known as requirements traceability (shown in Table 5-1), which is a preferred practice of virtually every testing methodology. Notice that we've gone beyond just tracing to the requirements specification and have traced to the entire set of inventories. Most people will find that even if they have a good set of requirements, the additional inventories will identify many test scenarios that were "missed." Also note that one test case can cover multiple inventories. In Table 5-1, for example, Test Case #1 covers both Requirement 2 and Feature 4.
This also demonstrates how the matrix can help reveal redundancies in the inventories and the test cases.
Key Point
Most people will find that even if they have a good set of requirements, the additional inventories will identify many test scenarios that were "missed."
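For small projects, the tracking matrix itself can be held in code rather than a spreadsheet, which makes calibration a one-line computation. A sketch, assuming the matrix is a mapping from each inventory item to the set of test cases that cover it (illustrative names, not the actual Table 5-1 data):

```python
# Hypothetical inventory tracking matrix: inventory item -> covering test cases.
matrix = {
    "Requirement 1": set(),                     # no coverage yet
    "Requirement 2": {"TC#1", "TC#2", "TC#3"},
    "Feature 4": {"TC#1", "TC#6", "TC#7"},
}

def coverage(matrix):
    """Fraction of inventory items exercised by at least one test case."""
    covered = sum(1 for cases in matrix.values() if cases)
    return covered / len(matrix)

print(f"Calibrated coverage: {coverage(matrix):.0%}")  # prints "Calibrated coverage: 67%"
```

Running a check like this against an existing test set is exactly the calibration described above: the measured number is often lower than the team expected.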
Step 7: Identify Tests for Unaddressed Conditions
In Table 5-2, you can see that existing test cases cover Requirements 2 and 3, and Features 1, 3, and 4. However, Requirement 1 and Feature 2 (the rows with no entries) are not covered by any test. Therefore, it's necessary to create one or more test cases to cover these inventory items.
Table 5-2: Inventory Tracking Matrix
Objectives/Inventories | TC#1 | TC#2 | TC#3 | TC#4 | TC#5 | TC#6 | TC#7
Requirements
  Requirement 1        |      |      |      |      |      |      |
  Requirement 2        |  X   |  X   |  X   |      |      |      |
  Requirement 3        |      |      |      |  X   |  X   |  X   |
Features
  Feature 1            |      |  X   |      |      |      |      |
  Feature 2            |      |      |      |      |      |      |
  Feature 3            |      |      |      |      |  X   |      |
  Feature 4            |  X   |      |      |      |      |  X   |  X
Objective A            |      |      |      |      |      |      |
Objective B            |      |      |      |      |      |      |
Objective C            |      |      |      |      |      |      |
NOTE: Objectives A, B, C, and so on may be other common objectives, such as interfaces, configurations, etc., or they may be application-specific objectives.
In Table 5-3, notice that Requirement 1 is now covered by Test Case #1. It was possible to modify Test Case #1 to cover Requirement 1 and still cover Requirement 2 and Feature 4. It wasn't possible to modify an existing test case to cover Feature 2, so Test Case #8 was added and, later, Test Case #9 was added because we felt that Test Case #8 didn't adequately test Feature 2 by itself.
Table 5-3: Inventory Tracking Matrix
Objectives/Inventories | TC#1 | TC#2 | TC#3 | TC#4 | TC#5 | TC#6 | TC#7 | TC#8 | TC#9
Requirements
  Requirement 1        |  X   |      |      |      |      |      |      |      |
  Requirement 2        |  X   |  X   |  X   |      |      |      |      |      |
  Requirement 3        |      |      |      |  X   |  X   |  X   |      |      |
Features
  Feature 1            |      |  X   |      |      |      |      |      |      |
  Feature 2            |      |      |      |      |      |      |      |  X   |  X
  Feature 3            |      |      |      |      |  X   |      |      |      |
  Feature 4            |  X   |      |      |      |      |  X   |  X   |      |
Objective A            |      |      |      |      |      |      |      |      |
Objective B            |      |      |      |      |      |      |      |      |
Objective C            |      |      |      |      |      |      |      |      |
NOTE: Objectives A, B, C, and so on may be other common objectives, such as interfaces, configurations, etc., or they may be application-specific objectives.
Rather than modify existing test cases, it's frequently easier to add new test cases to address untested conditions. Testers also have to be careful about making any one test case cover too many conditions: if such a test fails or has to be modified, it may invalidate the testing of the other conditions it covers.
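Both checks in this step, finding unaddressed conditions and spotting test cases that cover too many conditions, can be automated once the matrix is machine-readable. A sketch with made-up data; `uncovered` and `overloaded` are hypothetical helpers, not from any commercial tool:

```python
# Hypothetical tracking matrix: inventory item -> set of covering test cases.
matrix = {
    "Requirement 1": set(),
    "Requirement 2": {"TC#1", "TC#2", "TC#3"},
    "Feature 2": set(),
    "Feature 4": {"TC#1", "TC#6", "TC#7"},
}

def uncovered(matrix):
    """Inventory items with no covering test case (new tests needed)."""
    return [item for item, cases in matrix.items() if not cases]

def overloaded(matrix, limit=2):
    """Test cases covering more than `limit` items (candidates to split)."""
    per_case = {}
    for item, cases in matrix.items():
        for case in cases:
            per_case.setdefault(case, []).append(item)
    return {case: items for case, items in per_case.items() if len(items) > limit}

print(uncovered(matrix))            # prints "['Requirement 1', 'Feature 2']"
print(overloaded(matrix, limit=1))  # TC#1 covers two items
```

The uncovered list corresponds to the shaded rows of Table 5-2, and the overloaded report flags exactly the risk described above: one test carrying too many conditions.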
Step 8: Evaluate Each Inventory Item
Evaluate each inventory item for adequacy of coverage and add additional test cases as required; remember that this process will never truly be complete. The testers must use their experience and judgment to determine whether the existing tests for each condition are adequate. For example, in Table 5-3 (above), we see that Requirement 1 is covered by Test Case #1. Does that one test case adequately cover Requirement 1? If not, Requirement 1 will have to be parsed into greater detail or more test cases will have to be created.
Case Study 5-1: These creative testers used ad hoc testing techniques to help evaluate their systematic testing process.
Bug Parties
I once had a student from a well-known company who said they used a similar process in their group. Testers were committed to developing and maintaining a systematic set of test cases. They also recognized, though, the value of creative or ad hoc testing, so every other Friday they held what they called a "bug party." At these bug
parties, all test cases were thrown out for the day and the testers (and anyone else that wanted to "play") were urged to look for bugs. Prizes were awarded for the most bugs found, the biggest bug, and the most creative testing technique. The whole thing was a huge morale booster and resulted in finding many bugs, some of which were significant.
But finding bugs, as important as it was, was not the purpose of the party. You see, they then wrote the test case that would have found each bug, which improved the coverage of their existing test set. But even that wasn't the real reason for the bug parties. What they were really looking for were entire categories or lists (inventories) of things they had forgotten to test. Interestingly, they were using ad hoc testing techniques to validate the effectiveness of their systematic testing. Who says testers aren't creative?
— Rick Craig
Step 9: Maintain the Testing Matrix
As the system matures and changes, so too should the testing matrix. The testing matrix is a reusable artifact that is particularly valuable in determining what regression tests to maintain and execute for any given release (at least which ones to begin with). The testing matrix is also a valuable tool to help in the configuration management of the test cases, since it helps relate the tests to the system itself. The maintenance of the matrix is a huge undertaking, but without it, the testers must virtually start over with the development of their tests for each new release.
Not only is that a waste of time, but there's always the risk that some great test that was created for a previous release will not be remembered for this one.
Commercial tools are available to help document and maintain the inventories and test cases, but the effort required to maintain the testing matrix is still significant.
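One illustration of the matrix's reuse value: regression tests for a release can be selected by querying the matrix for every test case that touches a changed inventory item. A sketch, again assuming the matrix is a mapping from inventory items to covering test cases (illustrative data; `regression_set` is a hypothetical helper):

```python
# Hypothetical tracking matrix: inventory item -> set of covering test cases.
matrix = {
    "Requirement 2": {"TC#1", "TC#2", "TC#3"},
    "Feature 1": {"TC#2"},
    "Feature 4": {"TC#1", "TC#6"},
}

def regression_set(matrix, changed_items):
    """All test cases touching any changed inventory item, as a starting point."""
    tests = set()
    for item in changed_items:
        tests |= matrix.get(item, set())
    return sorted(tests)

print(regression_set(matrix, ["Feature 4"]))  # prints "['TC#1', 'TC#6']"
```

This is only a starting set, as noted above, but it keeps tests written for earlier releases from being forgotten when the features they cover change again.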