
Planning Risks and Contingencies

In the document Systematic Software Testing (Pages 66-69)

Now, let's move to the other side of risk management and take a look at planning risks, which are unscheduled events or late activities that may jeopardize the testing schedule. The purpose of this risk analysis is to determine the best contingencies in the event that one of the planning risks occurs. This is important because the scope and nature of a project almost always change as the project progresses. Most test managers find that during the planning phases, the users and developers are much more likely to sit down and make rational decisions on what to do if one of the planning risks occurs. If the decision is made in "the heat of battle" near the end of the project, emotions and politics are much more likely to be the primary drivers of the decision-making process.

Key Point

Planning risks are unscheduled events or late activities that may jeopardize the testing schedule.

Some common planning risks include:

Delivery dates
Staff availability
Budget
Environmental options
Tool inventory
Acquisition schedule
Participant buy-in
Training needs
Scope of testing
Lack of requirements
Risk assumptions
Usage assumptions
Resources
Feature creep
Poor-quality software

Most of us have taken part in projects where the schedule is at best ambitious and at worst impossible. Once an implementation date has been set, it's often considered sacred. Customers may have been promised a product on a certain date, management's credibility is on the line, corporate reputation is at stake, or competitors may be breathing down your neck. At the same time, as an organization, you may have stretched your resources to the limit. A planning risk is anything that adversely affects the planned testing effort. Perhaps the start of the project is slightly delayed, or a major software vendor releases a new version of the operating system that the application will run on. It's not our purpose here to address the many reasons why we so often find ourselves in this unenviable spot. Rather, we would like to talk about what can be done about it. Case Study 2-1 describes a common scenario in many organizations along with some possible contingencies.

Case Study 2-1: Suppose Jane Doe resigned and your ambitious schedule suddenly became impossible. What would you do?

The Deliverable Is the Date

Consider the following scenario. Your VP has promised the next release of your product on a certain date. The date seems very aggressive to you in light of the available resources and the need to release a high-quality product (after the last release failed spectacularly). Then, the unthinkable happens. Jane Doe takes a job with your competitor, leaving a huge gap in your company's knowledge base (or a key component is late, or the requirements change, or some other planning risk occurs).

What was once an ambitious schedule now appears to be impossible. What are your choices or contingencies?

1. Alter the schedule - which marketing says can't be done...

2. Reduce the scope - but we promised our customers...

3. Reduce quality, which usually means reduce testing or allow more defects in the final product - but our last release failed!!!

4. Add resources (including overtime) - but there are none to add and everyone is already working around the clock...

5. Punt...

Unfortunately, all of the choices listed above seem bad and, all too often, management decides that they're all unacceptable. If management does not make proactive decisions during the planning stage, the technical staff will often end up making the choices by default. Initially, more resources may be added in the form of overtime. If this doesn't solve the problem, the team will begin to take shortcuts: eliminating a document here, a review there, or an entire set of tests. Of course, quality suffers.

If the project is still in jeopardy, functionality that is not absolutely essential will be rescheduled for a later release or the date may be slipped. Eventually, the new target date may be met when a watered-down system of poor quality is delivered to the customer late, by a very frustrated development team. Sound familiar?

Identifying planning risks and contingencies helps you make intelligent, informed decisions.

Almost every project team can identify the planning risks that cause concern: late requirements, test environment problems, late delivery of software, etc. Our goal is to decide in advance what to do if one of these planning risks occurs. In our opinion, the only possible contingencies are:

reduce the scope
delay implementation
add resources
reduce quality processes
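The case studies that follow show how these four contingencies are matched to individual planning risks during planning. As a rough illustration of the bookkeeping involved, the sketch below records each planning risk with an ordered list of pre-agreed contingencies and checks that every entry uses one of the four recognized types. All risk names and actions are hypothetical examples, not drawn from the book:

```python
# Illustrative sketch: recording planning risks with pre-agreed, ranked
# contingencies drawn from the four categories named in the text.
# The specific risks and actions below are invented for illustration.

CONTINGENCY_TYPES = {
    "reduce scope",
    "delay implementation",
    "add resources",
    "reduce quality processes",
}

# Each planning risk maps to an ordered list of (contingency type, action),
# decided during the planning phase rather than "in the heat of battle."
planning_risks = {
    "key staff member leaves": [
        ("add resources", "authorize overtime for the remaining testers"),
        ("reduce quality processes", "cut tests for the lowest-risk modules"),
    ],
    "late requirements change": [
        ("add resources", "ask the user group to contribute more testers"),
        ("reduce scope", "defer the low-priority feature to a later release"),
    ],
}

def validate(risks):
    """Raise ValueError if any contingency uses an unrecognized type."""
    for risk, contingencies in risks.items():
        for ctype, _action in contingencies:
            if ctype not in CONTINGENCY_TYPES:
                raise ValueError(f"{risk!r}: unknown contingency type {ctype!r}")

validate(planning_risks)  # passes: every entry uses one of the four types
```

The ordering matters: the first contingency listed is the one the team has agreed to try first, so when the risk occurs nobody has to renegotiate the decision under pressure.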

Key Point

The major focus of the section Planning Risks and Contingencies in the IEEE Standard 829-1998 is on planning risks (as opposed to software risks).

Although not universally used, planning risks and contingencies are more commonly used than software risk analysis.

However, you may encounter many different "flavors" of these four contingencies, depending on your organization and the details of the project. For example, "add resources" might mean overtime for the prime staff or it could mean bringing in additional testers. Case Study 2-2 lists some examples of planning risks and contingencies.

Case Study 2-2: Sample Planning Risk and Contingencies

Sample Planning Risk

The user introduces a major requirements change late in the software lifecycle.

Sample Contingency #1

Ask the user group to contribute more users to the testing effort (i.e., add more resources).

Sample Contingency #2

Decide not to implement a low-priority feature until a later release (i.e., reduce the scope).

Sample Contingency #3

Decide not to test (or at least to test less) some of the low-risk features identified in the course of the software risk analysis (i.e., reduce quality processes).

Case Study 2-3: Sample Planning Risk and Contingencies

Sample Planning Risk

The size of the project keeps growing - this is a double whammy. Not only do testing resources need to grow because of the increased size of the project, but productivity rates for software development and testing typically decrease as the size of the project increases.

Sample Contingency #1

Add resources (e.g., outsource, add users, add developers, authorize overtime).

Sample Contingency #2

Reduce the scope of the project. Choose a strategy of incremental delivery to the customer.

Sample Contingency #3

Reduce testing of some of the lower-risk modules (i.e., reduce quality processes).

Sample Contingency #4

Delay implementation.

As you can see, all of the contingencies in Case Studies 2-2 and 2-3 involve compromise. But without planning risks and contingencies identified in advance, the developers and testers are forced to make these choices on the fly. The software risk analysis and the analysis of the planning risks and contingencies work together. Recall our Automated Teller Machine (ATM) example from the previous section. The risk analysis process helped us identify the software risks, which, in turn, helped us focus and prioritize our testing effort in order to reduce those risks.

The planning risks help us to do the "What if…" and develop contingencies. For example, what if Jane Doe really does leave and her departure causes the software to be delivered to the test group late? One of the contingencies was to reduce quality (this usually means less testing). If this contingency befalls us, we would probably want to go back to the software risk analysis and consider reducing the testing of the least critical components (i.e., moving the cut line up). Refer to Step 9 of the Software Risk Process for information on the cut line.

It should be apparent at this point that planning risks, software risks, features/attributes to be tested, features/attributes not to be tested, and indeed the entire testing strategy are built around the concept of using risk to prioritize the testing effort.
