
Appendix: Computer Generation of Aggregated Spatial Patterns

The computer-generated spatial patterns that we refer to in this book were derived using one of two methods described in the literature: Poisson cluster processes (see, e.g. Diggle, 1983) and Gibbs processes (see, e.g. Ripley, 1981). We present here a brief outline of how we have implemented them.

Over many years, statisticians have developed a wide variety of probability distributions called compound Poisson distributions. The ones that are relevant here are those in which independent distributions of ‘parents’ and ‘offspring’ are specified. Our implementation uses the Poisson distribution for ‘parents’, and either a Poisson or a log-series (see, e.g. Pielou, 1977) distribution for the ‘offspring’. This is put together in four stages to create a spatial pattern; a code sketch follows the list:

1. A number, N, of ‘parents’ is distributed randomly in the unit square.

2. Each ‘parent’ gives rise to M ‘offspring’, M being generated from a Poisson or log-series distribution with predefined mean.

3. The positions of these ‘offspring’ relative to the corresponding ‘parent’ are generated as bivariate normal with zero correlation.

4. Points generated outside the unit square are ignored.
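The following is a minimal Python sketch of this four-stage procedure, using numpy. The parameter values, the function name poisson_cluster, and the use of a single standard deviation for both coordinates of the offspring displacement are illustrative assumptions, not specifications from the text.

```python
import numpy as np

def poisson_cluster(n_parents, offspring_mean, sd, rng=None):
    """Generate a Poisson cluster pattern in the unit square."""
    rng = np.random.default_rng(rng)
    # Stage 1: distribute N 'parents' at random in the unit square.
    parents = rng.uniform(0.0, 1.0, size=(n_parents, 2))
    points = []
    for px, py in parents:
        # Stage 2: each parent produces M offspring, M ~ Poisson(mean).
        # (A log-series distribution could be substituted here.)
        m = rng.poisson(offspring_mean)
        # Stage 3: offspring positions are bivariate normal about the
        # parent, with zero correlation between the two coordinates.
        offspring = rng.normal(loc=(px, py), scale=sd, size=(m, 2))
        points.append(offspring)
    pts = np.vstack(points) if points else np.empty((0, 2))
    # Stage 4: ignore points falling outside the unit square.
    inside = np.all((pts >= 0.0) & (pts <= 1.0), axis=1)
    return pts[inside]

pattern = poisson_cluster(n_parents=20, offspring_mean=5, sd=0.02, rng=1)
```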

Gibbs processes use an initial spatial pattern of points and derive, by iteration, a new spatial pattern. The goal is to adjust the initial pattern of points (by repeated rejection and replacement) so that, in the final pattern, the distribution of the number of neighbouring points of each point conforms to a prespecified probability distribution. To implement this, a definition of ‘neighbouring point’ and rules for rejection and replacement are required. We start with a random spatial pattern of N points. Points are defined as neighbours of a given point if they are within a circle of radius r centred on that point. The rejection criterion is that new points are rejected with probability proportional to the number of their neighbours. The procedure is as follows:

1. Start with a random spatial pattern of N points in the unit square.

2. Choose one of the N points at random to be replaced.

3. Generate a potential replacement point at random.

4. Calculate the number of neighbours, t, of the new point.

5. Generate a uniform random number, x, between 0 and 1.

6. If x > t/N, accept the new point; otherwise, return to step 3.

7. Repeat steps 2–6 a number of times. Using a slightly different rejection criterion, Ripley (1979) suggested that 4N times should be sufficient.
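Below is a minimal Python sketch of steps 1–7. The neighbour radius r, the choice to exclude the point being replaced when counting a candidate's neighbours, and the default of 4N iterations (following Ripley's suggestion above) are assumptions made for illustration.

```python
import numpy as np

def gibbs_pattern(n, r, n_iter=None, rng=None):
    """Rejection-and-replacement Gibbs process in the unit square."""
    rng = np.random.default_rng(rng)
    # Step 1: start from a completely random pattern of N points.
    pts = rng.uniform(0.0, 1.0, size=(n, 2))
    iterations = 4 * n if n_iter is None else n_iter
    for _ in range(iterations):
        # Step 2: choose one of the N points at random to be replaced.
        i = rng.integers(n)
        others = np.delete(pts, i, axis=0)  # assumed: exclude the point being replaced
        while True:
            # Step 3: generate a candidate replacement point at random.
            cand = rng.uniform(0.0, 1.0, size=2)
            # Step 4: count its neighbours t, the points within distance r.
            t = np.count_nonzero(np.linalg.norm(others - cand, axis=1) < r)
            # Steps 5-6: draw x ~ Uniform(0, 1) and accept if x > t/N,
            # i.e. accept with probability 1 - t/N; otherwise redraw.
            if rng.uniform() > t / n:
                pts[i] = cand
                break
    return pts

pattern = gibbs_pattern(n=100, r=0.05, rng=1)
```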

Faddy (1997) demonstrated how various probability distributions arise from inhomogeneous Poisson processes in the time domain. His definition of the negative binomial distribution as an inhomogeneous Poisson process in time is similar to how the above method generates spatial patterns (his definition was our motivation). When a grid is superimposed on a spatial pattern generated by this method to define sample units, then, in many instances, the frequency distribution of numbers of points in sample units is either Poisson or negative binomial. However, there are sufficient differences between a time dimension and two-dimensional space (in particular, time is an ordered dimension) that a negative binomial distribution is not guaranteed for these spatial patterns (Exhibit 4.2).
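As an illustration of the grid check described above, one might tabulate counts per sample unit as follows. The grid size k is an arbitrary choice, and the uniform stand-in pattern is a placeholder for a pattern produced by either generator sketched earlier.

```python
import numpy as np

def counts_per_unit(pts, k):
    """Count points in each cell of a k x k grid over the unit square."""
    hist, _, _ = np.histogram2d(pts[:, 0], pts[:, 1],
                                bins=k, range=[[0.0, 1.0], [0.0, 1.0]])
    return hist.astype(int).ravel()

rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 1.0, size=(500, 2))  # stand-in for a generated pattern
counts = counts_per_unit(pts, k=10)         # one count per sample unit
freq = np.bincount(counts)                  # frequency distribution of counts
```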

The generation of spatial patterns by the Gibbs process method tends to be much slower than by the Poisson cluster method. The computer-generated patterns referred to in this chapter were derived using the Gibbs method. Most of the computer-generated patterns referred to in subsequent chapters were derived by the Poisson cluster method.


In this chapter we present three methods for sequential classification of pest abundance or pest incidence. With such methods, the need for further sample information is evaluated after each sample unit is examined by making one of three possible decisions: (i) the density is classified as greater than the critical density (cd), and sampling is stopped; (ii) the density is classified as less than or equal to cd, and sampling is stopped; or (iii) an additional sample unit is taken and examined.

A classification ‘greater than the critical density’ will probably trigger some kind of intervention. The decisions are reached by comparing total counts to stop boundaries. The three sequential methods differ in the shape of the stop boundaries produced and in the parameters used to specify the boundaries. We describe each of the methods and illustrate how various parameters influence the operating characteristic (OC) and average sample number (ASN) functions for each. The OC and ASN functions are estimated using simulation. Finally, we compare the three procedures with one another.
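As a generic sketch of this three-decision loop (not any one of the three methods, which are specified later in the chapter), the following Python code compares a running total count to hypothetical stop-boundary functions lower(n) and upper(n); the sampling function, boundary shapes, and cap on sample size are all illustrative placeholders.

```python
import random

def sequential_classify(draw_sample_unit, lower, upper, max_n=1000):
    """Classify density relative to cd by comparing total counts to boundaries."""
    total, n = 0, 0
    while n < max_n:
        # Decision (iii): take and examine one more sample unit.
        total += draw_sample_unit()
        n += 1
        # Decision (i): the total exceeds the upper stop boundary.
        if total > upper(n):
            return 'greater than cd', n
        # Decision (ii): the total falls below the lower stop boundary.
        if total < lower(n):
            return 'less than or equal to cd', n
    return 'undecided at max_n', n

# Toy run with illustrative linear boundaries (not from the text):
decision, n_used = sequential_classify(
    draw_sample_unit=lambda: random.randint(0, 4),  # hypothetical pest counts
    lower=lambda n: 2.0 * n - 8.0,
    upper=lambda n: 2.0 * n + 8.0,
)
```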

In Chapter 3 we introduced the notion of examining a batch of sample units and deciding whether, on the basis of the data so far, it was possible to come to a conclusion, or whether another batch of sample units was required. The idea of reducing the batch size to one sample unit was suggested, but there we ran up against the assumption of normality; there are few, if any, pest counts that can be described by a normal distribution, whatever the sample unit. If we had nevertheless proceeded to calculate OC and ASN functions (not difficult), these functions might have been misleading. The probability distributions described in Chapter 4 come much nearer to describing actual counts of pests, and, in any particular instance, one among them is usually good enough to give a reliable representation. We will make