Fig. 5.21 Weighted median example. Each pixel value is inserted into the extended pixel vector multiple times, as specified by the weight matrix W. For example, the value 0 from the center pixel is inserted three times (since W(0,0) = 3) and the pixel value 7 twice. The pixel vector is sorted and the center value (2) is taken as the median.
(The figure shows the 3 × 3 image neighborhood I(u, v), the weight matrix W = (1 2 1 / 2 3 2 / 1 2 1), the extended pixel vector A, and the sorted vector a0 … a2n, whose center element an is the weighted median.)
Since the term “nonlinear” refers to anything that is not linear, a multitude of filters fall into this category, including the morphological filters for binary and grayscale images, which are discussed in Ch. 9. Other types of nonlinear filters, such as the corner detector described in Ch. 7, are often described algorithmically and thus defy a simple, compact description.

In contrast to the linear case, there is usually no “strong theory” for nonlinear filters that could, for example, describe the relationship between the sum of two images and the result of a median filter, as Eqn. (5.23) does for linear convolution. Similarly, little (if anything) can be stated in general about the effects of nonlinear filters in frequency space.
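To make the weighted median of Fig. 5.21 concrete, the following is a minimal sketch of that computation in plain Java. The class name and the neighborhood values in main() are only illustrative assumptions, not taken from the figure or from a particular implementation; only the weight matrix corresponds to W in Fig. 5.21.

import java.util.Arrays;

public class WeightedMedianDemo {
    // Computes the weighted median of the values a[k] with integer weights
    // w[k]: each a[k] is inserted w[k] times into an extended vector, which
    // is sorted; its center element is the result.
    static int weightedMedian(int[] a, int[] w) {
        int n = 0;
        for (int wk : w) n += wk;          // length of the extended vector
        int[] ext = new int[n];
        int idx = 0;
        for (int k = 0; k < a.length; k++)
            for (int c = 0; c < w[k]; c++)
                ext[idx++] = a[k];         // insert a[k] exactly w[k] times
        Arrays.sort(ext);
        return ext[n / 2];                 // center element of the sorted vector
    }

    public static void main(String[] args) {
        // hypothetical 3x3 neighborhood (row by row) and the weight matrix
        // W = (1 2 1 / 2 3 2 / 1 2 1), also flattened row by row
        int[] a = {3, 7, 2, 8, 0, 9, 1, 0, 5};
        int[] w = {1, 2, 1, 2, 3, 2, 1, 2, 1};
        System.out.println(weightedMedian(a, w));  // prints 3 for these values
    }
}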
5.5 Implementing Filters
When implementing filters, particular attention should be paid to the code inside the inner processing loops, because these are executed most often. This applies especially to “expensive” instructions, such as method invocations, which may be relatively time-consuming.
In the examples, we have intentionally used the ImageJ standard methods getPixel() for reading and putPixel() for writing image pixels, which is the simplest and safest approach to accessing image data but also the slowest, of course. Substantial speed can be gained by using the quicker read and write methods get() and set(), defined for class ImageProcessor and its subclasses. Note, however, that these methods do not check if the passed image coordinates are valid.
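As a rough illustration (the class name below is a hypothetical placeholder, assuming an 8-bit image), the following sketch inverts an image using the unchecked get() and set() accessors; note that the image dimensions are read once, outside the loops:

import ij.process.ImageProcessor;

public class InvertDemo {
    // Inverts an 8-bit image using the unchecked get()/set() accessors.
    static void invert(ImageProcessor ip) {
        int w = ip.getWidth();     // read once, outside the loops
        int h = ip.getHeight();
        for (int v = 0; v < h; v++) {
            for (int u = 0; u < w; u++) {
                int p = ip.get(u, v);      // no coordinate checking!
                ip.set(u, v, 255 - p);     // (u, v) must lie inside the image
            }
        }
    }
}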
Maximum performance can be obtained by accessing the pixel arrays directly.
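For an 8-bit image, direct array access could look roughly like the following sketch (again, the class name is only a placeholder); getPixels() returns the byte[] pixel array of a ByteProcessor, stored row by row:

import ij.process.ByteProcessor;

public class DirectAccessDemo {
    // Inverts an 8-bit image by manipulating its pixel array directly.
    static void invert(ByteProcessor bp) {
        byte[] pixels = (byte[]) bp.getPixels();  // pixels stored row by row
        int w = bp.getWidth();
        int h = bp.getHeight();
        for (int v = 0; v < h; v++) {
            for (int u = 0; u < w; u++) {
                int i = v * w + u;            // 1D index of pixel (u, v)
                int p = pixels[i] & 0xFF;     // signed byte -> 0..255
                pixels[i] = (byte) (255 - p);
            }
        }
    }
}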
5.5.2 Handling Image Borders
As mentioned briefly in Sec. 5.2.2, the image borders require special attention in most filter implementations. We have argued that theoretically no filter results can be computed at positions where the filter matrix is not fully contained in the image array. Thus any filter operation would reduce the size of the resulting image, which is not acceptable in most applications. While no formally correct remedy exists, there are several more or less practical methods for handling the remaining border regions:
Method 1: Set the unprocessed pixels at the borders to some constant value (e.g., “black”). This is certainly the simplest method, but not acceptable in many situations because the image size is incrementally reduced by every filter operation.
Method 2: Set the unprocessed pixels to the original (unfiltered) image values. Usually the results are unacceptable, too, due to the noticeable difference between filtered and unprocessed image parts.
Method 3: Expand the image by “padding” additional pixels around it and apply the filter to the border regions as well. Fig. 5.22 shows different options for padding images.
A. The pixels outside the image have a constant value (e.g., “black” or “gray”, see Fig. 5.22(a)). This may produce strong artifacts at the image borders, particularly when large filters are used.
B. The border pixels extend beyond the image boundaries (Fig. 5.22(b)). Only minor artifacts can be expected at the borders. The method is also simple to compute and is thus often considered the method of choice (a minimal sketch of this scheme is given at the end of this section).
C. The image is mirrored at each of its four boundaries (Fig. 5.22(c)). The results will be similar to those of the previous method unless very large filters are used.
D. The image repeats periodically in the horizontal and vertical directions (Fig. 5.22(d)). This may seem strange at first, and the results are generally not satisfactory. However, in discrete spectral analysis, the image is implicitly treated as a periodic function, too (see Ch. 18). Thus, if the image is filtered in the frequency domain, the results will be equal to filtering in the space domain under this repetitive model.
Fig. 5.22 Methods for padding the image to facilitate filtering along the borders. The assumption is that the (nonexistent) pixels outside the original image are either set to some constant value (a), take on the value of the closest border pixel (b), are mirrored at the image boundaries (c), or repeat periodically along the coordinate axes (d).
None of these methods is perfect and, as usual, the right choice depends upon the type of image and the filter applied. Notice also that the special treatment of the image borders may sometimes require more programming effort (and computing time) than the processing of the interior image.
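To illustrate padding option B (replicating the closest border pixel), the following sketch clamps the neighborhood coordinates of a simple 3 × 3 box filter to the valid image range. The class name is a hypothetical placeholder and the code is not meant as an optimized implementation:

import ij.process.ImageProcessor;

public class BorderDemo {
    // 3x3 box filter; coordinates outside the image are clamped to the
    // nearest border pixel (padding option B above).
    static void box3x3(ImageProcessor ip) {
        int w = ip.getWidth();
        int h = ip.getHeight();
        ImageProcessor src = ip.duplicate();   // unmodified copy to read from
        for (int v = 0; v < h; v++) {
            for (int u = 0; u < w; u++) {
                int sum = 0;
                for (int j = -1; j <= 1; j++) {
                    for (int i = -1; i <= 1; i++) {
                        int uu = Math.min(Math.max(u + i, 0), w - 1);  // clamp x
                        int vv = Math.min(Math.max(v + j, 0), h - 1);  // clamp y
                        sum += src.get(uu, vv);
                    }
                }
                ip.set(u, v, sum / 9);
            }
        }
    }
}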
5.5.3 Debugging Filter Programs
Experience shows that programming errors can hardly ever be avoided, even by experienced practitioners. Unless errors occur during execution (usually caused by trying to access nonexistent array elements), filter programs always “do something” to the image that may be similar but not identical to the expected result. To ensure that the code operates correctly, it is not advisable to start with full, large images but first to experiment with small test cases for which the outcome can easily be predicted. Particularly when implementing linear filters, a first “litmus test” should always be to inspect the impulse response of the filter (as described in Sec. 5.3.4) before processing any real images.
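As an example of such a “litmus test”, the following sketch applies a filter to a unit impulse placed in an otherwise empty float image; if the implementation is correct, the neighborhood of the impulse should reproduce the filter’s impulse response. The class name and kernel values are only placeholders for the filter actually under test, and note that ImageJ’s Convolver normalizes the kernel by its sum by default:

import ij.process.FloatProcessor;

public class ImpulseTest {
    public static void main(String[] args) {
        FloatProcessor fp = new FloatProcessor(7, 7);  // all pixels are 0
        fp.setf(3, 3, 1.0f);                           // unit impulse at the center

        // placeholder kernel; substitute the filter to be tested
        float[] kernel = {
            1, 2, 1,
            2, 4, 2,
            1, 2, 1 };
        fp.convolve(kernel, 3, 3);   // ImageJ normalizes by the kernel sum

        // print the 3x3 neighborhood of the impulse: it should now show
        // the filter's impulse response (here, the normalized kernel)
        for (int v = 2; v <= 4; v++) {
            for (int u = 2; u <= 4; u++) {
                System.out.printf("%.3f ", fp.getf(u, v));
            }
            System.out.println();
        }
    }
}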