III. The Concept of Resolution
Resolution can be simply defined in operational terms.
Resolution is the ability to distinguish two entities as separate and distinct. This simple definition becomes more complex when applied to real-world situations, as will become obvious below in the discussion of spatial resolution. Nevertheless, the basic rules apply and can be consistently used when considering the concept of resolution.
A. Spatial Resolution
Using the operational definition, spatial resolution is the ability of a method to distinguish two separate objects that are positioned close to one another (Fig. 4). This depends on the intrinsic resolving power of the instrument in question as well as the environment in which the objects exist and the display parameters for the resultant data set. For example, an individual with 20/20 vision should be able to resolve two average-size people walking side by side at a distance of 20 to 30 feet. However, under low light conditions, fog, or other adverse visual environmental factors, the perception may be that there is only one large individual. If conditions are very poor, neither individual may be seen at all. Similarly, two dots on a video monitor that can easily be distinguished when the monitor is properly adjusted may merge into one hazy cloud if the monitor is out of focus (Fig. 4). Thus, one sees that spatial resolution and detection can be separated into the intrinsic resolving power of the instrument and the environmental and display factors that affect contrast, focus, and other detection factors.
1. Intrinsic Resolution
A thorough understanding of intrinsic resolution is important. The intrinsic resolution of an imaging system is
determined by the effective resolution of the individual detectors in the system. One approach to its measurement is to use a sharply defined object with high contrast to the background and move it parallel to the face of a detector in the system. The result is called the line-spread function, and it is a measure of the intrinsic resolution of the detector. The line-spread function is typically an approximately Gaussian curve, the width of which at half its maximal height [full width at half-maximum (FWHM)] is usually given as a measure of the intrinsic resolution of a detector. This type of measurement defines the limit of spatial resolution of a particular detection system.
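The FWHM of a measured line-spread function can be estimated numerically from sampled data. The sketch below is an illustration, not taken from the original text; it assumes a Gaussian profile (for which the FWHM is 2√(2 ln 2) σ ≈ 2.355 σ) and locates the half-height crossings by linear interpolation:

```python
import numpy as np

def fwhm(positions, response):
    """Estimate the full width at half-maximum of a sampled
    line-spread function via linear interpolation at half height."""
    half = response.max() / 2.0
    above = np.where(response >= half)[0]
    i, j = above[0], above[-1]

    def cross(k0, k1):
        # linear interpolation of the half-height crossing between samples
        x0, x1 = positions[k0], positions[k1]
        y0, y1 = response[k0], response[k1]
        return x0 + (half - y0) * (x1 - x0) / (y1 - y0)

    left = positions[i] if i == 0 else cross(i - 1, i)
    right = positions[j] if j == len(response) - 1 else cross(j, j + 1)
    return right - left

# a Gaussian line-spread function with sigma = 2 mm
x = np.linspace(-10, 10, 2001)
sigma = 2.0
lsf = np.exp(-x**2 / (2 * sigma**2))
print(fwhm(x, lsf))  # approximately 2.355 * sigma = 4.71 mm
```

The same routine applies to any measured profile; the Gaussian is used here only so the result can be checked against the analytic value.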
2. Signal-to-Noise Factors
The signal-to-noise ratio (S/N) is a measure of error caused by statistical inaccuracies in the data as well as instrument errors. A measure of the statistical accuracy of data from an imaging device is the reproducibility of the measurement, which can be expressed in terms of the standard deviation or variance of a series of identical measurements.
A common way to express S/N for an imaging system is in terms of the average standard deviation of a pixel value over a series of measurements from a uniform phantom signal (Hoffman and Phelps, 1986). These measurements are defined over a specific number of accumulated imaging sessions in a specified area in the center of the object of interest. Systems with the maximal S/N will provide the highest accuracy and most reproducible results. In general, such systems provide optimal image resolution for a given intrinsic resolution. Many factors contribute to S/N for imaging systems, and these factors vary from imaging device to imaging device. As such, a discussion of the S/N characteristics of different imaging techniques can be found in the appropriate chapters of this text.
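This definition can be sketched numerically. The example below is hypothetical (simulated acquisitions of a uniform phantom with a true signal of 100 counts and noise of standard deviation 5; the region-of-interest size is arbitrary), and simply computes the per-pixel standard deviation over the series, averaged within a central region:

```python
import numpy as np

rng = np.random.default_rng(0)

# simulated repeated acquisitions of a uniform phantom:
# true signal 100 counts, additive noise of standard deviation 5
acquisitions = 100.0 + rng.normal(0.0, 5.0, size=(50, 32, 32))

# central region of interest (here the middle 16 x 16 pixels)
roi = acquisitions[:, 8:24, 8:24]

mean_signal = roi.mean()
# per-pixel standard deviation over the series, averaged across the ROI
avg_sd = roi.std(axis=0, ddof=1).mean()

snr = mean_signal / avg_sd
print(f"mean = {mean_signal:.1f}, sd = {avg_sd:.2f}, S/N = {snr:.1f}")
```

With these simulated values the ratio comes out near 100/5 = 20, illustrating how the average pixel standard deviation sets the S/N for a given signal level.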
3. Image Resolution
Image resolution is affected by sampling, both linear and angular; by the fineness of the grid in the final image display (pixel dimension); by the degree of spatial smoothing during or following the imaging process; and by the hardware design of the overall system (Hoffman and Phelps, 1986). Differences between system resolution and intrinsic resolution provide a measure of how well the entire imaging system is working as a unit (Hoffman and Phelps, 1986). The choice of the object used to assess image resolution can have a significant effect on the resultant answer.
If a single object of high contrast is imaged, there will be minimal background activity to contribute to noise in the measurement (Fig. 4). In addition, minimal smoothing and filtering of the data, in a manner that maximizes the identification of single objects, will produce a higher spatial resolution value than one would actually realize in practice with a complex and heterogeneous neuroanatomical structure. The latter, being a distributed image typical of in vivo situations found in the nervous system, adds background
noise to all points in the image. Thus, the signal-to-noise environment is much worse in a distributed source, and noise reduction in the system can be achieved only by increasing the total number of measurements or by spatial averaging/smoothing of the data (Hoffman and Phelps, 1986). The greater the spatial averaging, the smoother and less noisy the appearance of the image. This occurs, however, at the expense of spatial resolution. When resolution is critical in noisy image sets, smoothing the image may provide boundary information, and the calculation of quantitative information can then be performed on the original unsmoothed data set in a two-step process.
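The two-step process can be sketched on a simple synthetic profile. Everything in this example is an illustrative assumption (a 1D profile, an object of value 10 on a background of 2, boxcar smoothing, a half-height threshold); the point is only that the boundary is found on the smoothed data while the quantitative value is taken from the original unsmoothed data:

```python
import numpy as np

def boxcar_smooth(profile, k):
    """Moving-average smoothing of a 1D profile with window k."""
    kernel = np.ones(k) / k
    return np.convolve(profile, kernel, mode="same")

rng = np.random.default_rng(1)

# synthetic noisy 1D profile: object of value 10 on background of 2
idx = np.arange(200)
truth = np.where((idx >= 80) & (idx < 120), 10.0, 2.0)
noisy = truth + rng.normal(0.0, 2.0, truth.size)

# step 1: locate the object boundary on the SMOOTHED data,
# thresholding halfway between background and object level
smooth = boxcar_smooth(noisy, 9)
inside = smooth > 6.0

# step 2: quantify on the ORIGINAL unsmoothed values within that boundary
estimate = noisy[inside].mean()
print(estimate)  # close to the true object value of 10
```

Smoothing here buys a reliable boundary (the noise standard deviation drops by roughly √9 = 3), while the final quantitation avoids the resolution loss the smoothing would otherwise impose.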
4. Axial Resolution
Axial resolution pertains to techniques that involve slabs or slices of tissue, whether these are obtained in vivo and noninvasively, as with tomographic techniques such as CT, MRI, PET, or SPECT, or by physical sectioning of tissue, in which the axial thickness is determined by the interval movements of the blade relative to the tissue block. Axial resolution in tomographic imaging contributes greatly to the overall three-dimensional spatial resolution, since spatial averaging occurs in three dimensions in tomographic techniques, as opposed to physical sectioning (Hoffman et al., 1979). A more complete discussion of this issue is provided under Partial Volume Effects.
5. Modulation Transfer Function
While the line-spread function can be used to determine detector spatial resolution, the modulation transfer function (MTF) can be used to define the system performance of an imaging device in terms of its ability to measure spatial frequencies. This is typically obtained by scanning phantoms consisting of alternating bars of high and low signal. The spatial frequency (cycles/distance) of a bar phantom with bar width N is 1 cycle/2N (i.e., one cycle spans the width of a pair of bars).
Thus, a 5-mm bar phantom has a basic spatial frequency of 1 cycle/cm. A qualitative rule of thumb in imaging is that the resolution of a bar phantom requires a FWHM resolution approximately equal to 1.4–2.0 times the width of the bars (Hoffman and Phelps, 1986). For example, a system with a 10-mm image resolution should be able to resolve 5- to 7-mm bar phantoms.
The MTF is an imaging analog of the frequency response curve used to evaluate audio equipment. In audio equipment evaluations, pure tones of various frequencies serve as input for the component to be tested and the relative amplitude of the output signal is measured. The comparison between the amplitudes of the input and output signals as a function of different frequencies defines how much of each frequency will pass through the system to the listener. Similarly, the MTF of an imaging system measures the fraction of the amplitude of each spatial frequency that can be transferred
from the object to the final image. The MTF can be calculated from the line-spread function by

MTF(ν) = ∫A(x) cos(2πνx) dx / ∫A(x) dx,     (1)

where A(x) is the magnitude of the line-spread function at a distance x from the origin of the line-spread function coordinate system, and ν is the spatial frequency in units of cycles/distance (Hoffman and Phelps, 1986). The limits of integration are usually over the full field of view of the imaging system but at least 10 times the FWHM. The result is a function that gives the fraction of signal amplitude the system will transfer to the image at each spatial frequency.
If the MTF is very low at a given spatial frequency, any signal at that frequency in the data set is most likely statistical noise. The highest frequency one can hope to reliably measure is 1/(2 × sampling distance). In any imaging system, the overall MTF is the product of the MTFs of the components of the imaging chain, such as the MTF corresponding to the intrinsic resolution of a single detector, the MTF due to linear sampling, etc. Each step has an image degradation factor, and any loss in the MTF at a given step can never be recovered (Hoffman and Phelps, 1986).
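Eq. (1) can be evaluated numerically from a sampled line-spread function. The sketch below assumes a Gaussian line-spread function with 10-mm FWHM (for which the MTF has the closed form exp(−2π²σ²ν²), usable as a check); the integration range of ±50 mm satisfies the 10 × FWHM guideline:

```python
import numpy as np

def mtf(x, lsf, freqs):
    """Numerical evaluation of Eq. (1):
    MTF(nu) = integral A(x) cos(2*pi*nu*x) dx / integral A(x) dx."""
    dx = x[1] - x[0]
    norm = lsf.sum() * dx
    return np.array(
        [(lsf * np.cos(2 * np.pi * f * x)).sum() * dx for f in freqs]
    ) / norm

# Gaussian line-spread function with FWHM = 10 mm
fwhm_mm = 10.0
sigma = fwhm_mm / (2 * np.sqrt(2 * np.log(2)))   # sigma ~ 4.25 mm
x = np.linspace(-50.0, 50.0, 4001)               # >= 10 * FWHM
lsf = np.exp(-x**2 / (2 * sigma**2))

freqs = np.array([0.0, 0.02, 0.05, 0.1])         # cycles/mm
print(mtf(x, lsf, freqs))
```

At ν = 0 the MTF is 1 by construction, and it falls toward 0 as the spatial frequency approaches the resolution limit of the line-spread function, which is exactly the behavior Eq. (1) describes.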
6. Partial Volume Effects
All imaging techniques are subject to partial volume effects. Simply defined, partial volume effects are the three-dimensional averaging that takes place within the volume defined by the spatial resolution of the system. For tomographic techniques such as CT, MRI, PET, and SPECT, the term partial volume has come to represent the diminished contrast of an object that only partially occupies the thickness of the tomographic plane. Since most imaging devices today utilize voxels whose axial dimensions exceed their transverse dimensions, axial partial volume effects are more pronounced and, therefore, more widely recognized.
Transverse partial volume effects also can be significant, particularly when the volume sampled by a detector in the transverse direction is large relative to the structure size (Hoffman et al., 1979; Mazziotta et al., 1981). Such a situation exists in in vivo human tomographic imaging, in EEG, and at the spatial limits of autoradiography (Lear, 1986;
Lear et al., 1983, 1984). Since the central nervous system consists of a composite of heterogeneous tissues, the three-dimensional structure size relative to the three-dimensional resolution of the imaging instrument provides an estimate of partial volume effects. That is to say, if a structure of interest is small relative to the voxel (three-dimensional volume element) of resolution of the imaging device, then the quantitative values for and the appearance of that object will be blurred due to averaging of the object with its surrounding structures. When the object of interest is large relative to the size of the resolution voxel of the imaging instrument, it is seen clearly and its quantitative values will be accurately measured to the limits of precision of the imaging device. When an object of interest has at least one dimension smaller than the width of the line-spread function of the imaging system (or approximately two times the FWHM), that object will only partially occupy the sensitive volume of the detectors viewing that dimension. As a result, there is an underestimation of the signal from the object. In general, the object must be approximately two times the FWHM of the imaging system in all dimensions in order to be accurately visualized and quantified (Hoffman and Phelps, 1986;
Mazziotta et al., 1981).
Another factor exists with regard to large objects. When the border of one object with another falls within the volume of a single resolution voxel, averaging will occur between the object of interest and its neighbor on the other side of the boundary. Once again, this is an opportunity for partial volume effects with resultant blurring of the object edges. A typical example would be the boundary of the ventricle with the caudate nucleus in human in vivo tomographic imaging.
The smaller the voxel of resolution of the imaging device, the sharper such borders will be. The orientation of objects relative to the position and placement of the resolution voxels of the imaging device is also important. If a given subject is scanned in two different positions relative to the imaging device, for example, in a PET instrument, different results may be obtained along borders because differing relationships and varying partial volume effects will occur in the boundary voxels along that border (Mazziotta et al., 1981). Thus, for the most consistent and reproducible imaging results, the positioning of the object (e.g., subject) and the orientation of the imaging device (e.g., scanner gantry) should be consistent between imaging sessions to obtain the most accurate and, therefore, reproducible measurements across the entire sampling volume. For imaging devices with uniform and approximately Gaussian line-spread functions, the partial volume effects will simply scale with image resolution. Hoffman et al. (1979) defined a term known as the recovery coefficient, originally for use with PET systems, which refers to the ratio of the apparent to true signal. For a bar phantom, the recovery coefficient can be calculated from the image line-spread function and the narrow dimension of the bar: it is equal to the ratio of the area of the line-spread function overlapped by the bar to the total area of the line-spread function. Within the image plane one can assume the object always overlaps the center of the line-spread function. However, in the axial dimension the overlap can fall on any part of the line-spread function, and this factor must be considered in data interpretation. If the shape of the object is simple (i.e., cubic), the recovery coefficient of the object can be estimated as the product of the recovery coefficients of its three dimensions (Hoffman et al., 1979).
Thus, if the imaging system has an image resolution of 1 cm FWHM in all three dimensions, a 1-cm cube would have a recovery coefficient of (0.75)³ ≈ 0.42. That is, only 42% of the signal from the object would be recorded in the final image.
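The one-dimensional recovery coefficient can be computed directly as the fraction of the line-spread function's area that the bar overlaps. The sketch below assumes an ideal Gaussian line-spread function centered on the bar, for which that overlap integral reduces to an error function; it yields roughly 0.76 per dimension and 0.44 for the cube, close to the 0.75 and 0.42 quoted above for a measured image line-spread function:

```python
import math

def recovery_coefficient(width, fwhm):
    """Fraction of a centered Gaussian line-spread function's area
    overlapped by a bar of the given width (one dimension).
    For X ~ N(0, sigma), P(|X| < w/2) = erf(w / (2 * sqrt(2) * sigma))."""
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    return math.erf(width / (2.0 * math.sqrt(2.0) * sigma))

rc_1d = recovery_coefficient(10.0, 10.0)  # 1-cm bar, 1-cm FWHM
rc_cube = rc_1d ** 3                      # cube: product over three dimensions
print(rc_1d, rc_cube)
```

The assumption that the object is centered on the line-spread function holds within the image plane; in the axial dimension, as noted above, the overlap can fall anywhere on the line-spread function and the recovery coefficient would be correspondingly lower.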
In the brain, the shapes of various structures are not regular, making precise quantitation difficult in most instances. For structures smaller than two times the FWHM of the imaging system in each dimension, there will be spillover of activity from objects with high signals to those with low signals. This results in averaging and loss of spatial resolution (Mazziotta et al., 1981).
7. Resolution Uniformity
Ideally, one would like all components of an imaging system to have equal resolution, thereby defining a single value for the entire system. In practice, this is often not true (Hoffman et al., 1982). For example, in tomographic imaging, particularly emission tomography, such as PET and SPECT, spatial resolution is maximal in the center of the field of view and diminishes as a function of distance away from the center (Hoffman et al., 1982). The discussion of resolution uniformity is unique to each type of imaging technique and, as such, the reader is referred to the specific techniques defined in other chapters of this text. Nevertheless, it suffices to say that image resolution cannot always be considered to be uniform and the investigator is well advised to know the spatial resolution at each point in the effective field of view of the imaging device in order to properly interpret data.
8. Object Movement
Loss of image resolution results from object movement in the field of view. This may result from gross movements of a subject’s head in a tomographic imaging device or physiologic movement of the tissue under the field of view, as might occur with respiratory or cardiac variations in intrinsic signal imaging. Imaging approaches that employ postmortem tissue, such as autoradiography or light and electron microscopy, do not suffer from this artifact unless it is investigator induced.
For the situation in which the subject is voluntarily moving in the scanner, the primary remedies are a specially designed head restraint system (Mazziotta et al., 1982) and very short scan times. In practice, no head restraint system short of rigid immobilization of the skull to the scanner, as is sometimes used in patients with stereotactic frames bolted to their skulls, can achieve absolute immobilization of the subject. Many head restraint devices have been described in the literature, but none of them absolutely limits head movement. The use of extremely short scan times minimizes the opportunity for movement during the interval in which image data are being collected. This has proven to be very effective for in vivo tomographic techniques such as CT, for which scan times have declined dramatically over the past two decades from minutes to seconds. A final approach to controlling for voluntary movement is to track the movement with an external sensing device and then correct for the movement or discard data collected during the movement. A serious problem with correcting data for movement was mentioned above, namely, nonuniformity of resolution. That is, if a subject moves from one position to another in an imaging device with nonuniform resolution, different values will be obtained for the same structure when it is imaged at different locations in the field of view. In PET devices this is because image resolution deteriorates away from the center of the field of view. In MRI it can occur because of inhomogeneities in the magnetic field.
Respiratory and cardiac motion are also difficult problems.
In the case of optical intrinsic signal imaging, monitoring of the respiratory and cardiac cycles is typically employed (Toga et al., 1995). Since this type of imaging occurs intraoperatively (in both humans and animal models), ventilation can be momentarily suspended during the course of this short measurement. In addition, gating with the cardiac cycle can help to minimize physiological motion artifacts. More recently it has become clear that physiologic motion contributes to artifacts in fMRI data sets (Kwong et al., 1992). Factors of importance in this situation include physical motion of the brain related to the cardiac and respiratory cycles as well as potential differences in oxygenation and cerebral blood volume that vary with these cycles.