
2.2 Signal Processing of the large-area iQID

2.2.5 Lens

A double-lens or tandem-lens system is coupled between the microchannel plate's phosphor screen and the CMOS readout device. The lens system receives the intensified photons collected on the phosphor screen and relays the image to the CMOS. Imaging with a single lens of focal length f involves an object located at a distance g and a detector at an image distance b. To first order, the image and object distances are related by the well-known thin-lens relation, Equation 14 below.

1/f = 1/b + 1/g     Equation 14
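The thin-lens relation can be checked numerically; the following is an illustrative sketch (the specific values are chosen here, not taken from the text) solving Equation 14 for the image distance:

```python
# Thin-lens relation (Equation 14): 1/f = 1/b + 1/g.
# Illustrative check: solve for the image distance b given the focal
# length f and object distance g.

def image_distance(f_mm: float, g_mm: float) -> float:
    """Return image distance b from 1/f = 1/b + 1/g."""
    return 1.0 / (1.0 / f_mm - 1.0 / g_mm)

# A 50 mm lens with the object at 100 mm gives b = 100 mm, i.e., the
# 1:1 imaging ratio (beta = b/g = 1) referenced later in the text.
b = image_distance(50.0, 100.0)
print(b, b / 100.0)  # 100.0 mm, magnification 1.0
```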

The transmission efficiency from the phosphor screen to the readout device for a single-lens coupling system is calculated from Equation 15 below, where d is the lens diameter.

η = 1 / (4k²(1 + β)² + β²)     Equation 15

where

β = b/g ;  k = f/d

Imaging with a single high-aperture 50 mm lens at F1.0 yields just 5.9% transmission efficiency for a 1:1 imaging ratio. Figure 13 shows a schematic of a lens imaging system.
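As a quick sanity check of Equation 15, a short sketch (assuming β = b/g = 1 for 1:1 imaging and k = f/d = 1 for the 50 mm F1.0 lens) reproduces the quoted 5.9%:

```python
# Single-lens coupling efficiency (Equation 15):
#   eta = 1 / (4*k**2 * (1 + beta)**2 + beta**2),  beta = b/g, k = f/d.
# For a 50 mm F1.0 lens at 1:1 magnification, beta = 1 and k = 1.

def single_lens_efficiency(k: float, beta: float) -> float:
    return 1.0 / (4.0 * k**2 * (1.0 + beta)**2 + beta**2)

eta = single_lens_efficiency(k=1.0, beta=1.0)
print(f"{eta:.1%}")  # 5.9%
```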

Figure 13: Schematic design of a single-lens optical imaging system from a phosphor screen output window to the CMOS readout device [53].

In contrast, a tandem lens system consists of an imaging path from the first lens (L1, the collimator lens) to infinity and from infinity through the second lens (L2, the imaging lens), as shown in Figure 14. L1 collimates the photons emitted by the phosphor into a parallel bundle. L2, focused at infinity, then focuses this parallel bundle to a single image point in its focal plane, where the readout device is located.

Figure 14: Schematic of the tandem lens imaging system from the phosphor screen on the MCP to the CMOS readout device, as in the large-area iQID [53].

The image intensifier's phosphor screen has a glass output window, so the lens can focus on the phosphor plane through the glass. Using Equation 16 below, the transmission efficiency of a tandem lens system, here an F1.5 collimator lens of 100 mm focal length paired with an F0.85 imaging lens of 53 mm focal length, improves to 31.2%, where d₁ and d₂ are the apertures of L1 and L2.

η = [1 / (4k²β² + β²)] × (d₂² / d₁²)     Equation 16

where

β = f₂/f₁ ;  k = f₁/d₁

In other words, each imaging path in a tandem lens system is optimized for a single imaging task within a narrow spectral range: L1 images from its focal plane to infinity, and L2 images from infinity to another focal plane. The imaging scale between the phosphor screen diameter and its image on the detector is set by the ratio of the two lenses' focal lengths. This optimization yields high transmission efficiency and enhanced image quality with minimal signal contamination, such as imaging artifacts [53].
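Under this reading of Equation 16 (β = f₂/f₁, k = f₁/d₁, with each aperture following from its lens's f-number), a short sketch reproduces the quoted tandem-lens efficiency to within rounding:

```python
# Tandem-lens coupling efficiency (Equation 16):
#   eta = 1 / (4*k**2 * beta**2 + beta**2) * (d2**2 / d1**2)
# with beta = f2/f1 and k = f1/d1, using the lens pair from the text:
# F1.5 collimator (f1 = 100 mm) and F0.85 imaging lens (f2 = 53 mm).

f1, F1 = 100.0, 1.5          # collimator focal length and f-number
f2, F2 = 53.0, 0.85          # imaging-lens focal length and f-number
d1, d2 = f1 / F1, f2 / F2    # lens apertures
beta, k = f2 / f1, f1 / d1   # note: k equals the collimator f-number

eta = 1.0 / (4.0 * k**2 * beta**2 + beta**2) * (d2**2 / d1**2)
print(f"{eta:.1%}")  # ~31%, consistent with the quoted 31.2%
```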

2.2.6 CCD/CMOS Sensor

The complementary metal-oxide-semiconductor (CMOS) camera is the hardware component that records the phosphor screen's brightness during illumination for the desired exposure time. A multi-tandem lens system optically couples intensified scintillation events from the image intensifier's phosphor output to the CMOS camera. The exposure time is how long the shutter remains open for constant and continuous illumination at -180 V (ON) until switching to +80 V (OFF).

The CMOS sensor's performance is quantified by image resolution, dynamic range, frame rate, and count-rate capability. Promising CMOS technologies can deliver intensified-image data rates beyond the capabilities of CCD sensors in terms of sensitivity, resolution, speed, and dynamic range. The size of the CMOS image sensor, i.e., the effective area chosen to capture the image intensifier's phosphor screen output, dictates the imaging-path scale of the multi-tandem lens system [53].

The 40 mm image intensifier in the large-area iQID is optically coupled to a 2448 x 2048 CMOS sensor (GS3-U3-51S5M-C, SN 15405699) that can achieve up to 75 frames per second (FPS) with shutter times up to 4 seconds. Depending on the CMOS frame rate and source activity, multiple events may be counted as one event (spatial pileup) within a single image frame of the CMOS sensor. Therefore, the CMOS frame rate can be adjusted to minimize spatial pileup so that only events that interact within the same temporal window are collected in an image frame. The chosen frame rate is 20 FPS. Chapter 4 will investigate spatial pileup at higher frame rates and longer shutter settings.
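The dependence of spatial pileup on frame rate can be illustrated with a simple Poisson model; this sketch is my own illustration with an assumed event rate, not the dissertation's pileup analysis (which follows in Chapter 4):

```python
import math

# Illustrative Poisson model of spatial pileup (assumed, not from the
# dissertation). If events arrive at rate R (events/s) and the sensor
# runs at F frames per second, the mean events per frame is mu = R / F,
# and the chance a frame holds two or more events (pileup) is
#   P(pileup) = 1 - exp(-mu) * (1 + mu).

def pileup_probability(rate_per_s: float, fps: float) -> float:
    mu = rate_per_s / fps
    return 1.0 - math.exp(-mu) * (1.0 + mu)

# Shorter frames (higher FPS) leave fewer events per frame.
for fps in (20.0, 75.0):
    p = pileup_probability(rate_per_s=10.0, fps=fps)
    print(f"{fps:>4.0f} FPS: {p:.1%} of frames contain pileup")
```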

The timing limitation of the CMOS sensor is the shutter, or aperture, window. The CMOS sensor in the large-area iQID has a global shutter mechanism in which every pixel is exposed to light simultaneously, ensuring light collection starts and ends at the same time for all pixels in the imaging sensor. All pixels' exposures stop simultaneously, and each pixel transfers its charge to a non-photosensitive storage transistor, where it is held until its row is read out. Digitization occurs as the analog-to-digital converter (ADC) clocks through the sensor to digitize each row individually. This digitization decreases the detection of overlapped events.

On the other hand, delayed digitization also cuts the frame rate in half. A global clear eliminates any accumulated charge before the next exposure. In other words, every pixel is exposed simultaneously and equally. This mechanism is very beneficial when the image changes from frame to frame, as it does in the large-area iQID. Overall, the global shutter mechanism is vital to the decay time (i.e., shutter time) of the CMOS sensor. Other parameters that play an essential role in the event rate of most photon-counting CCD/CMOS detectors are the decay times of the scintillator/phosphor screens, the CMOS sensor frame rate, and the cluster size, or number of pixels per cluster. Previously, the CMOS image transfer time, or frame overhead time (~45 µs per frame), was

In this dissertation, the count-rate capability and spatial pileup of the large-area iQID will be investigated further (Chapter 4).

2.2.7 Image Processing

Once the irradiance pattern is amplified and imaged onto a CCD/CMOS camera sensor, the resulting image and corresponding data are saved into a list-mode file for further post-processing. Post-processing analysis is enabled by high-performance graphics processing units (GPUs) from NVIDIA and multi-core central processing units (CPUs) for real-time event detection and location estimation.

This mechanism compresses the data to a smaller size to conserve computer storage/memory while retaining the information relevant for further processing. The list-mode file includes the 2D position estimate, timestamp, summed pixel value within an event cluster (cluster intensity), area covered by the event cluster (cluster area), cluster shape (eccentricity), and other associated event pixels. LABVIEW is the acquisition software coupled to the CMOS sensor to acquire the raw images for further processing. Image data from the MCP's phosphor screen must be appropriately synchronized with the LABVIEW image acquisition software to capture the intensified image on the CMOS sensor. Table 2 shows the imaging parameters set in LABVIEW. During post-processing, a separate 2D image can be generated from the list-mode file to estimate each event cluster's centroid in real time, based on pixel values that exceed a chosen imaging threshold. This entire post-processing mechanism is referred to as frame parsing. Figure 15 shows an example of an event cluster collected within a 21 x 21-pixel window; this continues until the entire 2448 x 2048 CMOS image is processed and saved as a list-mode file.
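A hypothetical sketch of one list-mode record follows; the field names and types here are assumptions for illustration, since the actual file layout is not specified in the text:

```python
from dataclasses import dataclass

# Hypothetical list-mode record (field names are assumptions, not the
# actual file format) holding the per-event quantities described above:
# 2D position estimate, timestamp, cluster intensity, cluster area,
# and eccentricity.

@dataclass
class ListModeEvent:
    x: float             # centroid column (pixels)
    y: float             # centroid row (pixels)
    timestamp_s: float   # frame timestamp (seconds)
    intensity: float     # summed pixel values within the cluster
    area_px: int         # number of pixels in the cluster
    eccentricity: float  # cluster shape descriptor

event = ListModeEvent(x=1203.4, y=987.1, timestamp_s=0.05,
                      intensity=4210.0, area_px=18, eccentricity=0.42)
print(event.area_px)  # 18
```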

Figure 15: Example 21 x 21 event-cluster large-area iQID image of an Am-241 gamma source. Imaging parameters were set to nominal values: threshold of 60, intensifier gain of 0.07 V, and CMOS shutter time of 49 ms.

Table 2: Current Operating Settings of the LA-iQID

DAQ Parameter           Value
CCD Frame rate (fps)*   20
Shutter (ms)*           49.7497
CCD gain (dB)*          47.9943
Cluster size*           21 x 21
Filter type             Median 3 x 3
CCD pixels              2448 x 2048
Threshold*              55 to 85

The frame parsing algorithm consists of four main steps, discussed in detail here.

First, raw CCD frames are acquired from radiation interactions within the scintillator that exceed the required imaging threshold; this dissertation sets the minimum pixel-value threshold for an event to be collected to 60. Second, a 3x3 median filter removes "noisy," "dead," or small-island pixels of the CMOS sensor from the image. Third, individual particle interactions in the filtered image are identified as event clusters with a connected-component labeling algorithm; this dissertation defines an event cluster as 4-connected pixels forming a cluster of more than 10 pixels, using the MATLAB connected-component algorithm. Lastly, the necessary information, as described above, is saved to a list-mode file for further processing. These list-mode files enable post-processing analysis of performance in terms of spatial resolution, energy response, minimum detectable activity (MDA), detector uniformity, and count-rate capability. Chapter 3 below will discuss these performance characteristics.

Additional studies of an advanced frame parsing algorithm have been performed, which provided flexibility for event estimation and removed noisy pixels in the central-spot background of XX1332 image intensifiers [8].
