
3D Image Reconstruction of Retina using Aplanats


Academic year: 2023


A high-quality image of the internal parts of the eye gives the doctor a wealth of information about the patient. The cornea and the crystalline lens focus light entering the eye onto a curved screen at the back of the eye called the retina.

Fundus Photography

Fundus imaging records the retina, optic disc, macula, and the blood vessels in them. The fundus can be imaged directly through the pupil, which serves as both the entrance for the illuminating light and the exit for the light reflected back to the fundus camera.

Fundus Camera

The illumination system, the eye, and the imaging system must be properly aligned and focused so that light reflected from the retina exits through the unilluminated central hole. A fundus camera may also have a set of filters in front of the flash lamp, as well as astigmatic correction devices and diopter compensation lenses to correct the acquired image.

Problems with Existing Fundus Cameras

Disadvantages of and due to lenses

In many cases these aberrations can be compensated to a large extent by using a combination of simple lenses with complementary aberrations. A compound lens is a collection of simple lenses of different shapes and made of materials with different refractive indices, arranged one behind the other with a common axis.

Figure 1.1: Spherical Aberration.

Our Proposal

The numerical aperture defines the cone of light from a point source at the focal point that is received by the optical system. The aplanat magnification is given by the negative ratio of the input and output numerical apertures.

Figure 2.1: Unfolded aplanat.

Advantages over Lens

The sines of θo and φ give the input and output numerical apertures, NA1 = sin θo and NA2 = sin φ. In Fig. 2.1, ρ is the length of the light ray between the focal point 'O' and its intersection point on the primary mirror, l is the length between the intersection points on the primary and secondary mirrors, and r is the length between the intersection point on the secondary mirror and the focal point 'F'.
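
These two definitions can be sketched in a few lines of Python (the function names are mine, not from the thesis; media are assumed to be air, so NA = sin of the half-angle):

```python
import math

def numerical_apertures(theta_o_deg, phi_deg):
    """Input/output numerical apertures of an aplanat in air:
    NA1 = sin(theta_o), NA2 = sin(phi)."""
    na1 = math.sin(math.radians(theta_o_deg))
    na2 = math.sin(math.radians(phi_deg))
    return na1, na2

def aplanat_magnification(na1, na2):
    """Magnification as the negative ratio of the input and output NAs."""
    return -na1 / na2
```

For example, θo = 30° and φ = 5° give NA1 = 0.5, NA2 ≈ 0.087, and a magnification of about −5.74.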

Imaging Mechanism

The second equality holds for each θ and its corresponding φ of any light ray from the point source.

Derivation of Aplanatic Equations

Specifying equations

Derivation

From the reversibility of the system, the secondary mirror will have the same contour equation but 'm' replaced by 'm1'.

Special Case

Thermodynamic Limit

The thermodynamic limit is the area on the image plane given by the magnification of the aplanat times the size of the source.
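
A minimal sketch of this relation, assuming a circular source described by its diameter (the helper names are mine, not from the thesis):

```python
import math

def thermodynamic_spot_diameter(source_diameter, magnification):
    """Smallest image-spot diameter allowed by the thermodynamic limit:
    |m| times the source size."""
    return abs(magnification) * source_diameter

def thermodynamic_spot_area(source_diameter, magnification):
    """Corresponding area on the image plane for a circular source."""
    d = thermodynamic_spot_diameter(source_diameter, magnification)
    return math.pi * (d / 2.0) ** 2
```

Light landing outside this area on the screen is light that, in principle, need not have spread beyond the magnified image of the source.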

Numerical Aperture

Inferences from Simulations

Moving the light source axially from focus

Simulations were performed by moving the light source axially away from the focus with the following setup. From the table we deduce that when the light source is at the focus, a large fraction of the light entering the aplanat lies within the thermodynamic limit.

Moving the light source both axially and radially

Changing the LED size and moving it radially

We deduce that, for the same radial displacement, the fraction of the luminosity falling within the thermodynamic limit decreases as the size of the source increases. Likewise, as the source is moved radially off-axis, the luminosity falling within the thermodynamic limit decreases.

Limitations and Trade Offs of Aplanats

The ray tracing program is written in Python, adapted from the MATLAB toolbox Optometrika, using object-oriented programming concepts. The library implements analytical and iterative ray tracing for optical imaging using the laws of reflection and Snell's law of refraction. Currently, the library implements lenses, aplanats, flat and curved screens, and a realistic model of the human eye with an accommodating lens and a spheroidal retina.

The simulation is set up by creating the objects; the user must arrange them in the order in which they interact with the rays from the light source. The simulation stops when all the objects in the sequence have been met by the rays.
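
The sequencing idea can be sketched as a simple loop over elements (this is a toy illustration of the ordering convention, not the library's actual API; the `Shift` element and 1-D "rays" are made up for the example):

```python
class Shift:
    """Toy 'optical element': offsets every ray height by a fixed amount."""
    def __init__(self, dy):
        self.dy = dy

    def interact(self, rays):
        return [y + self.dy for y in rays]

def trace(rays, elements):
    """Propagate a ray bundle through the elements in the order the user
    arranged them; tracing ends once the last element has been met."""
    for element in elements:
        rays = element.interact(rays)
    return rays

print(trace([0.0, 1.0], [Shift(0.5), Shift(-1.0)]))  # → [-0.5, 0.5]
```

The essential point is that each element only sees the rays as transformed by everything placed before it in the sequence.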

Figure 3.2: Double reflections avoided with adjustments in parameters.

Principle of Raytracing

Principle of finding ray intersection points

The optical surface has an equation, and to find the point of intersection of a ray with it, we substitute the ray equation into the surface equation. Solving for the ray parameter and substituting it back into the ray equation gives the coordinates of the intersection point. Depending on the phenomenon occurring at the surface, i.e., reflection or refraction, the normals at the intersection points are calculated and the appropriate formulas are applied to compute the positions, directions, wavelengths, intensities, attenuations, and colors of the reflected or refracted rays.

The rays that miss the surface are taken care of by excluding them from plotting.
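For a surface with a closed-form equation, such as a sphere centered at the origin, the substitution method above reduces to solving a quadratic. The sketch below (my own minimal version, not the program's code) also returns None for rays that miss, matching the exclusion rule just described:

```python
import math

def intersect_sphere(origin, direction, radius):
    """Substitute the ray p(t) = origin + t*direction into the sphere
    equation |p|^2 = R^2 and solve the resulting quadratic for t."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None                      # ray misses: excluded from plotting
    t = (-b - math.sqrt(disc)) / (2.0 * a)   # nearer intersection first
    if t < 0:
        t = (-b + math.sqrt(disc)) / (2.0 * a)
    return tuple(o + t * d for o, d in zip(origin, direction))

def reflect(direction, normal):
    """Mirror reflection about a unit normal: d' = d - 2 (d.n) n."""
    dn = sum(d * n for d, n in zip(direction, normal))
    return tuple(d - 2.0 * dn * n for d, n in zip(direction, normal))
```

Refraction follows the same pattern, with Snell's law applied at the computed normal instead of the reflection formula.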

Raytracing inside aplanat

Due to the complexity of the contour equations of the aplanat mirrors, it was difficult to obtain a closed-form solution for the roots, so the 'fsolve' optimizer is used: it returns the roots of (non-linear) equations defined by func(x) = 0, given an initial estimate. The initialization of 'fsolve' determines both the computation time and the precision of the intersection-point computation.

For a given ray incident on either mirror surface, the point of intersection is chosen heuristically.
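
To show the idea without depending on SciPy, here is a bisection stand-in for the root solve: it finds the ray parameter t at which the implicit surface function changes sign, assuming the heuristic has supplied a bracketing interval (the function is my own sketch, not the thesis's fsolve-based code):

```python
def surface_intersection(g, t_lo, t_hi, tol=1e-10, max_iter=200):
    """Bisection on g(t) = surface_equation(ray_point(t)) over [t_lo, t_hi].
    Requires g to change sign across the interval; returns the root t,
    or None if the heuristic bracket does not actually straddle the surface."""
    g_lo = g(t_lo)
    if g_lo * g(t_hi) > 0:
        return None                      # no sign change: bad bracket
    for _ in range(max_iter):
        t_mid = 0.5 * (t_lo + t_hi)
        g_mid = g(t_mid)
        if abs(g_mid) < tol or (t_hi - t_lo) < tol:
            return t_mid
        if g_lo * g_mid <= 0:
            t_hi = t_mid
        else:
            t_lo, g_lo = t_mid, g_mid
    return 0.5 * (t_lo + t_hi)
```

As with 'fsolve', the quality of the initial interval (here the bracket, there the starting guess) governs both speed and which of several intersection points is found.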

Optical Systems

  • Aplanat
  • Lenses
  • Screen
  • Eye Model

We can extract 2D histogram data containing the number of rays falling into each grid bin, taking the weight of each ray into account. The center of rotation of the eye was taken to be 13.3 mm behind the corneal apex, i.e., 1.34 mm behind the center of the eye [20]. High precision is not important here, because the eye's center of rotation actually moves by as much as 1 mm as the eye rotates, so the notion of a center of rotation is only an approximation.
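
The weighted-histogram extraction can be sketched with a plain dictionary of square bins (a minimal stand-in for the program's screen grid; the function name and bin layout are my own):

```python
from collections import defaultdict

def weighted_histogram2d(points, weights, bin_size):
    """Accumulate per-ray weights into square grid bins on the screen,
    so each bin holds the weighted count of rays landing in it."""
    bins = defaultdict(float)
    for (x, y), w in zip(points, weights):
        key = (int(x // bin_size), int(y // bin_size))
        bins[key] += w
    return dict(bins)

h = weighted_histogram2d([(0.1, 0.1), (0.2, 0.3), (1.1, 0.1)],
                         [1.0, 0.5, 2.0], bin_size=1.0)
print(h)  # → {(0, 0): 1.5, (1, 0): 2.0}
```

Reading intensity off such a grid, rather than counting raw ray hits, is what lets attenuated rays contribute proportionally less.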

The front lens surface was modeled by a hyperboloid of revolution with sag x = (R/(1+k)) (1 − √(1 − (1+k)h²/R²)), where h is the radial distance from the axis, R the vertex radius of curvature, k the conic constant, and the height of the hyperboloid from its tip to the diameter intersection is D. Another test was to demonstrate the accommodation of the human eye by minimizing the retinal image spot.

Figure 4.3: 2D and 3D plots of the aplanat surface.

Performance of Ray Tracing Program

In each simulation, the setup consists of a human eye model, an aplanat, and a screen, as shown in the figure. The pupil has a 2 mm diameter opening, one focal point of the aplanat is located inside the eye close to the pupil, and the screen is placed a little beyond the second focal point of the aplanat. From these numbers we see that the image is formed at the narrowest cross-section of the beam of rays falling on the screen.

For the aplanat shown in that figure, the angles 4° to 46° (from the first dot to the cluster) form non-clustered points, and the remaining angles form clusters. The cluster at the end is formed because the image points of the retinal points at those angles all fall close to the edge of the primary mirror.

Figure 5.1: 2D plot of eye-aplanat-screen.

Effect of Change of Aplanat Parameters

In most of our discussions we refer to points on the retina by only the first parameter, since we mostly consider points that fall on the x-z plane on the positive side of the z-axis. Only a certain area of the retina, between a lower angle and an upper angle, can be imaged. We deduce that if we put a sensor along these non-clustered points, we can image the corresponding points on the retina.

Changing lo shifts the lower and upper angles of the retinal points being imaged. Changing ρo and ro changes the distribution of the image points from the lower to the upper angle on the retina.

Bringing Image Points outside Aplanat

By changing NA1 or NA2 we only widen or narrow the acceptance cone of the aplanat. Increasing the value increases the lower and upper retinal angles that can be imaged by the aplanat. With the increase, the distribution of image points from retinal points near the lower angle becomes wider, and from points near the upper angle becomes tighter.

Our Proposal

In this phase we image the central part of the retina, from 0° to 10°, using a mobile phone or another simple camera. The 3D images collected from these three phases are then combined to form the complete 3D image of the retina.

Figure 5.5: 2D plot showing image points of uniformly placed points on retina.

Increasing the Resolution

The first row is for the vertical angles and the second row for the horizontal angles in both images. Table 5.4 below shows the same exercise done for five single horizontal scales on the retina around the apex angles from Table 7.3, using the aplanat from the second phase.

Figure 5.15: Angle vs intensity plots for aplanat from first phase.

Obtaining the Digital Painted Model of Retina

Mapping and Placing the Sensors

Principle used for 3D image Construction

The value read by a sensor is therefore the sum of the intensity contributions from every point in the patch. This means that if we know the contribution each point makes on that sensor, we can find the value collected by it. Consider the points on the retina on the x-z plane on the positive side of the z-axis and their corresponding sensors.

The first method is to consider the rays from neighboring retinal points onto that sensor and calculate their contributions. The second method is to place another sensor beside it and look at the contributions on it from the existing longitude points, to obtain the relative weights.
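
The sensor model implied by the paragraphs above can be written in two lines; the helper for turning raw relative contributions into weights is my own naming, sketched under the assumption that only the ratios between contributions matter:

```python
def sensor_value(intensities, weights):
    """Sensor reading: sum over patch points of intensity times weight."""
    return sum(i * w for i, w in zip(intensities, weights))

def normalized_weights(raw_contributions):
    """Relative weights (e.g. from the neighboring-sensor method),
    normalized so they sum to one."""
    total = sum(raw_contributions)
    return [w / total for w in raw_contributions]
```

With the weights known, the forward model maps retinal intensities to sensor readings; image reconstruction is the inverse problem of attributing each reading back to the patch points.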

Patch weights construction

So only the average of the RGB intensities of the retina-model points that fall inside the patch boundaries was taken. The blurring is due to equal weight being given to all the points in the patch. The third and fourth rows show different views of the merged 3D images from the two aplanats.

All model points under the patch were considered when interpolating each point on the weight patch. The results include reconstructed 3D images of the retina from 0° to 100°, giving a total field of view of 200°.
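
The difference between the equal-weight average (which causes the blurring) and the weight-patch version can be sketched as follows (function names are mine; RGB triples stand in for the retina-model point colors):

```python
def patch_average_rgb(points_rgb):
    """Equal-weight average of the RGB intensities of all retina-model
    points inside one patch -- every point gets the same weight,
    which is the source of the blurring."""
    n = len(points_rgb)
    return tuple(sum(c[k] for c in points_rgb) / n for k in range(3))

def patch_weighted_rgb(points_rgb, weights):
    """Weighted variant: each point's RGB is scaled by its interpolated
    value on the weight patch before averaging."""
    total = sum(weights)
    return tuple(sum(w * c[k] for c, w in zip(points_rgb, weights)) / total
                 for k in range(3))
```

Points near the center of the patch, where the interpolated weight is larger, then dominate the patch color instead of being smeared equally with the edges.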

Figure 5.18: Different views of an example weight patch constructed.

Future Developments

  • Cutting and masking the aplanats
  • Processing the constructed 3D image of retina
  • New illumination method
  • Varying human eye models

A signal processing technique must be developed by studying the nonlinear properties of the system, and corrective transforms or filters must be designed. A method for injecting light into the eye and illuminating the retina evenly also needs to be designed for the proposed system. In this thesis, all simulations were performed on a standard eye model in a relaxed state, and the designed sensors depend on the shape of the eye and the power of its lens.

Future work should therefore also account for factors such as different eyeball shapes and different eye-lens powers. Also, as the bulge of the eye lens increases, the sensor shape will widen again.
