A survey on ear biometrics - MSU CSE

In addition, the ear shifts from the side of the neck to a more cranial and lateral location. Hillocks 1–3 form the first arch of the ear (tragus, helix, and cymba concha), while hillocks 4–6 form the second arch (antitragus, antihelix, and concha). The salient stages of a classic ear recognition system are illustrated in Figure 4. 1) Ear detection (segmentation): the first and most important stage involves locating the position of the ear in an image.

Here, a rectangular border is typically used to indicate the spatial extent of the ear in the given image. In some cases, a curve that closely fits the outer contour of the ear can be traced instead. (Footnote 3: the term is used to indicate that the identity of the images in the database is known.)

Bertillon made use of the description and some measurements of the ear as part of the Bertillonage system used to identify recidivists. They located the ear using deformable contours on a Gaussian pyramid representation of the image gradient [Burge and Burger 1997]. They represented the ear feature vector as a combination of the outer ear shape and the structure of the inner ear.

This device is used to adjust the height of the camera according to the height of the subject.

Fig. 2. The human ear develops from auricular hillocks (center) that arise in the 5th week of embryonic development.

EAR DETECTION

  • Computer-Assisted Ear Segmentation
  • Template Matching Techniques
  • Shape Based
  • Morphological Operators Techniques
  • Hybrid Techniques
  • Haar Based

In the context of 3D ear detection, Chen and Bhanu [2004] used a model-based (template-matching) technique for ear detection. (Footnote 13: the shape index is a quantitative measure of the shape of a surface at any point, expressed as a function of the maximum and minimum principal curvatures.) They used appearance-based features with low computational cost for segmentation, and a learning-based Bayesian classifier to verify whether the segmentation output was correct.
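The shape index described in the footnote above has a standard closed form due to Koenderink and van Doorn; a minimal numpy sketch is given below. The sign/range convention is an assumption here, and the one actually used by Chen and Bhanu [2004] may differ (some papers map to [-1, 1] rather than [0, 1]):

```python
import numpy as np

def shape_index(k_max, k_min):
    """Shape index at a surface point, from the maximum and minimum
    principal curvatures. Maps local shape to a scalar in [0, 1]:
    0 and 1 at the two spherical extremes, 0.5 for a symmetric saddle.
    (Convention assumed; other papers use a [-1, 1] range.)"""
    k1 = np.maximum(k_max, k_min)
    k2 = np.minimum(k_max, k_min)
    # arctan2 handles the umbilic case k1 == k2 without dividing by zero
    return 0.5 - (1.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)
```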

The technique first separates skin areas from non-skin areas and then searches for the ear within the skin areas using a template-matching approach. This transformation can emphasize tubular structures such as the spiral of the ear and eyeglass frames. Taking advantage of the elliptical shape of the helix, this method was used to segment the ear region.
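The two-stage idea above (skin filtering first, then template matching restricted to skin regions) can be sketched as follows. The Cr/Cb thresholds and the SSD scoring are illustrative assumptions, not the values used in the surveyed work:

```python
import numpy as np

def skin_mask(ycrcb):
    """Very rough skin classifier on a YCrCb image (H x W x 3, uint8).
    The Cr/Cb thresholds below are a common heuristic, assumed here."""
    cr, cb = ycrcb[..., 1], ycrcb[..., 2]
    return (cr > 133) & (cr < 173) & (cb > 77) & (cb < 127)

def match_template(gray, template, mask):
    """Slide `template` over `gray`, scoring only windows that lie mostly
    on skin pixels; returns the top-left corner of the best match
    (sum of squared differences, lower = better)."""
    th, tw = template.shape
    best, best_pos = np.inf, None
    for y in range(gray.shape[0] - th + 1):
        for x in range(gray.shape[1] - tw + 1):
            if mask[y:y + th, x:x + tw].mean() < 0.5:
                continue  # window is mostly non-skin; skip it
            ssd = np.sum((gray[y:y + th, x:x + tw].astype(float) - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos
```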

In the range images, it was observed that the magnitude of the fringe is larger around the helix and antihelix. They then segmented the contour of the ear using an active contour initialized around the tip of the ear. They speculated that all incorrectly segmented images in these two situations could be correctly segmented using a combination of color and depth information; however, experimental results to confirm this have not been reported.

This technique is widely known in the field of face detection as the Viola-Jones method [Viola and Jones 2004]. They trained an AdaBoost classifier to detect the ear region, even in the presence of occlusions and image-quality degradation (e.g., due to motion blur). They reported a very good detection rate even when there were multiple subjects in the same image.

The main disadvantage of the original Viola-Jones technique is the training time, which in some cases can take several weeks. 2008] modified the original face detection approach to reduce the complexity of the training phase of the naive AdaBoost by two orders of magnitude. They presented experiments showing robust detection in the presence of partial occlusion, noise, and multiple ears at different resolutions.
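The Haar-like features underlying the Viola-Jones detector are evaluated in constant time from an integral image; this is what makes scanning every window position affordable. A minimal numpy sketch of the feature evaluation only (the cascade structure and AdaBoost training are not reproduced):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[y, x] = sum of img[:y, :x]."""
    return np.pad(img.astype(np.int64).cumsum(0).cumsum(1), ((1, 0), (1, 0)))

def box_sum(ii, y, x, h, w):
    """Sum of the h x w box with top-left corner (y, x), in O(1)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect(ii, y, x, h, w):
    """One two-rectangle Haar-like feature: left half minus right half.
    A trained detector thresholds thousands of such features, chosen
    and weighted by AdaBoost; only the evaluation step is shown here."""
    half = w // 2
    return box_sum(ii, y, x, h, half) - box_sum(ii, y, x + half, h, w - half)
```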

Table I. Accuracy of Various Ear Detection Techniques

EAR RECOGNITION SYSTEMS

  • Intensity Based
  • Force field
  • Fourier Descriptor
  • Wavelet Transformation
  • Gabor Filters
  • Scale-Invariant Feature Transform (SIFT)

They reported that the recognition rate for the multiposition ear was 60.75% compared to 43.03% using plain LLE. They reported a 97.7% rank-1 recognition rate in the presence of large bag variations using 60 subjects from the USTB database IV. From each image, the ear part is cropped manually, and there is no need for normalization of the ear area.

The fixed size frame was manually adjusted by eye to surround and crop the ear images. For each subject, one image was used for training where the ear region was detected using external contour matching. They manually cropped the ear from the original images and did some pre-processing such as filtering and normalization.

He extracted the ear contours and center from the ear image, then constructed concentric circles using that center. He defined two feature vectors for the ear based on the points of interest between the various contours of the ear and the concentric circles. Sana and Gupta [2007] used a discrete Haar wavelet transform to extract the texture features of the ear.
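A one-level 2D discrete Haar wavelet transform, of the kind Sana and Gupta [2007] apply, splits an image into low-frequency and directional detail subbands whose coefficients can serve as texture features. A minimal numpy sketch (how the surveyed paper assembles its feature vector from the subbands is assumed, not reproduced):

```python
import numpy as np

def haar2d(img):
    """One level of the 2D discrete Haar wavelet transform.
    Returns the (LL, LH, HL, HH) subbands, each half the size of the
    input in both dimensions (input sides must be even)."""
    a = img.astype(float)
    # transform rows: pairwise averages (low-pass) and differences (high-pass)
    lo = (a[:, ::2] + a[:, 1::2]) / 2.0
    hi = (a[:, ::2] - a[:, 1::2]) / 2.0
    # then transform columns of each half
    ll = (lo[::2] + lo[1::2]) / 2.0
    lh = (lo[::2] - lo[1::2]) / 2.0
    hl = (hi[::2] + hi[1::2]) / 2.0
    hh = (hi[::2] - hi[1::2]) / 2.0
    return ll, lh, hl, hh
```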

This technique makes it possible to consider the changes in the ear images simultaneously along three basic directions. Experimentally, they showed that for the ear modality, their proposed normalization and UCN reduce the EER from ~11.6% to ~7.6%. Using the Carreira-Perpinan database (segmented ear images) [Carreira-Perpinan 1995] they reported a recognition rate of 78.8%.
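For reference, the Equal Error Rate (EER) quoted above is the operating point where the false accept rate equals the false reject rate. A simple sketch of computing it from genuine and impostor score samples, assuming higher score = better match:

```python
import numpy as np

def eer(genuine, impostor):
    """Equal Error Rate: sweep thresholds over the observed scores,
    find where FAR (impostors accepted) and FRR (genuines rejected)
    cross, and return their average at that point."""
    genuine = np.asarray(genuine, float)
    impostor = np.asarray(impostor, float)
    ts = np.unique(np.concatenate([genuine, impostor]))
    far = np.array([np.mean(impostor >= t) for t in ts])
    frr = np.array([np.mean(genuine < t) for t in ts])
    i = np.argmin(np.abs(far - frr))  # closest crossing of the two curves
    return (far[i] + frr[i]) / 2.0
```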

They developed an ear skin color model using a Gaussian Mixture Model (GMM) and clustered the ear color pattern using vector quantization. An ear recognition module used the ear helix/antihelix and Local Surface Patch (LSP) representations [Chen and Bhanu 2007]. In an identification scenario, their algorithm achieved a rank-1 recognition rate of 97.8% using 415 subjects from the UND databases with 1,386 probes.
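The vector-quantization step mentioned above amounts to learning a small codebook of representative colors, typically via k-means. A tiny numpy sketch of that step only (the GMM fitting via EM, and how the surveyed paper combines the two, are not reproduced):

```python
import numpy as np

def kmeans_vq(colors, k=2, iters=20, seed=0):
    """Tiny k-means for vector quantization of color patterns.
    `colors` is an (N, 3) array of pixel colors; returns the k
    codebook vectors (cluster centers)."""
    rng = np.random.default_rng(seed)
    centers = colors[rng.choice(len(colors), k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each color to its nearest center
        d = np.linalg.norm(colors[:, None] - centers[None], axis=2)
        labels = d.argmin(1)
        # recompute each center as the mean of its assigned colors
        for j in range(k):
            if (labels == j).any():
                centers[j] = colors[labels == j].mean(0)
    return centers
```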

Only the 3D geometry that resides within a sphere of a certain radius roughly centered on the earhole is automatically segmented. First, they automatically segment the ear region using template matching and they reconstructed 2.5D images using the Shape from Shading (SFS) scheme.

Fig. 14. Force field line formed by iterations [Hurley et al. 2000].

MULTIBIOMETRICS USING THE EAR MODALITY

  • Frontal Face and Ear
  • Face Profile and Ear
  • Face, Ear, and Third Modality
  • Multi-Algorithmic Ear Recognition
  • Right Ear + Left Ear

For the detection, they first used a 2D Haar-based ear detector, then cropped the corresponding 3D segment. They used a database that contained 1,031 datasets representing 525 subjects: 830 datasets representing 415 subjects from the UND database, and 201 3D polygonal datasets from 110 subjects. The first method combined 3 images of the same modality using majority voting, while the second method merged the outputs of the two modalities using the AND rule.
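The two decision-level fusion rules just mentioned are simple enough to state directly in code; a minimal sketch:

```python
def majority_vote(decisions):
    """Decision-level fusion of several accept/reject decisions from
    the same modality: accept if more than half vote accept."""
    return sum(decisions) > len(decisions) / 2

def and_rule(face_accept, ear_accept):
    """Conservative fusion across two modalities: accept only when
    both the face matcher and the ear matcher accept."""
    return face_accept and ear_accept
```

The AND rule lowers the false accept rate at the cost of more false rejects; majority voting trades the two off more gently.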

The method was evaluated using the largest publicly available databases (the FRGC v2 3D face database and the corresponding ears of the UND database, collections F and G). The ear segment in each frame was independently reconstructed using the Shape from Shading method. 2009] fused 3D local features for ear and face at the score level, using weighted sum rules.

They used the FRGC v2 3D face database and the corresponding ears from the UND databases, collections F and G, and obtained a rank-1 identification rate of 98.71% and a verification rate of 99.68% (at 0.001 FAR) for neutral facial expressions. They also used a synthesized database where the frontal face images were taken from the BANCA database [Bailly-Bailliere et al. They used the USTB database [USTB 2005] and achieved a recognition rate of 96.84% using the weighted sum rule.

They performed decision fusion using a weighted sum rule, where the weights are obtained by solving the corresponding Lagrangian. For the face database, they used the Olivetti Research Lab (ORL) database [Samaria and Harter 1994], which contains 400 images, 10 each of 40 different subjects. For signatures, they used 160 signatures with 8 signatures of 20 individuals from the Rajshahi University database [RUSign 2005].
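Score-level fusion with a weighted sum rule, as used above, generally requires normalizing each matcher's scores to a common range first. A minimal sketch with min-max normalization; the weight-selection schemes in the surveyed papers (e.g., via a Lagrangian) are not reproduced, so the weights here are just inputs:

```python
import numpy as np

def weighted_sum_fusion(scores, weights):
    """Score-level fusion. `scores` is (n_matchers, n_candidates);
    each matcher's row is min-max normalized to [0, 1], then rows are
    combined with convex weights. Returns one fused score per candidate."""
    scores = np.asarray(scores, float)
    lo = scores.min(1, keepdims=True)
    hi = scores.max(1, keepdims=True)
    norm = (scores - lo) / np.where(hi > lo, hi - lo, 1)  # avoid /0
    w = np.asarray(weights, float)
    return (w / w.sum()) @ norm
```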

They tested using the USTB database and achieved a rank-1 recognition rate of 55%, compared to 45% for the KPCA and 30% for the ICA alone. They tested with the remaining 7 images, and reported an accuracy of 95.32% for the fused template versus 88.33% for the non-fused template. They achieved a rank-1 recognition rate of 95.1% by fusion versus 93.3% for the left ear or right ear alone.
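The rank-1 recognition rates quoted throughout are computed the same way: the fraction of probes whose single best-scoring gallery entry carries the correct identity. A minimal sketch:

```python
import numpy as np

def rank1_rate(score_matrix, gallery_labels, probe_labels):
    """Rank-1 identification rate. `score_matrix` is
    (n_probes, n_gallery), higher = more similar; labels give the
    identity of each gallery entry and each probe."""
    best = np.argmax(score_matrix, axis=1)  # top match per probe
    g = np.asarray(gallery_labels)
    return float(np.mean(g[best] == np.asarray(probe_labels)))
```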

Table III. Ear in Multibiometric Systems

OPEN RESEARCH AREAS

  • Hair Occlusion
  • Ear Symmetry
  • Earprint
  • Ear Individuality

Controllable features encode rich discriminative information about local structural texture and provide guidance for shape localization. Their own ear modeling technique was used for the final classification. They conducted a small experiment to demonstrate how merging the results from the two ears can improve performance (footnote 14). They evaluated ear images that were partially occluded at the top, middle, bottom, left, and right sides of the ear, as shown in Figure 20.

During recognition, given the profile image of the human head, the ear was registered and recognized from various features selected by the model. A comparison with PCA was presented to demonstrate the advantage of the proposed model in handling occlusions. The subject's right ear was used as the gallery and the left ear as the probe.

Finally, they presented a case study fusing right-ear-to-right-ear match results with those obtained using the left ear as gallery and the reflected right ear as probe (weighted sum rule). It is commonly argued that earmarks should be unique to an individual because the structure of the ear is unique to an individual. The parts of the ear most commonly found in earprints are the helix, antihelix, tragus, and antitragus, while less obvious features include the earlobe and the crus of the helix (as shown in Figure 22). [Meijerman et al.

However, individualization is disrupted by several factors that cause significant variations in the earprints of the same ear. Changes in the shape and size of the ear are the result of aging [Meijerman et al. Because of these factors, even two impressions of the same ear are not exactly the same.

First, they used a grid system using two anatomical landmarks to standardize marker localization. Labels were assigned to each impression at the points of intersection of grid lines with anatomical structures. Prior to the matching process, manual annotations were performed on impressions and labels to facilitate image segmentation and locating anatomical points.

Fig. 17. Ear thermogram, (c) [Burge and Burger 2000]. IEEE, reprinted with permission.

SUMMARY AND CONCLUSIONS


Figures

  • Fig. 1. The ear biometric has tremendous potential when the side profile of a face image is available.
  • Fig. 2. The human ear develops from auricular hillocks (center) that arise in the 5th week of embryonic development.
  • Fig. 3. External anatomy of the ear. The visible flap is often referred to as the pinna.
  • Fig. 4. The block diagram of a typical ear recognition system.
