Indian Currency Note Denomination Recognition in Color Images
Hanish Aggarwal1, Padam Kumar2
1,2Department of Electronics and Computer Engineering Indian Institute of Technology Roorkee, Roorkee, India
1[email protected], 2[email protected]
Abstract: It has been estimated that every five seconds, a person in the world goes blind. Visually impaired people have a reduced perception of the world around them and face considerable difficulty in carrying out their day-to-day tasks, while their problems often go unnoticed by sighted people. Indian currency notes differ in size by only ten millimetres between two consecutive denominations, which makes it very hard for a blind person to determine the denomination correctly. In addition, the new one rupee coin has the same shape and size as the old fifty paisa coin, and since both are in circulation they have become indistinguishable. These problems make it difficult for visually impaired people to carry out everyday transactions. As part of a currency recognition system for the visually impaired, we have already developed an efficient currency note localization algorithm that localizes currency notes in color images. This paper takes the localized regions of the image and feeds them into the recognition module to determine the denomination of the currency note. We present the technique developed to recognize the denomination of an Indian currency note from a color image, together with the results obtained.
Keywords: Image Processing; Computer Vision; Feature Extraction
1. INTRODUCTION
The currency system has been prevalent in India for a very long time. The Government of India introduced its first paper money, issuing 10 rupee notes in 1861. These were followed by 20 rupee notes in 1864, 5 rupees in 1872, 10,000 rupees in 1899, 100 rupees in 1900, 50 rupees in 1905, 500 rupees in 1907 and 1000 rupees in 1909. In 1917, 1 and 2½ rupee notes were introduced [1].
The Reserve Bank of India (RBI) began note production in 1938, issuing 2, 5, 10, 100 and 1000 rupee notes, while the Government continued to issue 1 rupee notes.
Currently, the Indian currency system has the denominations of Rs. 1, Rs. 2, Rs. 5, Rs. 10, Rs. 20, Rs. 50, Rs. 100, Rs. 500, and Rs. 1000. Each of these denominations is unique in one feature or the other; these features may be color, size, identification marks, etc. It is very easy for sighted people to recognize these features, but not for the visually impaired. Visually impaired people can distinguish between two denominations using the different sizes of the notes, but size variation alone is not enough to determine a currency note flawlessly. In practice, the very small difference between the sizes of consecutive denominations confuses them and leaves them unable to distinguish the currency notes from one another.
The currency notes carry a few special identification marks intended specifically for blind people so that they may recognize the denomination correctly. Every currency note has its denomination engraved at the top right end in a form that is sensitive to touch, but this mark fades away after the currency note has been in circulation for some time. This again makes it difficult for visually impaired people to correctly determine the denomination of the currency note.
A currency recognition system for the visually impaired has been developed in this work. It uses a currency localization technique [2] to extract the currency note from a color image; the technique performs feature-based currency note localization and is implemented using the Image Processing Toolbox available in Matlab.
It has been observed that existing approaches commonly use Neural Networks (NN) to recognize currency notes, in combination with texture- or pattern-based recognition techniques.
The identification of objects in an image is called recognition. This process typically starts with image processing techniques such as noise removal, followed by (low-level) feature extraction to locate lines, regions and possibly areas with certain textures. The difficult part is interpreting collections of these shapes as single objects, e.g. cars on a road, boxes on a conveyor belt or cancerous cells on a microscope slide. One reason this is hard is that an object can appear very different when viewed from different angles or under different lighting. Another problem is deciding which features belong to which object and which belong to the background or to shadows. The human visual system performs these tasks unconsciously, but a computer requires skilful programming and a lot of processing power to approach human performance.
In this paper, the currency recognition system is presented. A camera with a minimum resolution of 640 x 480 is used to capture images of Indian currency notes. These images are given as input to the localization code implemented in Matlab, and the currency note is localized in the image. Subsequently, our currency recognition technique is applied to the localized image. This technique uses color-based recognition to determine the denomination of the currency note.
The remainder of the paper is organized as follows:
Various existing currency recognition techniques that involve texture-, pattern- or color-based recognition are described in Section 2; these have been successfully applied to some non-Indian currencies. Section 3 describes the step-by-step design of the currency recognition system developed in this paper. The problems faced during recognition are discussed in Section 4, along with the results obtained after successful implementation of the proposed technique.
2. RELATED WORK
2.1 Texture Based Recognition Techniques
Texture is a very useful feature for currency recognition. Textural features corresponding to human visual perception are very useful for optimum feature selection and texture analyzer design. A number of texture features have been used quite frequently for image retrieval.
The Tamura features (coarseness, directionality, contrast) are one such set; Tamura coarseness is defined as the average of the coarseness measures at each pixel location inside a texture region. Features of this type can be computed directly from the entire image without any homogeneity constraint, so their performance is not satisfactory in general. As a result, an improved version that represents the coarseness information as a histogram should be considered.
Other common texture features are the multi-resolution simultaneous auto-regressive (MRSAR) model, the Canny edge histogram, and Gabor features. The MRSAR feature is quite powerful in distinguishing different texture patterns, while the Gabor feature uses filters to extract texture information at multiple scales and orientations.
A comparison of the performance of Tamura features, the edge histogram, MRSAR, the Gabor texture feature, and pyramid-structured (PWT) and tree-structured (TWT) wavelet transform features is given in [3]. According to the authors, the experimental results indicate that MRSAR and Gabor features outperform the other texture features. However, to achieve such good performance from MRSAR, the Mahalanobis distance based on an image-dependent covariance matrix has to be used, which increases the feature size and the search complexity. On the other hand, the extraction of the Gabor feature is much slower than that of the other texture features, which limits its use in large databases. In general, Tamura features are not as good as the MRSAR, Gabor, TWT and PWT features.
2.2 Placement Rule
In the past, texture analysis was difficult because of a lack of adequate tools to characterize different scales of texture effectively.
Several texture-based techniques exist. Important work in this area was carried out by Hideyuki Tamura et al. in [4]. According to the authors, a strict definition of visual texture is difficult; its structure is simply attributed to repetitive patterns in which elements or primitives are arranged according to a "placement rule". Hence it can be written as
f = R(e)
where R denotes a placement rule (or relation) and e denotes an element.
A set of features is required by which all input patterns can be measured and which gives well-distributed results. For this purpose, both extremes must be defined for each feature, e.g. coarse versus fine for coarseness. Coarseness in particular is a highly essential factor in texture, and its results should be utilized to improve the other features. For line-likeness, regularity, and roughness, considerable correspondence between computational and psychological measurements was obtained, but more effort is required to describe precisely the texture elements on which these three features depend.
2.3 Pattern Based Recognition Techniques
Pattern recognition draws conclusions based on prior knowledge; one form of this is the classification of objects based on a set of images. A number of techniques exist in the literature that apply pattern recognition to such problems. These techniques are broadly focused on vector quantization based histogram modelling.
Vector quantization (VQ) is a method of sampling a d-dimensional space where each point, xj, in a set of data is replaced by one of L prototype points.
The prototype points are picked such that the sum of the distances (called the distortion) from each data point, xj, to its nearest prototype point is minimized.
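As an illustration of this idea, a minimal Matlab sketch of the VQ assignment and distortion computation is given below; the data and prototype values are randomly generated placeholders, and the sketch is not the exact procedure of the coin recognition work discussed next.

    % Minimal vector-quantization sketch: assign each data point to its
    % nearest prototype and accumulate the distortion (illustrative only).
    data       = rand(200, 3);      % N = 200 points in a d = 3 dimensional space
    prototypes = rand(8, 3);        % L = 8 prototype points
    N = size(data, 1);
    assignment = zeros(N, 1);       % index of the nearest prototype for each point
    distortion = 0;                 % sum of distances to the nearest prototypes
    for i = 1:N
        d = bsxfun(@minus, prototypes, data(i, :));   % L x d differences
        dist = sqrt(sum(d.^2, 2));                    % Euclidean distances
        [dmin, assignment(i)] = min(dist);
        distortion = distortion + dmin;
    end
    fprintf('Total distortion: %.4f\n', distortion);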
Work in this area was carried out by Seth McNeill et al. in [5], who give a method for recognizing coins by pattern recognition. Their program mainly differentiates between the bald eagle on the quarter, the torch of liberty on the dime, Thomas Jefferson's house on the nickel, and the Lincoln Memorial on the penny. During the data collection stage, various background colors, including black, white, red, and blue, were tested for segmentability.
Adobe Photoshop was used to determine the RGB values of each coin and its background, and a segmentation program was then applied to these images. The next step after data collection was coin segmentation and cropping: the coins were segmented from their backgrounds using a modification of Nechyba's code [6], and after segmentation a cropping program located the edges of each coin. Features were then extracted from the coins by convolving texture templates and edge detection templates with each image. Next came training, for which five dimes, nickels, pennies, and quarters were used as training data. The method is reported to be 94% accurate.
2.4 Color Based Recognition Technique
Wei-Ying Ma et al. [7] describe the color histogram (CH) method for an image. The histogram is constructed by counting the number of pixels of each color and describes the global color distribution in an image. It is easy to compute and is insensitive to small changes in viewing position (VP). Since computing a color histogram only involves counting the number of pixels of each color, for an image of resolution m x n the time complexity is O(mn). Its insensitivity to small changes in VP is particularly desirable in this project, as the VP from which the image of the currency note is acquired can change. The color histogram method is well suited when the segregation is to be done between a range of colors and a prominent color, but it may not suit the requirements when segregation is to be done among almost similar colors.
Color histograms also have some limitations: they describe which colors are present in the image and in what quantities, but they provide no spatial information. The color coherence vector is a refinement of the color histogram. In this approach the local properties of the image are taken into consideration, in contrast to the CH method, which is global; regions are formed based on color coherency.
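For reference, a short Matlab sketch of the per-channel color histogram described above is given below, using the Image Processing Toolbox function imhist; the file name is a placeholder.

    % Per-channel RGB color histogram of an image (simple O(mn) pixel counting).
    img = imread('note.jpg');                 % 'note.jpg' is a placeholder file name
    histR = imhist(img(:, :, 1));             % 256-bin histogram of the red channel
    histG = imhist(img(:, :, 2));             % green channel
    histB = imhist(img(:, :, 3));             % blue channel
    % Normalizing by the number of pixels makes the histograms independent of
    % image size; they remain global descriptors with no spatial information.
    numPixels = numel(img(:, :, 1));
    histR = histR / numPixels;
    histG = histG / numPixels;
    histB = histB / numPixels;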
Work in this area was also carried out by John R. Smith et al. in [8], who proposed techniques for color image retrieval. Color indexing extracts the color content of images and videos.
The proposed technique extracts colors from images to form a class of easily indexed meta-data.
The color-indexing algorithm uses the back-projection of binary color sets to extract color regions from images. This technique provides both automated extraction of regions and a representation of their color content. It overcomes some of the problems of color histogram techniques, such as high-dimensional feature vectors, poor spatial localization, and costly indexing and distance computation.
2.5 Shortcomings in Related Work
The currency recognition techniques mentioned above use texture, pattern or other features to recognize currency notes. These techniques require the user to always carry a dedicated machine, and such machines consume a lot of power.
Hence, a currency recognition technique that overcomes these shortcomings is needed. The recognition technique discussed in this paper does not require the image to have exactly the same size as the currency note, and it needs only a single image to determine the currency.
3. DESIGN OF THE CURRENCY RECOGNITION SYSTEM
In India, the Reserve Bank of India (RBI) holds the sole right to print currency notes. Currently, the Indian currency system has the denominations of Rs. 1, Rs. 2, Rs. 5, Rs. 10, Rs. 20, Rs. 50, Rs. 100, Rs. 500, and Rs. 1000. Each of these denominations is unique in one feature or the other; these features may be color, size, identification marks, etc., as described in Table 1.
Color is one of the most important features used for the development of the currency note recognition system. Currency notes have a variety of colors, and out of these one color is more prominent; we use this prominent color to recognize the currency note. For example, the Indian currency note of Rs. 5 is known by its green color, since green is present in prominence. This feature can be used for a currency note recognition system, and we exploit it in the development of our system.
The technique used is based on a mixed approach: it first localizes the currency note in the image and then applies threshold-based algorithms to determine the denomination of the currency note.
The technique is briefly explained in the remaining part of this section. Our system works on the assumption that there is only one currency note present in the field of view of the camera.
There are many steps involved in the proposed currency recognition system. These are shown in Fig. 1.
Table 1. Various features present in Indian Currency Notes.

Serial no. | Denomination | Major color component of the note                 | Size        | Identification mark (left middle section)
1          | Rs. 5        | Green                                             | 117 x 63 mm | -
2          | Rs. 10       | Orange-Violet                                     | 137 x 63 mm | -
3          | Rs. 20       | Red-Orange                                        | 147 x 63 mm | Vertical rectangle
4          | Rs. 50       | Violet                                            | 147 x 73 mm | Square
5          | Rs. 100      | Blue-green at centre, brown-purple at two sides   | 157 x 73 mm | Triangle
6          | Rs. 500      | Olive and yellow                                  | 167 x 73 mm | Circle
7          | Rs. 1000     | Pink                                              | 177 x 73 mm | Diamond
In this paper, all the steps of the proposed recognition technique are independently explained and applied, and the results obtained with the implemented technique are presented.
Figure 1: Currency Localization System.
3.1 Acquiring Images
The image is taken with a mobile camera or a webcam and is stored in a 3D array. The first and second indices of the array give the coordinates of a pixel in the 2D image, and the third index stores the RGB (Red-Green-Blue) intensities for each coordinate. Each element of the array stores an unsigned 8-bit integer (0-255). The limits of the first two indices of the array determine the resolution of the image; in our current scenario, the limit is set to 640 for the first index and 480 for the second index.
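A short Matlab fragment illustrating how such an image is read into a 3D array of unsigned 8-bit integers is given below; the file name is a placeholder for the image produced by the camera.

    % Read the captured image into a 3D uint8 array: the first two indices give
    % the pixel coordinate and the third selects the R, G or B intensity (0-255).
    img = imread('capture.jpg');        % 'capture.jpg' is a placeholder file name
    [h, w, channels] = size(img);
    assert(channels == 3, 'An RGB image is expected.');
    fprintf('Image size: %d x %d pixels, class %s\n', w, h, class(img));
    r = img(100, 200, 1);               % red intensity of the pixel at index (100, 200)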
The images are taken under the following assumptions:
The image is taken under proper lighting conditions.
No occlusion or shadowing is there and image is taken in a clear environment.
Distance of camera is nearly fixed from the object and within a small range of variation.
Resolution of the image is fixed at 640 x 480 so that any basic camera can be used to take the image.
The orientation of the currency note is such that enough of at least one face is visible to provide the data required for further processing.
The currency notes are of good quality, i.e. they are not heavily stained or covered in dust.
The test image used to demonstrate the proposed technique is shown in Fig. 2.
Figure 2. Original image taken by the Camera.
3.2 Currency Note Localization
The image obtained from the camera may not be directly usable for localization and requires enhancement. This involves applying procedures such as "Noise Reduction" [9], "Normalization" and "Contrast Enhancement". These are all standard techniques and can be easily applied.
Next we subtract the background from the image and convert it from RGB to gray. This is done because, to localize a currency note, we only need to know whether a pixel (which can be used to form an edge) is present in the image or not. After this conversion, we detect the edges present in the image using the edge detection techniques provided by the Image Processing Toolbox of Matlab. The common edge detection methods are the Sobel operator, the Prewitt operator and the Canny operator. The Canny operator uses two thresholds to detect strong and weak edges and includes a weak edge in the output only if it is connected to a strong edge. As a result, the method is more robust to noise and more likely to detect true weak edges. The Canny operator is therefore selected in our technique to detect the edges prominent in the note.
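A minimal Matlab sketch of this enhancement and edge-detection stage is given below, using standard Image Processing Toolbox functions; the filter size, the structuring-element radius used for the rough background estimate, and the use of the default Canny thresholds are illustrative assumptions rather than the exact parameters of our implementation.

    % Pre-processing and Canny edge detection (illustrative parameter choices).
    gray = rgb2gray(img);                           % convert the RGB capture to gray
    gray = medfilt2(gray, [3 3]);                   % noise reduction (median filter)
    background = imopen(gray, strel('disk', 25));   % rough estimate of the background
    gray = imsubtract(gray, background);            % background subtraction
    gray = imadjust(gray);                          % contrast enhancement
    edges = edge(gray, 'canny');                    % strong edges plus connected weak edges
    imshow(edges);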
Currency note localization is done by applying a scan-line algorithm to the image after edge detection.
The number of edge pixels in each line is counted while the image is scanned line by line from left to right, and any line whose pixel count exceeds a set threshold is highlighted (marked). The same procedure is applied from top to bottom. Finally, the intersection of the two scans produces a distinct area in the form of a rectangle that surrounds the currency note present in the image; this forms the localized part of the image. The threshold used to mark a line was set after extensive experiments with different thresholds on the given set of currency notes. Since there is a fixed set of denominations of Indian currency notes, this threshold is well justified; the relevant features of the currency notes have not changed over a long period of time and are expected to remain the same for the foreseeable future. The resulting images after applying the localization technique are shown in Figs. 3-4.
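A minimal Matlab sketch of the scan-line idea is given below: edge pixels are counted along every horizontal and vertical line of the Canny edge map, lines whose count exceeds the threshold are marked, and the bounding rectangle of the marked lines gives the localized region. The threshold value shown is a placeholder; the value used in our experiments was tuned as described above.

    % Scan-line localization on the binary edge map 'edges' (placeholder threshold).
    T = 40;                                 % edge-pixel threshold per line (placeholder)
    rowCounts = sum(edges, 2);              % edge pixels in each horizontal line
    colCounts = sum(edges, 1);              % edge pixels in each vertical line
    markedRows = find(rowCounts > T);       % marked horizontal scan lines
    markedCols = find(colCounts > T);       % marked vertical scan lines
    % The intersection of the two scans is a rectangle surrounding the note.
    top = min(markedRows);  bottom = max(markedRows);
    left = min(markedCols); right = max(markedCols);
    noteRegion = img(top:bottom, left:right, :);   % localized part of the image
    imshow(noteRegion);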
Figure 3. Image after applying localization technique.
Figure 4. Image obtained after Currency note Localization.
3.3 Currency Recognition Technique using Color Matching
After the localization of the Indian currency note, the next step in our algorithm is color matching. Every color image contains RGB pixels. For example, if we take the image of a Rs. 100 Indian currency note, the dominating colors of the image are blue and green, since these are the dominating colors of the currency note. When we apply the color matching algorithm to this image, it finds that the maximum numbers of pixels belong to the blue and green colors.
The Color Threshold module is used to remove parts of the image that fall outside a specified color range, and it can be used to detect objects of consistent color values. The interface displays the Red, Green and Blue histograms. Each histogram chart has the pixel value (0-255) on the X axis and the number of pixels (0 to the image size) with that value on the Y axis. Using these histograms, we can filter pixels with unwanted values out of the image, leaving the desired object in view.
If R < R_min_thres or R > R_max_thres then R = 0
If G < G_min_thres or G > G_max_thres then G = 0
If B < B_min_thres or B > B_max_thres then B = 0
Here min_thres and max_thres for the three RGB components are the minimum and maximum threshold limits present in any currency note and are determined by experimenting with various different values.
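A Matlab rendering of this thresholding rule applied to the localized region is sketched below; the six threshold values are placeholders for the experimentally determined limits.

    % Color-threshold module on the localized note (placeholder threshold values).
    R = noteRegion(:, :, 1);  G = noteRegion(:, :, 2);  B = noteRegion(:, :, 3);
    R_min = 60;  R_max = 200;     % placeholder limits; the actual limits are
    G_min = 60;  G_max = 200;     % determined experimentally for each component
    B_min = 60;  B_max = 200;
    R(R < R_min | R > R_max) = 0;   % zero out components outside the accepted range
    G(G < G_min | G > G_max) = 0;
    B(B < B_min | B > B_max) = 0;
    filtered = cat(3, R, G, B);     % image with the unwanted color range removed
    imshow(filtered);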
We have images of the Rs. 100 currency note that are recognized by their color: the Indian Rs. 100 note has blue and green in prominence and is recognized on the basis of its RGB color pixels. We have studied the color patterns of various currency notes and then drawn graphs of their RGB color values. The image resulting after applying the recognition technique is shown in Fig. 5.
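To make the decision step concrete, the sketch below counts, for each pixel of the filtered region, which of the R, G and B components dominates and selects the denomination whose expected dominant colors match. The mapping shown (for example, blue/green dominance implying Rs. 100) is only a simplified illustration of the color-matching idea, not the complete rule set of our system.

    % Simplified dominant-color vote over the filtered note region (illustration only).
    Rd = double(filtered(:, :, 1));
    Gd = double(filtered(:, :, 2));
    Bd = double(filtered(:, :, 3));
    nGreen = nnz(Gd > Rd & Gd > Bd);        % pixels where green dominates
    nBlue  = nnz(Bd > Rd & Bd > Gd);        % pixels where blue dominates
    nRed   = nnz(Rd > Gd & Rd > Bd);        % pixels where red dominates
    [~, idx] = max([nGreen, nBlue, nRed]);
    labels = {'Rs. 5 (green dominant)', ...
              'Rs. 100 (blue-green dominant)', ...
              'Rs. 20 (red-orange dominant)'};   % illustrative mapping only
    fprintf('Recognized as: %s\n', labels{idx});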
4. RESULTS
A large number of images of currency notes, front and back and in different positions, were taken with a mobile camera. The designed algorithm was then applied to these acquired images of Indian currency notes to find the denomination of the currency note. After following the design steps of the proposed recognition scheme described above, the currency note in the image is recognized successfully.
The developed system was tested under the various conditions mentioned below.
Images were taken from varied distances.
Blurred images were also tested by the system.
Different orientations of currency notes were taken.
Images were also taken which had only 60 – 80% of the visibility of the face of the currency note.
Images of currency notes with a little dust on them were taken.
Other objects, such as pieces of paper, were also tried on the system to check whether they are wrongly detected as currency notes.
All these conditions were tested, and the Currency Recognition System was found to work efficiently with 96% accuracy.
The final localized currency note is shown in Fig. 5.
There are also some conditions for the maintenance of the system. These are given below.
If there is a change in the color of a currency note, the threshold values in the algorithm need to be changed.
If a new currency note is released in the market, a corresponding addition to the code is needed.
Figure 5: Image obtained after successful application of Currency note Recognition System.
5. CONCLUSION
We have developed an interactive Currency Recognition System based on localization and color recognition, implemented with the help of MATLAB. Indian currency notes are correctly recognized and their denomination is found with a high level of accuracy. This system offers several advances over existing systems, and we can confirm the following observations.
It is possible to localize the currency note and subtract it from its background.
The system adopts the interactive techniques of Currency Localization and Color Recognition.
The system allows the user to identify the Currency note.
The system is unique in its applications.
The efficiency of our system is 96%.
The currency note has been recognized successfully by the proposed technique. The system can be enhanced by incorporating template matching. It is further planned to recognize the currency note from the localized image using the remaining features of the currency notes as well.
REFERENCES
[1]. http://www.rbi.org.in/currency/ (accessed October 2011).
[2]. Hanish Aggarwal and Padam Kumar, "Localization of Indian Currency Note in Color Images", ICCCNT 2012 (unpublished).
[3]. Wei-Ying Ma and HongJiang Zhang, "Benchmarking of Image Features for Content-based Retrieval", Hewlett-Packard Laboratories, 1501 Page Mill Road, Palo Alto, CA 94304-1126.
[4]. Hideyuki Tamura, Shunji Mori, and Takashi Yamawaki, "Textural Features Corresponding to Visual Perception", IEEE Transactions on Systems, Man, and Cybernetics, 1978.
[5]. Seth McNeill, Joel Schipper, Taja Sellers, and Michael C. Nechyba, "Coin Recognition using Vector Quantization and Histogram Modelling", Machine Intelligence Laboratory, University of Florida, Gainesville, FL 32611.
[6]. Michael C. Nechyba, "Vector Quantization: a Limiting Case of EM", EEL6825: Pattern Recognition Class Material, Fall 2002.
[7]. Jing Huang, S. Ravi Kumar, Mandar Mitra, Wei-Jing Zhu, and Ramin Zabih, "Image Indexing Using Color Correlograms", Cornell University, Ithaca, NY 14853.
[8]. John R. Smith and Shih-Fu Chang, "Tools and Techniques for Color Image Retrieval", Columbia University, Department of Electrical Engineering and Center for Telecommunications Research, New York, NY 10027.
[9]. Richard Alan Peters II, "A New Algorithm for Image Noise Reduction using Mathematical Morphology", IEEE Transactions on Image Processing, Vol. 4, No. 3, pp. 554-568, May 1995.