LIST OF FIGURES
Figure 4.2.3.4 DataLoader training and testing code snippets
Figure 4.2.4.1 Network architecture of CRNN license plate recognition model
Figure 4.2.4.2 Sequence of license plate number labels in 26 time steps
Figure 4.2.4.3 Comparison of size parameters in CNN layers between ...
LIST OF TABLES
LIST OF SYMBOLS
LIST OF ABBREVIATIONS
Introduction
- Background Information
First, a license plate localization model was built to identify the location of license plates in an image. Second, a license plate recognition model was built to recognize the characters in the localized license plate image. Local "A" and "P" series license plate data were collected to improve dataset diversity and the locality coverage of the recognition model.
License plates recognized at the two checkpoints will be matched to calculate the average speed using time-distance approximation. We leverage the existing Malaysian license plate dataset owned by Recogine Technology Sdn Bhd to build a dataset containing license plates from multiple regions in Malaysia. Finally, for license plate recognition, a robust model that generalizes across localities can be built to improve the accuracy of identifying license plates from different states in Malaysia.
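The time-distance approximation itself reduces to a single relation. A minimal statement of the idea (the symbols d, t_in, t_out and v_limit below are notation introduced here for illustration: the known distance between the check-in and check-out cameras, the two recognition times for the same plate, and the speed limit):

```latex
\bar{v} = \frac{d}{t_{\text{out}} - t_{\text{in}}},
\qquad \bar{v} > v_{\text{limit}} \;\Rightarrow\; \text{flagged as speeding}
```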
Literature Review
Speeding behaviour among drivers at the AES camera and camera-free zones
- License Plate Localization
The approach taken by [10] and [11] was to capture the license plate image by pointing the camera directly at the license plate. Since the number plate is generally designed as a rectangle, this method used the geometric properties of the number plate shape for localization. After license plate localization, character segmentation was applied repeatedly until each character of the license plate was separated into an individual sub-image.
There were cases where the license plate extracted in the localization stage had problems, such as unwanted regions above and below the characters; horizontal projection was used to remove the upper and lower boundaries of the extracted license plate image. In [13], the segmentation process involved binarization and contouring of the extracted license plate before detecting the rectangular region.
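To make the horizontal-projection step concrete, the following is a minimal sketch of the general technique rather than code from the cited works; the use of OpenCV with Otsu binarization, the function name trim_plate_rows and the row threshold are assumptions.

```python
import cv2
import numpy as np

def trim_plate_rows(plate_bgr, row_thresh=0.05):
    """Remove the upper and lower boundaries of an extracted plate image by
    keeping only rows whose horizontal projection exceeds a small threshold."""
    gray = cv2.cvtColor(plate_bgr, cv2.COLOR_BGR2GRAY)
    # Otsu binarization: characters become foreground (white) pixels.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Horizontal projection: fraction of foreground pixels in each row.
    row_profile = binary.sum(axis=1) / (255.0 * binary.shape[1])
    rows = np.where(row_profile > row_thresh)[0]
    if rows.size == 0:
        return plate_bgr  # nothing detected; return the image unchanged
    top, bottom = rows[0], rows[-1] + 1
    return plate_bgr[top:bottom, :]
```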
System Methodology
Working Principle of Catch using Time-Distance Approximation
First, the convolutional layers automatically extract a feature sequence from each input license plate image. Second, the recurrent layers make a prediction for each frame of the feature sequence and output a character score for each time step, represented as a matrix. The feature maps produced by the convolutional layers were used to extract a sequence of feature vectors from the input license plate image.
Specifically, each feature vector in the feature sequence was generated from left to right over the feature maps, column by column. To be more precise, the i-th feature vector corresponded to the i-th column of the feature maps. As a result, each column of the feature maps corresponded to a receptive field in the original input image.
As illustrated in Figure 3.2.1.1 below, each vector in the feature sequence was associated with a receptive field, and thus can be considered the image descriptor for that particular region. The recurrent layers predicted a label distribution y_t for each frame x_t in the feature sequence x = (x_1, ..., x_T), giving y = (y_1, ..., y_T). Figure 3.2.2.1 (b) shows the structure of a bidirectional LSTM. Each time the recurrent layer receives a frame x_t of the feature sequence, it updates its internal state h_t with a nonlinear function g that takes both the current input x_t and the past state h_(t-1) as inputs: h_t = g(x_t, h_(t-1)).
As shown in Figure 3.2.2.1 (a), an LSTM consists of a memory cell and three multiplicative gates, namely the input, output and forget gates. Stacking multiple bidirectional LSTMs yields a deep bidirectional LSTM, which captures a higher level of abstraction than a shallow one and thus can achieve significant performance improvements in the text recognition task. Taking Figure 3.2.1.1 as an example, the wide character "B" may require several consecutive receptive fields for a complete description, while the character "1" occupies only one receptive field.
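To make this data flow concrete, below is a minimal PyTorch sketch of a CRNN-style model in the spirit of Figure 4.2.4.1: convolutional layers produce column-wise feature vectors, a bidirectional LSTM processes them, and a linear layer emits character scores per time step. The layer sizes, pooling choices and class count are illustrative assumptions, not the exact configuration of the report's model, and the usual CTC decoding of the per-time-step scores is omitted here.

```python
import torch.nn as nn

class CRNNSketch(nn.Module):
    """CNN feature extractor -> column-wise feature sequence -> bidirectional
    LSTM -> per-time-step character scores. Sizes are illustrative only."""

    def __init__(self, num_classes):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1), (2, 1)),        # halve height only, keep width (more time steps)
            nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, None)),     # collapse the height dimension to 1
        )
        self.rnn = nn.LSTM(input_size=256, hidden_size=256, num_layers=2,
                           bidirectional=True, batch_first=True)
        self.fc = nn.Linear(512, num_classes)    # character scores for each time step

    def forward(self, x):                        # x: (N, 1, H, W) grayscale plate image
        feats = self.cnn(x)                      # (N, 256, 1, W')
        seq = feats.squeeze(2).permute(0, 2, 1)  # (N, W', 256): one vector per column
        out, _ = self.rnn(seq)                   # (N, W', 512)
        return self.fc(out)                      # (N, W', num_classes)
```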
For example, ambiguous characters such as 'O' and '0' can be distinguished by comparing the positions of the characters within the license plate number, rather than recognizing each character individually.
System Design
- License Plate Localization using TensorFlow Object Detection API
- Installation and Setup
- Data Collection
At this point, the training dataset was annotated with XML files containing the bounding box coordinates of the license plates in the images. We used the existing Malaysian license plate dataset owned by Recogine Technology Sdn Bhd to train a license plate recognition model. However, as shown in the frequency analysis performed on the license plate dataset below, the character 'W' appeared in almost 90% of the samples, while each of the remaining characters appeared in only 20-30% of the samples.
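A character-frequency check of this kind can be reproduced with a few lines of Python. The sketch below assumes the ground-truth plate numbers are available as plain strings; the variable name labels, the hypothetical example plates, and the per-plate presence counting are illustrative choices rather than the report's exact script.

```python
from collections import Counter

# Hypothetical example labels; in practice these would be the ground-truth
# plate numbers of the whole dataset.
labels = ["WUM1234", "WXY5678", "ABC9012", "PGA3456"]

# For each character, count how many plates contain it at least once.
presence = Counter()
for plate in labels:
    presence.update(set(plate))

total = len(labels)
for char, count in sorted(presence.items(), key=lambda kv: -kv[1]):
    print(f"{char}: appears in {count / total:.0%} of plates")
```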
Therefore, to improve the diversity of license plates, 4000 samples each of 'A' and 'P' license plates were crowdsourced. The final dataset contained a total of 33720 license plate images with dimensions of 240 px (width) x 120 px (height). First, all license plate images were organized into a folder and renamed with the file name format label_number.jpg.
Then the training and test license plate images were converted to the LMDB data format. To ensure that the LMDB dataset was labeled with the correct ground truth, a license plate sample was randomly selected to check its labeled license plate number. The width of the CNN output feature map was 26, which means that the features of the input license plate image were extracted over 26 time steps.
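The conversion and the random spot check could look roughly like the sketch below, which uses the lmdb Python package and the label_number.jpg naming convention mentioned above. The key naming scheme, map_size and function names are assumptions for illustration, not the report's actual script.

```python
import random
from pathlib import Path

import lmdb

def images_to_lmdb(image_dir, lmdb_path, map_size=1 << 32):
    """Store each plate image and its label (parsed from the file name) in LMDB."""
    env = lmdb.open(lmdb_path, map_size=map_size)
    paths = sorted(Path(image_dir).glob("*.jpg"))
    with env.begin(write=True) as txn:
        for idx, path in enumerate(paths):
            label = path.stem.split("_")[0]      # e.g. "ABC1234_0007.jpg" -> "ABC1234"
            txn.put(f"image-{idx:09d}".encode(), path.read_bytes())
            txn.put(f"label-{idx:09d}".encode(), label.encode())
        txn.put(b"num-samples", str(len(paths)).encode())
    env.close()

def spot_check(lmdb_path):
    """Randomly pick one sample and print its ground-truth label, mirroring the
    sanity check described in the text."""
    env = lmdb.open(lmdb_path, readonly=True, lock=False)
    with env.begin() as txn:
        n = int(txn.get(b"num-samples"))
        idx = random.randrange(n)
        print(idx, txn.get(f"label-{idx:09d}".encode()).decode())
```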
In this function, text represented the recognized license plate number, while cap_time represented the current time. In simple terms, when a license plate is recognized, the plate number along with the current time is recorded in a CSV file. Next, the license plate numbers in check_in_plate and check_out_plate are compared.
If the license plate numbers match, their respective check_in_time and check_out_time are read to calculate the time difference.
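Putting the matching step together with the time-distance approximation, a minimal sketch could look as follows. The CSV layout (one plate and one cap_time per row), the file names check_in_plate.csv and check_out_plate.csv, the timestamp format, and the distance and speed-limit constants are all assumptions for illustration.

```python
import csv
from datetime import datetime

DISTANCE_KM = 1.0                  # assumed distance between the two checkpoints
SPEED_LIMIT_KMH = 60               # assumed speed limit
TIME_FMT = "%Y-%m-%d %H:%M:%S"     # assumed cap_time format

def load_records(csv_path):
    """Read plate -> capture time from a CSV whose rows are: plate, cap_time."""
    records = {}
    with open(csv_path, newline="") as f:
        for plate, cap_time in csv.reader(f):
            records[plate] = datetime.strptime(cap_time, TIME_FMT)
    return records

check_in = load_records("check_in_plate.csv")    # assumed file names
check_out = load_records("check_out_plate.csv")

for plate, check_in_time in check_in.items():
    check_out_time = check_out.get(plate)
    if check_out_time is None:
        continue                                 # plate not seen at the check-out point
    hours = (check_out_time - check_in_time).total_seconds() / 3600
    if hours <= 0:
        continue                                 # guard against bad or identical timestamps
    avg_speed = DISTANCE_KM / hours
    if avg_speed > SPEED_LIMIT_KMH:
        print(f"{plate}: average speed {avg_speed:.1f} km/h exceeds the limit")
```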
System Implementation
Hardware Setup
As shown in Figure 5.1.3 above, the image of the car was captured from a wide angle so that its speed could be calculated on the spot. The camera faces the road perpendicularly so that the distance measurement is free of obstacles. When a car enters the check-in point, the camera detects its license plate and the system records the detected car plate.
If the system detects that a car has exceeded the speed limit, the car's information is displayed in the list of speeding violations.
System Evaluation and Discussion
Automatic License Plate Recognition
- Evaluation Criteria
- Comparison between Candidate Models
- Evaluation Criteria
- Groundtruth
To compare the performance of Catch and the traditional speed trap in detecting overspeeding cars, we conducted a synthetic experiment at the UTAR Kampar campus by inviting 10 users to drive around the entire campus. For comparison purposes, we also installed a third camera at the check-in point to simulate the traditional speed trap system used in Malaysia. This demonstrated the limitations of the current speed trap system in detecting the excessive speed of a car.
In addition to the synthetic experiment, we also conducted a real-world experiment by installing cameras around the campus to compare the performance of the traditional speed trap and Catch in detecting overspeeding. The figure above shows the number of speeding cars recorded by the traditional speed trap and by Catch in one day. In that day, 57 cars drove at speeds in excess of the standard speed limit outside the camera zone and were not detected by the traditional speed trap.
The figure below shows the number of speeding cars detected by the traditional speed trap and by Catch from week 1 to week 4. Meanwhile, the following table summarizes how many speeding cars Catch caught compared to the traditional speed trap in each week. It can be seen that although the traditional speed trap detects a significant number of speeding cars, it is still significantly fewer than what Catch detected.
Within a week, Catch was able to catch about 440 to 490 speeding cars that had escaped the traditional speed trap.
[Table: number of overspeeding cars detected in each of Weeks 1 to 4 by the traditional speed trap and by Catch, and how many more Catch caught than the traditional speed trap.]
[Table: total number of speeding cars detected in the month by each system, and how many more Catch caught than the traditional speed trap.]
Based on our experimental results, a speeding car will not be detected by the traditional speed trap as long as it slows down to the speed limit in time when passing through the camera zone.
Conclusion
- Overview
The proposed speeding detection system, Catch, was only suitable to be implemented at the two endpoints of an undivided road with no intersections in between. For highways with many exits, instead of deploying many pairs of Catch cameras along the highway, we can use the check-in and check-out times recorded as drivers enter or exit the toll plazas to calculate their average speed along the highway.
REFERENCES
Iranmanesh, "Malaysian Automatic Number Plate Recognition System Using Pearson Correlation," presented at the IEEE Symposium on Computer Applications &.
Wijers, "Average Speed Enforcement: Improves Road Safety and Gets Better Public Support," Making Traffic Safer.
Yusof, "The Automated Speed Enforcement System," Journal of the Eastern Asia Society for Transportation Studies, vol.
Kartiwi, "Design and Implementation of Automatic License Plate Recognition on Android Platform," presented at the International Conference on Computer and Communication Engineering (ICCCE), Kuala Lumpur, Malaysia, 8
Su, "Fast license plate location and recognition using wavelet transform on Android," presented at the IEEE Conference on Industrial Electronics and Applications (ICIEA), Singapore, Nov.
Do et al., "Automatic license plate recognition using mobile device," presented at the International Conference on Advanced Communication Technologies (ATC), Hanoi, Vietnam, Dec.
Chen, "Identification of Chinese license plates based on the Android platform," presented at the International Conference on Computational Intelligence and Communication Technology (CICT), Ghaziabad, India, July.
Patricia, "A Review of Automatic License Plate Recognition System in Mobile Phone Based Platforms," Journal of Telecommunications, Electronics and Computer Engineering, vol. 10, no. 3-2, pp.
FINAL YEAR PROJECT BIWEEKLY REPORT
- WORK DONE
- WORK TO BE DONE Car plate localization
- PROBLEMS ENCOUNTERED
- SELF EVALUATION OF THE PROGRESS Still on track
- WORK TO BE DONE Over-speeding detection
- WORK TO BE DONE System Implementation
- WORK TO BE DONE System Implementation
Captured car plates cannot be matched properly to calculate the average speed due to blurred images.
Parameters of originality required and limits approved by UTAR are as follows:
i) Overall similarity index is 20% and below, and
ii) Matches of individual sources listed must be less than 3% each, and
iii) Matching texts in continuous blocks must not exceed 8 words.
Note: Parameters (i) – (ii) will exclude citations, bibliography and text matches that are less than 8 words.
Note: Supervisor/candidate(s) are required to deliver a soft copy of the full set of the originality report to the faculty/department. Based on the above findings, I hereby declare that I am satisfied with the originality of the final project report submitted by my students as mentioned above. Form Title: Supervisor's Comments on Originality Report Generated by Turnitin for Final Project Report Submission (for undergraduate programs).
UNIVERSITI TUNKU ABDUL RAHMAN