CHAPTER 4 SYSTEM DESIGN
4.4 Flow Charts of this System
The flowcharts of the proposed system are described below, along with their pseudocode.
4.4.1 Overall Flowchart
The overall flowchart of the system is shown in Fig 4.2. The system begins processing by taking an input image. The input image is usually distorted by the camera lens: optical distortion is caused by the optical design of the lens and is therefore often called lens distortion, while perspective distortion is caused by the position of the camera relative to the subject, or by the position of the subject within the image frame. This distortion is removed by further processing to obtain an undistorted image. For edge detection, the image is run through color filters and an edge detection algorithm. We have used the Canny edge detection algorithm and further improved the result using the Sobel operator. The warp points are then set to generate the image template. The image is then passed through the CNN classifier for a continuous lane detection approach and overlaid with the original image to show the output.
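An illustrative Python sketch of the undistortion and perspective-warp stages is given below, assuming OpenCV (cv2) is available and that the camera calibration (camera_matrix, dist_coeffs) and the four warp points have already been obtained; the function and variable names are placeholders, not the exact implementation of this system.

import cv2
import numpy as np

def undistort_and_warp(img, camera_matrix, dist_coeffs, src_pts, dst_pts):
    """Remove lens distortion, then warp to a bird's-eye view."""
    # Correct optical (lens) distortion using the calibrated camera model.
    undistorted = cv2.undistort(img, camera_matrix, dist_coeffs)

    # Map the four warp points onto a rectangle to obtain a top-down
    # (bird's-eye) view of the road ahead.
    M = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    h, w = undistorted.shape[:2]
    return cv2.warpPerspective(undistorted, M, (w, h))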
[Flowchart: Start → Input Distorted Image → Processing → Undistorted Image → Color Filters → Edge Detection → Stable Image → Warp Point → Perspective Warp (Bird's-Eye View) → Calculation of Picture Elements → Curve Fitting (Using 2nd-Order Polynomial) → Feeding Data Through CNN → Overlaying with Original Image (Output) → Combining Both Outputs → End]
Fig 4.2 Overall Flowchart
4.4.2 Image Acquisition and Filtering
The image acquisition and filtering process is shown in Fig 4.3. This flowchart demonstrates the process of gathering an image and filtering it. At first we take an RGB color image and convert it to 2-D, i.e. greyscale, for processing. The image is normalized and segmented using Python functions with the help of NumPy. After thresholding the image, we prepare it for edge detection.
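A minimal NumPy/OpenCV sketch of this acquisition and filtering stage is given below; the threshold value is an assumed placeholder, and the orientation and frequency estimation steps of Fig 4.3 are omitted for brevity.

import cv2
import numpy as np

def acquire_and_filter(rgb_img):
    # Convert the RGB image to 2-D space (greyscale).
    grey = cv2.cvtColor(rgb_img, cv2.COLOR_RGB2GRAY).astype(np.float32)

    # Normalize pixel intensities to the full 0-255 range using NumPy.
    norm = 255 * (grey - grey.min()) / (grey.max() - grey.min() + 1e-6)
    norm = norm.astype(np.uint8)

    # Global threshold (the value 128 is an assumed placeholder) to
    # prepare the image for edge detection.
    _, binary = cv2.threshold(norm, 128, 255, cv2.THRESH_BINARY)
    return binary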
[Flowchart: Start → Taking RGB Image → Converting to 2-D Space → Normalization → Image Segmentation → Orientation of Image Estimation → Frequency Image Estimation → Thresholding → End]
Fig 4.3 Image Acquisition and Filtering Process
[Flowchart: Start → Applying Gaussian Filter → Foreground Refining → Applying Gradient on X and Y Axes → Intensity of Gradient → Non-Maximal Suppression → Applying Hysteresis → Calculating Primary Edges → Sobel Operator]
Fig 4.4 Canny Edge Detection
[Flowchart: Primary Edge Image → Applying Percentage Blur → Processed Gradient Image → Remapping Image Pixels → Thresholding Image → Corrected Edge Output → End]
Fig 4.5 Sobel Operator
4.4.3 Edge Detection
To perform edge detection, we first apply a Gaussian filter over the converted 2-D image to reduce noise, as shown in Fig 4.4. This helps in refining the foreground of the image. We then apply a gradient on both the X and Y axes; the intensity of these gradients helps us generate a cleanly edge-detected image. Finally, we apply hysteresis over the image to retain values along continuous straight lines and eliminate discontinuous regions.
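These steps correspond to the standard Canny pipeline; a brief sketch using OpenCV is shown below, where cv2.Canny internally performs the gradient, non-maximal suppression, and hysteresis stages, and the two thresholds are assumed placeholder values.

import cv2

def detect_primary_edges(grey_img, low=50, high=150):
    # Gaussian filter to reduce noise before computing gradients.
    blurred = cv2.GaussianBlur(grey_img, (5, 5), 0)

    # cv2.Canny computes the X/Y gradients and their intensity, applies
    # non-maximal suppression, and performs hysteresis thresholding.
    return cv2.Canny(blurred, low, high)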
This output is then fed through the Sobel operator, as shown in Fig 4.5. The Sobel operator helps us remove unwanted edge lines, such as those belonging to the background. A Gaussian blur is first applied over the image, which thins out the non-deep edge lines. We then apply the gradient again and remap the picture elements, and the final image is obtained by thresholding and correcting the edge output.
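A sketch of this Sobel refinement stage is given below, assuming OpenCV; the blur kernel and threshold are illustrative values rather than the tuned parameters of the actual system.

import cv2
import numpy as np

def refine_with_sobel(primary_edges, thresh=60):
    # Blur to thin out the shallow (non-deep) edge lines.
    blurred = cv2.GaussianBlur(primary_edges, (3, 3), 0)

    # Gradients along the X and Y axes using the Sobel operator.
    gx = cv2.Sobel(blurred, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(blurred, cv2.CV_64F, 0, 1, ksize=3)

    # Gradient magnitude, remapped to 8-bit picture-element values.
    mag = np.sqrt(gx ** 2 + gy ** 2)
    mag = np.uint8(255 * mag / (mag.max() + 1e-6))

    # Threshold to keep only the strong, corrected edge output.
    _, corrected = cv2.threshold(mag, thresh, 255, cv2.THRESH_BINARY)
    return corrected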
4.4.4 Tracking Edge
Edge tracking is performed to provide a continuous path for the autonomous vehicle. After getting the output from the Sobel operator, we use a second-order polynomial to fit a curve over the original image. This helps us generate a template of how a road lane should look. We use distance transformation and motion transformation over the images to detect the separate lane markings in the middle. The middle markings are not continuous; hence they must be tracked seamlessly. After determining the peaks, the system detects the lane by drawing lines. The extracted line segments in the image are associated with particular bins in a Hough transform; the return value, lines, is a structure array whose length equals the number of merged line segments found, as shown in Fig 4.6. The data are then fed through the CNN. The CNN helps us classify the inputs and outputs according to its constituent neurons; it can identify data more precisely than other algorithms because of its capability to segment the inputs and outputs.
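The curve-fitting and line-extraction steps can be sketched as follows, assuming NumPy and OpenCV; the Hough transform parameters are assumed placeholders, and cv2.HoughLinesP stands in for the structure-array-returning Hough routine described above.

import cv2
import numpy as np

def fit_lane_curve(edge_img):
    # Fit a second-order polynomial x = f(y) through the edge pixels,
    # giving a continuous curve even where lane markings are broken.
    ys, xs = np.nonzero(edge_img)
    coeffs = np.polyfit(ys, xs, 2)
    plot_y = np.arange(edge_img.shape[0])
    return np.polyval(coeffs, plot_y)

def detect_line_segments(edge_img):
    # Probabilistic Hough transform: each returned entry is one merged
    # line segment, associated with a bin in Hough space.
    return cv2.HoughLinesP(edge_img, rho=1, theta=np.pi / 180,
                           threshold=50, minLineLength=40, maxLineGap=20)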
4.4.5 Feeding Edge Data to CNN
Each image the CNN processes results in a vote. The amount of wrongness in the vote, the error, tells us how good our features and weights are. The features and weights can then be adjusted to make the error smaller. Each value is adjusted a little higher and a little lower, and the new error is computed each time; whichever adjustment makes the error smaller is kept. After doing this for every feature pixel in every convolutional layer and every weight in every fully connected layer, the new weights give an answer that works slightly better for that image. This is then repeated with each subsequent image in the set of labeled images, as shown in Fig 4.7.
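In practice, the adjust-and-test procedure described above is carried out efficiently in a single pass by backpropagation (gradient descent). A minimal Keras sketch is shown below; the layer sizes, input shape, and the train_images/train_labels names are assumptions for illustration, not the exact network used in this system.

import tensorflow as tf

# Small CNN classifier; each output is a "vote" for a class.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# The loss is the "amount of wrongness" in each vote; the optimizer
# nudges every feature weight in the direction that makes it smaller.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Repeated over the whole set of labeled images (assumed variable names):
# model.fit(train_images, train_labels, epochs=10)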
[Flowchart: Start → Input Corrected Edge Data → Curve Fitting Using 2nd-Order Polynomial → Distance Transformation → Motion Transformation → Template Execution → Region Update → Converting 2-D Color Space to 3-D Color Space → Overlaying with Original Image → Feeding Data to CNN → End]
Fig 4.6 Tracking Edge Algorithm
[Flowchart: Image Template → First Input Data → Classification Layer → Proofing Data (With Respect to Trained Database) → Weighting Data → Data Classification → Proofing Data (With Respect to Labeled Data) → Filter → First Output Data → End]
Fig 4.7 Feeding Data Through CNN
Quirks that occur in a single image are quickly forgotten, but patterns that occur in lots of images get baked into the features and connection weights. The output depends on the variety of labeled images: it is distributed according to the weight of each image compared with the labeled images used in training.