
5.1 Experimental Design


Sections 5.1.1–5.1.4 discuss the evaluation of down sampling and spatial trimming, Section 5.1.5 discusses the evaluation of the trained ResNet50 model, and Section 5.1.6 discusses the evaluation of the effect of the two caching methodologies on the number of AWS requests.


The evaluation of down sampling and spatial trimming includes four stages. The first stage measures the time elapsed for every step in the preprocessing: converting the image to grayscale, detecting the face, and cropping the face. The next stages measure the time required to access the AWS Rekognition service with images processed by the different techniques, followed by the bandwidth measurement and the accuracy evaluation. The time measurement and accuracy evaluation align with the objectives of improving the overall processing time to access the FER service while maintaining the accuracy of FER, whereas the bandwidth measurement supports the objective of reducing the bandwidth required to send each FER request, saving bandwidth for the user.


5.1.1 Measuring the Time of Image Processing

[Flowchart: an iterative do-while loop times the grayscale conversion of every image in the dataset, accumulates the total time and image count across loops, recomputes the running average, appends the count, average time, and average loss to a DataFrame, and stops once the loop has run more than once and the change in the average falls below the tolerance of 10 microseconds; the DataFrame is then exported to a CSV file.]


Figure 5.1.1 Flow of Measuring the Time of Image Processing

The flowchart above illustrates how the processing time required to convert an image to grayscale is measured. The measurement is driven by an iterative do-while loop, where processing all of the images in the dataset counts as one loop. Within each loop, the time spent processing each image is summed and the number of processed images is recorded. At the end of a loop, the accumulated processing time is divided by the total number of images processed to obtain the average time to convert an image to grayscale.

In addition, the total processing time and the number of processed images are accumulated across loops, and the average accumulated processing time of each loop is recorded so that the loss in average processing time can be compared with the next loop, which determines the stopping criterion. The loop must run at least twice and stops only when the difference between the average cumulative processing times of two successive loops is less than 10 microseconds, ensuring that the reported average processing time is consistent and dependable. In each loop, the average processing time, the number of images processed, and the loss in the average are appended to a DataFrame, and the DataFrame is exported to a CSV file for analysis at the end of the whole process.

The flowchart shows the flow for measuring the processing time to convert an image to grayscale; the same logic is applied to measure the processing time to detect the face and to crop the face, simply by timing those processing steps instead.
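A minimal sketch of this measurement loop is shown below. It assumes the dataset is a directory containing only image files, that OpenCV (cv2) performs the grayscale conversion, and that pandas holds the per-loop records; the paths, function names, and CSV file name are illustrative rather than taken from the actual implementation.

    import os
    import time

    import cv2
    import pandas as pd

    DATASET_DIR = "dataset/"    # assumed directory containing only image files
    TOLERANCE = 10e-6           # 10 microseconds, expressed in seconds


    def to_grayscale(image):
        """The processing step being timed (grayscale conversion in this example)."""
        return cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)


    def measure_average_time(process=to_grayscale, output_csv="grayscale_time.csv"):
        records = []
        prev_loss = 0
        average = 0
        accumulate = 0
        total_count = 0
        loop = 1
        while True:
            loop_total = 0
            count = 0
            for name in os.listdir(DATASET_DIR):
                image = cv2.imread(os.path.join(DATASET_DIR, name))
                start = time.perf_counter()
                process(image)
                end = time.perf_counter()
                loop_total += end - start
                count += 1
            # Accumulate across loops and recompute the running average.
            accumulate += loop_total
            total_count += count
            new_average = accumulate / total_count
            current_loss = abs(average - new_average)
            records.append({"count": total_count,
                            "average_time": new_average,
                            "average_loss": current_loss})
            # Stop only after the second loop, once the change in the
            # average falls below the tolerance.
            if loop > 1 and prev_loss != 0 and current_loss < TOLERANCE:
                break
            prev_loss = current_loss
            average = new_average
            loop += 1
        pd.DataFrame(records).to_csv(output_csv, index=False)


    measure_average_time()

The same measure_average_time function can be reused for the face detection and face cropping steps by passing a different process function.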


5.1.2 Measuring the Time Elapsed to Access AWS Rekognition

[Flowchart: the same convergence loop as Figure 5.1.1, but the timed step is sending each image to AWS Rekognition and obtaining the result; the count, average time, and average loss per loop are appended to a DataFrame and exported to a CSV file once the change in the average falls below the time tolerance.]


Figure 5.1.2 Flow of Measuring the Time Elapsed to Access AWS Rekognition

The flowchart above shows how the time elapsed to access the AWS Rekognition service is measured. The measurement likewise uses an iterative do-while loop, with one loop consisting of submitting every image in the dataset to AWS Rekognition. Within each loop, the time spent sending each image and receiving the result is summed, and the number of requests made is recorded. At the end of a loop, the accumulated time is divided by the total number of requests issued to produce the average time elapsed to access AWS Rekognition.

The total time spent and the total number of requests are also accumulated across loops, and the average cumulative time elapsed of each loop is recorded and compared with that of the next loop to determine the stopping criterion. The loop must run at least twice and terminates only when the difference in the average cumulative time elapsed between two successive loops is less than 10 microseconds, so that the reported average time elapsed is consistent and dependable. At the end of each loop, the average time elapsed, the number of requests, and the loss in the average are appended to a DataFrame, and the DataFrame is exported to a CSV file for analysis of the changes.

The same procedure is repeated with the original colour images, grayscale images, face cropped images, and face cropped grayscale images to determine how long each type of image takes to access AWS Rekognition.
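As a rough illustration, the sketch below shows how a single request could be timed with boto3, assuming the image bytes are sent directly rather than via an S3 bucket; the outer convergence loop is the same as in Section 5.1.1, so only the timed step is shown.

    import time

    import boto3

    rekognition = boto3.client("rekognition")


    def timed_detect_faces(image_path):
        """Send one image to AWS Rekognition and return the response and the
        time elapsed for the request."""
        with open(image_path, "rb") as f:
            image_bytes = f.read()
        start = time.perf_counter()
        response = rekognition.detect_faces(
            Image={"Bytes": image_bytes},   # raw image bytes, no S3 bucket needed
            Attributes=["ALL"],             # "ALL" includes the Emotions field
        )
        end = time.perf_counter()
        return response, end - start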


5.1.3 Measuring the Bandwidth Required to Access AWS Rekognition

[Flowchart: the same convergence loop, but recording the number of bytes sent before and after each AWS Rekognition request; the per-request bandwidth is accumulated and averaged across loops, with a tolerance of 50 bytes on the change in the average, and the count, average bandwidth, and average loss are exported to a CSV file.]


Figure 5.1.3 Flow of Measuring the Bandwidth Required to Access AWS Rekognition

The flowchart above demonstrates how the bandwidth used to access the AWS Rekognition service is measured. The bandwidth measurement again uses an iterative do-while loop, with one loop consisting of submitting every image in the dataset to AWS Rekognition. Within each loop, the bandwidth used to submit each image and the number of requests made are accumulated. At the end of a loop, the total bandwidth used is divided by the total number of requests sent to obtain the average bandwidth required to access AWS Rekognition.

The total bandwidth used and the total number of requests issued are also accumulated across loops, and the average cumulative bandwidth of each loop is recorded and compared with that of the following loop to define the stopping criterion. The loop must run at least twice and terminates only when the difference in the average cumulative bandwidth between two successive loops is less than 50 bytes, so that the reported average bandwidth to access AWS Rekognition is consistent and reliable. At the end of each loop, the average bandwidth required, the number of requests, and the loss in the average are appended to a DataFrame, and the DataFrame is exported to a CSV file for analysis of the changes.

The same procedure is repeated to determine the bandwidth required to access AWS Rekognition with the original colour images, grayscale images, face cropped images, and face cropped grayscale images.
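The report does not name the tool used to read the bytes-sent counter, so the sketch below assumes psutil's network I/O statistics as one possible way to record the start and end bytes around a single request; the outer convergence loop is the same as before and is omitted here.

    import boto3
    import psutil

    rekognition = boto3.client("rekognition")


    def bytes_sent_for_request(image_path):
        """Return the number of bytes sent while submitting one image to
        AWS Rekognition, using the system-wide bytes-sent counter."""
        with open(image_path, "rb") as f:
            image_bytes = f.read()
        start_bytes = psutil.net_io_counters().bytes_sent
        rekognition.detect_faces(Image={"Bytes": image_bytes}, Attributes=["ALL"])
        end_bytes = psutil.net_io_counters().bytes_sent
        return end_bytes - start_bytes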


5.1.4 Measuring the Accuracy after Image Processing Implemented

[Flowchart: each image in the dataset is read and sent to AWS Rekognition, the image file name and the emotion identified are appended to a DataFrame, and the DataFrame is exported to a CSV file once the dataset is exhausted.]

Figure 5.1.4 Flow of Measuring the Accuracy Changes after Image Processing Techniques Implemented

Unlike the time and bandwidth measurements, the FER result returned by AWS Rekognition for a given image is constant, so no looping is required to obtain an average. The grayscale images, face cropped images, and face cropped grayscale images are each sent to AWS Rekognition to obtain the FER result. For each request, the recognition result and the image file name are recorded and eventually exported to a CSV file.

To obtain the change in accuracy when sending the different types of images to AWS Rekognition, the facial expression labels assigned by the author and confirmed by sending the original images to AWS Rekognition are used as the benchmark for the other image types. Because the original images were chosen and filtered accordingly, the accuracy of AWS Rekognition on the original images is 100%.
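A minimal sketch of the comparison is shown below, assuming the benchmark labels and the results for one processed image set are stored in CSV files with image and emotion columns; the file and column names are illustrative.

    import pandas as pd

    # Benchmark labels confirmed on the original colour images, and the results
    # for one processed image set (grayscale here); file names are illustrative.
    benchmark = pd.read_csv("original_results.csv")     # columns: image, emotion
    processed = pd.read_csv("grayscale_results.csv")    # columns: image, emotion

    merged = benchmark.merge(processed, on="image", suffixes=("_true", "_pred"))
    accuracy = (merged["emotion_true"] == merged["emotion_pred"]).mean()
    print(f"Accuracy relative to the original-image benchmark: {accuracy:.2%}")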


5.1.5 Measuring the Accuracy for Trained ResNet50 Model

When training the ResNet50 model for predicting facial expressions, the training accuracy, training loss, validation accuracy, and validation loss are all recorded. This information is used to determine whether the model's accuracy has converged during training. In addition, the testing split of the base dataset (described in Section 4.5.1) is used to evaluate the final trained model by computing the testing accuracy and plotting the confusion matrix.
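A minimal sketch of this final evaluation is shown below, assuming a Keras-trained model and a testing split saved as NumPy arrays; the file names are illustrative and not taken from the actual implementation.

    import numpy as np
    from sklearn.metrics import accuracy_score, confusion_matrix
    from tensorflow.keras.models import load_model

    # Assumed artefacts: the trained model and the testing split saved as
    # NumPy arrays; the file names are illustrative.
    model = load_model("resnet50_fer.h5")
    test_images = np.load("test_images.npy")    # preprocessed testing split (Section 4.5.1)
    test_labels = np.load("test_labels.npy")    # ground-truth class indices

    y_pred = np.argmax(model.predict(test_images), axis=1)

    print("Testing accuracy:", accuracy_score(test_labels, y_pred))
    print("Confusion matrix:\n", confusion_matrix(test_labels, y_pred))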

5.1.6 Measuring Reduced Requests with Caching Methodologies

As proposed, the caching methodologies were implemented to minimize the number of AWS requests, sending a request only when the caches cannot provide a reliable recognition result. Therefore, the number of AWS requests saved after implementing the caching methodologies should also be evaluated.

A clip from the Business Proposed [11] is used as the video input, and a video frame is processed only when the frame counter reaches a multiple of the video frame rate, which demonstrates the time-interval down sampling technique on the video input. The total number of video frames containing at least one detected face is calculated by adding the number of reliable recognitions obtained through the caching methodologies to the number obtained through AWS Rekognition. From this, the total number of AWS requests avoided can be determined, which demonstrates the effect of the caching methodologies on the total number of AWS requests.
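A minimal sketch of the frame selection is shown below, assuming OpenCV reads the clip; the file name and the recognition function are placeholders for the actual pipeline.

    import cv2


    def recognise(frame):
        # Placeholder for the recognition pipeline: the caching methodologies
        # are consulted first, and an AWS Rekognition request is sent only
        # when they cannot provide a reliable result.
        pass


    cap = cv2.VideoCapture("business_proposed_clip.mp4")    # illustrative file name
    fps = int(cap.get(cv2.CAP_PROP_FPS))

    frame_counter = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame_counter += 1
        if frame_counter % fps == 0:    # process roughly one frame per second
            recognise(frame)

    cap.release()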
