
Future Work


Because the current FER design has significant limitations, future work on this system will focus on the facial expression recognition training process. A teacher-student learning approach will be employed to incorporate a more detailed design of the ResNet classifier's network architecture, together with tracking and fusion strategies, so that the best possible balance between prediction accuracy and processing speed can be achieved.
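To make the proposed teacher-student direction concrete, the sketch below shows one common way such training is set up: a large teacher network (for example, a ResNet-based classifier, or cached predictions from the cloud recognition service) produces soft labels that a smaller on-device student is trained to match alongside the ground-truth labels. This is only an illustrative sketch; the use of PyTorch, the distillation_loss and train_step helpers, and the temperature and weighting values are assumptions rather than part of this work.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend hard-label cross-entropy with soft-label KL divergence.

    T (temperature) and alpha are illustrative hyperparameters, not values
    taken from this report.
    """
    # Soft targets from the teacher, softened by the temperature
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    soft_student = F.log_softmax(student_logits / T, dim=1)
    kd = F.kl_div(soft_student, soft_targets, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

def train_step(student, teacher, optimizer, images, labels):
    # Hypothetical training step: `student` is a small on-device model,
    # `teacher` a frozen ResNet-style classifier (or a stand-in for cached
    # cloud predictions).
    with torch.no_grad():
        teacher_logits = teacher(images)
    student_logits = student(images)
    loss = distillation_loss(student_logits, teacher_logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()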

Additionally, the Android application is constrained to a one-second recognition interval to reduce the cost of the third-party recognition service, because it is not integrated with the teacher-student machine learning model, which would impose a disproportionate computational load on mobile devices. A novel training procedure, such as cloud-based training, could therefore be investigated to relieve this load. However, training the model on the cloud may be costly as well, so before implementing this approach, a cost-effectiveness analysis of the recognition service versus training a FER classifier on the cloud must be conducted; a trade-off must be made between high responsiveness and recognition cost. Last but not least, another possible improvement is the comparison of subsequent camera frames. This idea originated from comparing newly captured expressions against previously cached expressions. Rather than comparing against all recognized expressions, which takes longer as the number of recognized expressions grows, comparing subsequent camera frames directly determines whether the expression has changed, saving computational cost by not re-recognizing the same expression.
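The subsequent-frame comparison described above could reuse the structural-similarity check already applied to cached images in Appendix A.2. The sketch below is a minimal illustration under that assumption; the expression_changed helper, the 20% fuzz factor, and the 0.9 similarity threshold mirror the appendix code but are illustrative rather than a finalized design.

import cv2
from wand.image import Image

SIMILARITY_THRESHOLD = 0.9  # illustrative; mirrors the cached-image comparison

def expression_changed(prev_face_bgr, curr_face_bgr):
    """Return True when two consecutive face crops differ enough that the
    expression should be re-recognized."""
    if prev_face_bgr is None:
        return True
    with Image.from_array(prev_face_bgr) as prev_img, \
         Image.from_array(curr_face_bgr) as curr_img:
        prev_img.fuzz = prev_img.quantum_range * 0.20  # tolerate small pixel noise
        _, similarity = prev_img.compare(curr_img, 'structural_similarity')
    return similarity <= SIMILARITY_THRESHOLD

# Hypothetical usage inside the video loop of Appendix A.2:
# only call the recognition service when the face crop has actually changed.
prev_face = None
# for each detected face crop `face` in the current frame:
#     if expression_changed(prev_face, face):
#         emotion = recognize(face)   # e.g. the AWS Rekognition call
#     prev_face = face.copy()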

To summarise, the designed FER implementation met all of the listed objectives, including its ability to accelerate the recognition process, reduce the cost of recognition, and incorporate the FER design into the Android application. The developed FER design would be extremely beneficial not only for our target users who have difficulty recognizing other people's facial expressions, but also for people who communicate online and need to recognize other people's facial expressions.


APPENDIX A Live Facial Expression Recognition

A.1 Android Application Development Codes

MainActivity.java

package com.example.fer;

import android.content.Intent;
import android.os.Bundle;
import android.util.Log;
import android.widget.Button;

import androidx.appcompat.app.AppCompatActivity;

import org.opencv.android.OpenCVLoader;

public class MainActivity extends AppCompatActivity {

    static {
        if (OpenCVLoader.initDebug()) {
            Log.d("MainActivity: ", "Opencv is loaded");
        } else {
            Log.d("MainActivity: ", "Opencv failed to load");
        }
    }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        Button camera_button = findViewById(R.id.camera_button);
        camera_button.setOnClickListener(v -> startActivity(
                new Intent(MainActivity.this, LiveCamera.class)
                        .addFlags(Intent.FLAG_ACTIVITY_CLEAR_TASK | Intent.FLAG_ACTIVITY_CLEAR_TOP))
        );
    }
}

LiveCamera.java

package com.example.fer;

import android.Manifest;
import android.app.Activity;
import android.content.pm.PackageManager;
import android.os.Bundle;
import android.os.Handler;
import android.speech.tts.TextToSpeech;
import android.util.Log;
import android.view.SurfaceView;

import androidx.annotation.NonNull;
import androidx.core.app.ActivityCompat;

import org.opencv.android.BaseLoaderCallback;
import org.opencv.android.CameraBridgeViewBase;
import org.opencv.android.LoaderCallbackInterface;
import org.opencv.android.OpenCVLoader;
import org.opencv.core.CvType;
import org.opencv.core.Mat;

import java.io.IOException;
import java.util.Locale;

public class LiveCamera extends Activity implements CameraBridgeViewBase.CvCameraViewListener2 {

    private static final String TAG = "facial_expression";

    Mat mRGBA = null;
    Mat output;
    boolean processed;
    CameraBridgeViewBase cameraBridgeViewBase;
    private facialExpressionRecognition fer;
    private TextToSpeech mtts;

    public void sound(String emotion) {
        mtts.speak(emotion, TextToSpeech.QUEUE_FLUSH, null, null);
        Log.d("TTSlog", "speak: " + emotion);
    }

    BaseLoaderCallback baseLoaderCallback = new BaseLoaderCallback(this) {
        @Override
        public void onManagerConnected(int status) throws IOException {
            switch (status) {
                case LoaderCallbackInterface.SUCCESS: {
                    Log.i(TAG, "onManagerConnected: OpenCV loaded");
                    // this will display FPS on the screen
                    cameraBridgeViewBase.setMaxFrameSize(720, 480);
                    cameraBridgeViewBase.enableFPSMeter();
                    cameraBridgeViewBase.enableView();
                    Log.i(TAG, "onManagerConnected: Camera Setting "
                            + cameraBridgeViewBase.getWidth() + "*" + cameraBridgeViewBase.getHeight());
                }
                default: {
                    super.onManagerConnected(status);
                }
            }
        }
    };

    @Override
    protected void onResume() {
        super.onResume();
        if (OpenCVLoader.initDebug()) { // if loaded successful
            Log.d(TAG, "onResume: OpenCV initialized");
            try {
                baseLoaderCallback.onManagerConnected(LoaderCallbackInterface.SUCCESS);
                try {
                    fer = new facialExpressionRecognition(LiveCamera.this);
                    Log.d(TAG, "model loading done");
                } catch (IOException e) {
                    e.printStackTrace();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        } else {
            Log.d(TAG, "onResume: OpenCV not initialized");
            OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION, this, baseLoaderCallback);
        }
    }

    Handler handler = new Handler();

    // Define the code block to be executed
    private final Runnable runnableCode = new Runnable() {
        @Override
        public void run() {
            if (mRGBA != null) {
                // Do something here on the main thread
                output = fer.AWSRecognize(mRGBA);
                Log.d("Handlers", "Predicted");
                processed = true;
                sound(fer.ReturnEmotion());
            }
            handler.postDelayed(this, 1000);
        }
    };

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        ActivityCompat.requestPermissions(LiveCamera.this,
                new String[]{Manifest.permission.CAMERA}, 1);
        setContentView(R.layout.activity_live_camera);
        cameraBridgeViewBase = (CameraBridgeViewBase) findViewById(R.id.CamSurface);
        cameraBridgeViewBase.setVisibility(SurfaceView.VISIBLE);
        cameraBridgeViewBase.setCvCameraViewListener(this);
        processed = false;
        handler.post(runnableCode);
    }

    @Override
    public void onRequestPermissionsResult(int requestCode,
                                           @NonNull String[] permissions, @NonNull int[] grantResults) {
        // if request is denied, this will return an empty array
        if (requestCode == 1) {
            if (grantResults.length > 0 && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
                cameraBridgeViewBase.setCameraPermissionGranted();
            } // permission denied
        }
    }

    @Override
    protected void onPause() {
        super.onPause();
        if (cameraBridgeViewBase != null) {
            cameraBridgeViewBase.disableView();
        }
        if (mtts != null) {
            mtts.stop();
            mtts.shutdown();
        }
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
        if (cameraBridgeViewBase != null) {
            cameraBridgeViewBase.disableView();
        }
        if (mtts != null) {
            mtts.stop();
            mtts.shutdown();
        }
    }

    @Override
    public void onCameraViewStarted(int width, int height) {
        mtts = new TextToSpeech(LiveCamera.this, status -> {
            if (status == TextToSpeech.SUCCESS) {
                int result = mtts.setLanguage(Locale.ENGLISH);
                if (result == TextToSpeech.LANG_MISSING_DATA || result == TextToSpeech.LANG_NOT_SUPPORTED) {
                    Log.d("TTSlog", "Language not supported");
                }
            } else {
                Log.d("TTSlog", "Initialization failed");
            }
            Log.d("TTSlog", "Initialization success");
        });
        mRGBA = new Mat(height, width, CvType.CV_8UC4);
        output = new Mat(height, width, CvType.CV_8UC4);
    }

    @Override
    public void onCameraViewStopped() {
        mRGBA.release();
        if (mtts != null) {
            mtts.stop();
            mtts.shutdown();
        }
    }

    @Override
    public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
        mRGBA = inputFrame.rgba();
        if (!processed) {
            output = fer.drawBox(mRGBA);
        }
        processed = false;
        return output;
    }
}

facialExpressionRecognition.java

package com.example.fer;

import android.content.Context;
import android.graphics.Bitmap;
import android.util.Log;

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.rekognition.AmazonRekognition;
import com.amazonaws.services.rekognition.AmazonRekognitionClient;
import com.amazonaws.services.rekognition.model.Attribute;
import com.amazonaws.services.rekognition.model.DetectFacesRequest;
import com.amazonaws.services.rekognition.model.DetectFacesResult;
import com.amazonaws.services.rekognition.model.Emotion;
import com.amazonaws.services.rekognition.model.FaceDetail;
import com.amazonaws.services.rekognition.model.Image;

import org.opencv.android.Utils;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Point;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;

import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;
import java.util.List;
import java.util.Objects;
import java.util.StringJoiner;

public class facialExpressionRecognition {

    // now define cascadeClassifier for face detection
    private CascadeClassifier cascadeClassifier;
    private expression[] emotion_list = null;

    // call in the main activity
    private AmazonRekognition rekognitionClient;

    facialExpressionRecognition(Context context) throws IOException {
        // now we will load Haar Cascade classifier
        try {
            // define input stream to read classifier
            InputStream is = context.getResources().openRawResource(R.raw.haarcascade_frontalface_alt);
            // create a folder
            File cascadeDir = context.getDir("cascade", Context.MODE_PRIVATE);
            // now create a new file in that folder
            File mCascadeFile = new File(cascadeDir, "haarcascade_frontalface_alt");
            // now define output stream to transfer data to file we created
            FileOutputStream os = new FileOutputStream(mCascadeFile);
            // now create buffer to store bytes
            byte[] buffer = new byte[4096];
            int byteRead;
            // read bytes in while loop; when it reads -1 there is no data left to read
            while ((byteRead = is.read(buffer)) != -1) {
                // writing bytes to the mCascade file
                os.write(buffer, 0, byteRead);
            }
            // close input and output streams
            is.close();
            os.close();

            cascadeClassifier = new CascadeClassifier(mCascadeFile.getAbsolutePath());
            // if cascade file is loaded, print
            Log.d("facial_expression", "Classifier is loaded");

            // IAM user -- amplify-fer
            AWSCredentials credentials = new BasicAWSCredentials("AKIAWKKRIJIYP4YK6HVW",
                    "7iRQ3IGnnVUFD+hl1ItC5i/d0QD9iTm94ydu9fmr");
            rekognitionClient = new AmazonRekognitionClient(credentials);
            rekognitionClient.setRegion(Region.getRegion(Regions.AP_SOUTHEAST_1));
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    private DetectFacesResult output;

    public Mat AWSRecognize(Mat mat_image) {
        // before predicting, our image is not properly aligned;
        // we have to rotate it by 90 degrees for proper prediction
        Mat a = mat_image.t();
        Core.flip(a, mat_image, 1);
        a.release(); // rotated by 90 degrees

        // convert to grayscale
        Mat grayscaleImage = new Mat();
        Imgproc.cvtColor(mat_image, grayscaleImage, Imgproc.COLOR_RGBA2GRAY);

        // define minimum height of a face in the original frame;
        // if smaller than this size, it will not proceed
        int minFaceSize = 150;

        // now create MatOfRect to store faces
        MatOfRect faces = new MatOfRect();
        // check if cascadeClassifier is loaded or not
        if (cascadeClassifier != null) {
            // detect faces from the frame; minimum size given below
            cascadeClassifier.detectMultiScale(grayscaleImage, faces, 1.1, 2, 2,
                    new Size(minFaceSize, minFaceSize), new Size());
        }

        // convert it to an array
        Rect[] faceArray = faces.toArray();
        emotion_list = new expression[faceArray.length];

        // loop over all the faces
        for (int i = 0; i < faceArray.length; i++) {
            // drawing rectangle: input/output, starting and ending point, color, thickness
            Imgproc.rectangle(mat_image, faceArray[i].tl(), faceArray[i].br(), new Scalar(0, 255, 0), 2);

            // crop face from original frame: starting x, starting y, width, height
            Rect roi = new Rect((int) faceArray[i].tl().x, (int) faceArray[i].tl().y,
                    ((int) faceArray[i].br().x) - ((int) faceArray[i].tl().x),
                    ((int) faceArray[i].br().y) - ((int) faceArray[i].tl().y));
            Mat cropped_rgba = new Mat(mat_image, roi);

            // resize the mat here
            Bitmap bitmap2 = Bitmap.createBitmap(cropped_rgba.width(), cropped_rgba.height(),
                    Bitmap.Config.ARGB_8888);
            Utils.matToBitmap(cropped_rgba, bitmap2);

            // convert bitmap to byte array
            ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
            bitmap2.compress(Bitmap.CompressFormat.JPEG, 100, byteArrayOutputStream);
            byte[] byteArray = byteArrayOutputStream.toByteArray();

            // convert byte array to byte buffer
            ByteBuffer byteBuffer = ByteBuffer.wrap(byteArray);

            DetectFacesRequest request = new DetectFacesRequest()
                    .withImage(new Image().withBytes(byteBuffer))
                    .withAttributes(String.valueOf(Attribute.ALL));

            Runnable objRunnable = () -> output = rekognitionClient.detectFaces(request);
            Thread objbgthread = new Thread(objRunnable);
            objbgthread.start();

            String face_emotion = "";
            try {
                objbgthread.join();
                List<FaceDetail> faceDetails = output.getFaceDetails();
                Log.d("AWS Log", "Request sent");
                for (FaceDetail face : faceDetails) {
                    double face_emotion_confidence = 0.0;
                    for (Emotion emotion : face.getEmotions()) {
                        if (emotion.getConfidence() >= face_emotion_confidence) {
                            face_emotion_confidence = emotion.getConfidence();
                            face_emotion = emotion.getType();
                        }
                    }
                }
                String emotion_s = face_emotion;
                emotion_list[i] = new expression(faceArray[i], emotion_s);

                // now put text on the original frame
                Imgproc.putText(mat_image, emotion_s,
                        new Point((int) faceArray[i].tl().x + 10, (int) faceArray[i].tl().y + 20),
                        1, 1.5, new Scalar(0, 0, 255), 2);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }

        // rotate -90 degrees after prediction
        Mat b = mat_image.t();
        Core.flip(b, mat_image, 0);
        b.release();
        return mat_image;
    }

    public Mat drawBox(Mat mat_image) {
        if (Objects.isNull(emotion_list))
            return mat_image;
        if (emotion_list.length == 0)
            return mat_image;

        Mat a = mat_image.t();
        Core.flip(a, mat_image, 1);
        a.release(); // rotated by 90 degrees

        // loop over all the faces
        for (int i = 0; i < emotion_list.length; i++) {
            if (Objects.isNull(emotion_list[i]))
                continue;
            Imgproc.rectangle(mat_image, emotion_list[i].getCoordinates().tl(),
                    emotion_list[i].getCoordinates().br(), new Scalar(0, 255, 0), 2);
            Imgproc.putText(mat_image, emotion_list[i].getEmotion(),
                    new Point((int) emotion_list[i].getCoordinates().tl().x + 10,
                            (int) emotion_list[i].getCoordinates().tl().y + 20),
                    1, 1.5, new Scalar(0, 0, 255), 2);
        }

        // rotate -90 degrees after prediction
        Mat b = mat_image.t();
        Core.flip(b, mat_image, 0);
        b.release();
        return mat_image;
    }

    // from FacialExpressionRecognition class
    public String ReturnEmotion() {
        StringJoiner joiner = new StringJoiner("");
        for (int i = 0; i < emotion_list.length; i++) {
            joiner.add(emotion_list[i].getEmotion());
        }
        return joiner.toString();
    }
}

expression.java

package com.example.fer;

import org.opencv.core.Rect;

public class expression {

    private Rect coordinates;
    private String emotion;

    public expression(Rect coordinates, String emotion) {
        this.coordinates = coordinates;
        this.emotion = emotion;
    }

    public Rect getCoordinates() {
        return this.coordinates;
    }

    public String getEmotion() {
        return this.emotion;
    }
}

A.2 Python Codes of Expression Comparison with Cached Images

Demonstration of cached images: reading the images and their labels into arrays

import cv2
from IPython.display import clear_output
import glob
from wand.image import Image

cache, labels = [], []
for label in glob.glob('same_person/*'):
    # print(label[12:] + " class")
    for image in glob.glob(label + "/*"):
        cache.append(Image(filename=image))
        labels.append(label[12:])

Comparing with video frames

import cv2
import pandas as pd
from IPython.display import clear_output
import glob
from wand.image import Image
import boto3

face_cascade = cv2.CascadeClassifier(
    'C:/Users/User/Documents/Study/FYP/Python evaluation/opencv-master/data/haarcascades/haarcascade_frontalface_alt2.xml')

weimun19_1utar_rekognition = boto3.client('rekognition',
                                           aws_access_key_id='xxxxx',
                                           aws_secret_access_key='xxxxxxx',
                                           region_name='ap-southeast-1')


def rescale_frame(frame, percent=75):
    width = int(frame.shape[1] * percent / 100)
    height = int(frame.shape[0] * percent / 100)
    dim = (width, height)
    return cv2.resize(frame, dim, interpolation=cv2.INTER_AREA)


aws = 0
local = 0

cap = cv2.VideoCapture('video/sample2.mp4')
FPS = int(cap.get(cv2.CAP_PROP_FPS))

# Using cv2.putText() method
cur_frame = 0
box = pd.DataFrame(columns=['x', 'y', 'w', 'h', 'Emotion'])

while True:
    ret, frame = cap.read()
    if not ret:
        break
    cur_frame += 1
    resize = rescale_frame(frame, percent=25)
    if cur_frame % FPS == 0:
        box.drop(box.index, inplace=True)
        gray = cv2.cvtColor(resize, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.1, 4)
        for (x, y, w, h) in faces:
            tempdict = {'x': [], 'y': [], 'w': [], 'h': [], 'Emotion': []}
            tempdict['x'].append(x)
            tempdict['y'].append(y)
            tempdict['w'].append(w)
            tempdict['h'].append(h)
            cv2.rectangle(resize, (x, y), (x + w, y + h), (0, 0, 255), 5)
            faces = frame[y:y + h, x:x + w]
            with Image.from_array(faces) as img:
                for images in cache:
                    clear_output(wait=True)
                    print("comparing with " + labels[cache.index(images)])
                    img.fuzz = img.quantum_range * 0.20  # Threshold of 20%
                    result_image, result_metric = img.compare(images, 'structural_similarity')  # differences
                    # print(result_metric)
                    if result_metric > 0.9:
                        local += 1
                        emotion = "Comparison: " + labels[cache.index(images)]
                        break
                    elif cache.index(images) == len(cache) - 1:
                        # Call AWS Rekognition
                        ret, buf = cv2.imencode('.jpg', faces)
                        rekognition_response = weimun19_1utar_rekognition.detect_faces(
                            Image={'Bytes': buf.tobytes()}, Attributes=['ALL'])
                        results = rekognition_response.get('FaceDetails')
                        emotion = 'Call AWS : Not Detected'
                        for result in results:
                            face_emotion_confidence = 0
                            face_emotion = None
                            for emotion_result in result.get('Emotions'):
                                if emotion_result.get('Confidence') >= face_emotion_confidence:
                                    face_emotion_confidence = emotion_result['Confidence']
                                    face_emotion = emotion_result.get('Type')
                            print(face_emotion)
                            emotion = 'Call AWS : ' + face_emotion
                            aws += 1
            cv2.putText(img=resize, text=emotion, org=(x + 15, y - 15),
                        fontFace=cv2.FONT_HERSHEY_TRIPLEX, fontScale=0.8,
                        color=(0, 255, 0), thickness=1)
            print(emotion)
            tempdict['Emotion'].append(emotion)
            tempdf = pd.DataFrame.from_dict(tempdict)
            box = pd.concat([box, tempdf], ignore_index=True)
            del tempdict, tempdf
    else:
        if len(box) > 0:
            for i in range(len(box)):
                cv2.rectangle(resize, (box.iloc[i]['x'], box.iloc[i]['y']),
                              (box.iloc[i]['x'] + box.iloc[i]['w'], box.iloc[i]['y'] + box.iloc[i]['h']),
                              (0, 0, 255), 5)
                cv2.putText(img=resize, text=box.iloc[i]['Emotion'],
                            org=(box.iloc[i]['x'] + 15, box.iloc[i]['y'] - 15),
                            fontFace=cv2.FONT_HERSHEY_TRIPLEX, fontScale=0.8,
                            color=(0, 255, 0), thickness=1)
    cv2.imshow('Teacher-Student Learning Model', resize)
    key = cv2.waitKey(1)
    if key == ord('q'):
        break
    if key == ord('p'):
        cv2.waitKey(-1)  # wait until any key is pressed

cap.release()
cv2.destroyAllWindows()
print("Number of AWS request Reduced:" + str(local) + " from "
      + str(local + aws) + "(" + str(local / (local + aws)) + ")")

A.3 Python Codes of Teacher-Student Machine Learning Model
