
CHAPTER 5 CONCLUSION

5.1 Suggestions

This thesis work still requires further development and refinement, on both the hardware and the software side. The following suggestions may serve as a reference for developing this device:

1. Apply and evaluate emotion-detection methods other than the ones used in this research.

2. Identify emotion types other than the neutral emotion that are suitable for emotion classification based on heart rate and skin conductance.

3. Develop this research further into a complete, integrated device.

4. Deploy this research on the web, or extend it with other supporting platforms, so that it can be used to monitor a child's emotions periodically; a minimal sketch of such a logging service is given after this list.
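As a starting point for suggestion 4, the sketch below illustrates one possible web logging service. It is only a minimal sketch, assuming a Flask server; the route name /log_emotion, the port, and the file emotion_log.csv are illustrative assumptions and not part of the system built in this thesis.

# Minimal sketch of a web endpoint for periodic emotion logging.
# Flask, the /log_emotion route, and emotion_log.csv are illustrative
# assumptions; they are not part of the system built in this thesis.
from datetime import datetime
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/log_emotion', methods=['POST'])
def log_emotion():
    data = request.get_json()
    # Expected fields: child_id, emotion label, heart rate, skin conductance
    row = '{},{},{},{},{}\n'.format(
        datetime.now().isoformat(), data.get('child_id'),
        data.get('emotion'), data.get('heart_rate'), data.get('gsr'))
    with open('emotion_log.csv', 'a') as f:
        f.write(row)
    return jsonify({'status': 'ok'})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

A desktop client could POST the latest CNN or K-Means label to such an endpoint after each reading, so that a child's emotion history accumulates for periodic review.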


APPENDICES

1) Photographs of the child's expressions taken under each condition
   a. While given the happy video stimulus
   b. While given the sad video stimulus
   c. While given a difficult game
   d. Example of correct data capture

2) Example of collecting the facial-expression dataset using a webcam
   a. While shown the happy flashcard
   b. While shown the sad flashcard
   c. While shown the angry flashcard
   d. While shown the neutral flashcard

3) Facial Expression Capture Program

import cv2
import os

face_cascade = cv2.CascadeClassifier('haarcascade/haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('haarcascade/haarcascade_eye.xml')
cap = cv2.VideoCapture(0)
face_crop = []
ambilData = 0  # counter for captured samples

while cap.isOpened():
    ret, frame = cap.read()
    if ret == True:
        # Create the output folder if it does not exist yet
        newpath = r'D:\Dataset_Wajah_Anak\...'
        if not os.path.exists(newpath):
            os.makedirs(newpath)
        wajahDir = newpath

        # Face detection works on a grayscale copy of the frame
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)

        # Save the full frame
        namaFile = '...' + '.' + str(ambilData) + '.jpg'
        cv2.imwrite(wajahDir + '/' + namaFile, frame)

        for (x, y, w, h) in faces:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
            roi_gray = frame[y:y + h, x:x + w]
            resized = cv2.resize(roi_gray, (48, 48))
            # Save the 48x48 grayscale face crop; it reuses the same file
            # name, so it overwrites the full frame saved above
            Abu = cv2.cvtColor(resized, cv2.COLOR_BGR2GRAY)
            namaFile = '...' + '.' + str(ambilData) + '.jpg'
            cv2.imwrite(wajahDir + '/' + namaFile, Abu)

        # Show the running sample counter on the preview window
        fps = str(ambilData)
        cv2.putText(frame, fps, (10, 25), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 255), 3, cv2.LINE_AA)
        cv2.imshow('Pengumpulan Dataset', frame)
        cv2.waitKey(40)
        ambilData += 1

    k = cv2.waitKey(1) & 0xFF
    if k == 27 or k == ord('q'):  # stop on Esc or 'q'
        break
    elif ambilData >= 10:  # stop after 10 samples
        break

print("Pengambilan data selesai")
cap.release()
cv2.destroyAllWindows()
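The 48x48 grayscale crops saved by this program are used to train the CNN in item 5, which loads them with flow_from_directory. That call assigns class indices in alphabetical folder order, which is consistent with the label map {0: "Angry", 1: "Happy", 2: "Neutral", 3: "Sad"} used in item 6. A sketch of the expected folder layout (the databrori name comes from the training program; the per-class folder names are an assumption based on that label map):

databrori/
    train/
        Angry/  Happy/  Neutral/  Sad/
    test/
        Angry/  Happy/  Neutral/  Sad/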

4) Heart Rate and Skin Conductance Acquisition Program

#include <Wire.h>
#include "MAX30105.h"
#include "heartRate.h"

#define REPORTING_PERIOD_MS 1000

MAX30105 particleSensor;

const byte RATE_SIZE = 4;   // number of readings in the running average
byte rates[RATE_SIZE];      // array of heart-rate readings
byte rateSpot = 0;
long lastBeat = 0;          // time of the last detected beat

float beatsPerMinute;
int beatAvg;

int gsr;
float hr;
const int GSR = A0;         // GSR sensor on analog pin A0
double sensorValue = 0;
double gsr_kalib;
double gsr_res;
double gsr_cond;
double gsr_out;
double gsr_average;
double centroid[5][5], dc0, dc1, dc2, dc3;
double data_gsr, data_hr, i;

void setup() {
  Serial.begin(9600);
  // Use the default I2C port at 400 kHz
  if (!particleSensor.begin(Wire, I2C_SPEED_FAST)) {
    Serial.println("MAX30105 was not found. Please check wiring/power.");
    while (1);
  }
  particleSensor.setup();
  particleSensor.setPulseAmplitudeRed(0x0A);
  particleSensor.setPulseAmplitudeGreen(0);
}

void loop() {
  //+++++++++++++++ HR sensor +++++++++++++++//
  long irValue = particleSensor.getIR();
  if (checkForBeat(irValue) == true) {
    // A beat was detected
    long delta = millis() - lastBeat;
    lastBeat = millis();
    beatsPerMinute = 60 / (delta / 1000.0);

    if (beatsPerMinute < 255 && beatsPerMinute > 80) {
      rates[rateSpot++] = (byte)beatsPerMinute;
      rateSpot %= RATE_SIZE;  // wrap the index

      // Take the average of the stored readings
      beatAvg = 0;
      for (byte x = 0; x < RATE_SIZE; x++)
        beatAvg += rates[x];
      beatAvg /= RATE_SIZE;
    }
    if (irValue < 50000) Serial.println();  // no finger on the sensor

    //+++++++++++++++ GSR sensor +++++++++++++++//
    for (int i = 0; i < 10; i++) {
      sensorValue = analogRead(GSR);
      gsr_kalib = sensorValue - 133.001;  // offset calibration (645 max)
      gsr_res = ((1024 + 2 * gsr_kalib) * 10000 / (512 - gsr_kalib));  // skin resistance
      gsr_cond = (float(1) / float(gsr_res));  // conductance = 1 / resistance
      gsr_out = 1000000 * gsr_cond;            // convert to microsiemens
      delay(5);
      gsr_average = gsr_out / 10;  // note: keeps the last reading divided by 10
    }
  }

  if (Serial.available()) {
    char inchar = (char)Serial.read();

    //++++++++++ GSR request ++++++++++//
    if (inchar == 'b') {
      data_gsr = (gsr_average);
      Serial.println("b" + String(data_gsr));
    }

    //++++++++++ HR request ++++++++++//
    if (inchar == 'h') {
      data_hr = (beatAvg);
      Serial.println("h" + String(data_hr));
    }

    //++++++++++ K-Means method ++++++++++//
    if (inchar == 'k') {
      // Cluster centroids as [heart rate, skin conductance] pairs,
      // obtained from an offline K-Means run on the collected data
      centroid[0][0] = 110.12796208530807;
      centroid[0][1] = 5.483412322274882;
      centroid[1][0] = 70.03883495145631;
      centroid[1][1] = 2.5436893203883493;
      centroid[2][0] = 77.13059701492537;
      centroid[2][1] = 2.6641791044776117;
      centroid[3][0] = 89.5137614678899;
      centroid[3][1] = 3.4541284403669725;

      hr = data_hr;
      gsr = data_gsr;

      // Euclidean distance from the current reading to each centroid
      dc0 = sqrt(pow((hr - centroid[0][0]), 2) + pow((gsr - centroid[0][1]), 2));
      dc1 = sqrt(pow((hr - centroid[1][0]), 2) + pow((gsr - centroid[1][1]), 2));
      dc2 = sqrt(pow((hr - centroid[2][0]), 2) + pow((gsr - centroid[2][1]), 2));
      dc3 = sqrt(pow((hr - centroid[3][0]), 2) + pow((gsr - centroid[3][1]), 2));

      // The nearest centroid determines the emotion label
      if (dc1 < dc0 && dc1 < dc2 && dc1 < dc3) {
        Serial.println("kSenang");
      } else if (dc2 < dc0 && dc2 < dc3) {
        Serial.println("kNetral");
      } else if (dc3 < dc0) {
        Serial.println("kSedih");
      } else {
        Serial.println("kMarah");
      }
    }
  }
}
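The centroid values hard-coded in the program above are the result of an offline K-Means run on the collected heart-rate and skin-conductance data; the offline step itself is not listed in this appendix. The sketch below shows how such centroids can be computed, assuming scikit-learn and a two-column CSV of [heart rate, conductance] samples (the file name hr_gsr_samples.csv is an illustrative assumption):

# Minimal sketch of the offline clustering that produces the centroids
# hard-coded in the Arduino program above. scikit-learn and the file
# name are assumptions; only the centroid values appear in the thesis.
import numpy as np
from sklearn.cluster import KMeans

# samples: one row per measurement, columns = [heart rate, conductance]
samples = np.loadtxt('hr_gsr_samples.csv', delimiter=',')

kmeans = KMeans(n_clusters=4, random_state=0, n_init=10).fit(samples)
for c in kmeans.cluster_centers_:
    print('centroid: HR = {:.2f}, GSR = {:.4f}'.format(c[0], c[1]))

The printed cluster centers can then be copied into the centroid array of the Arduino program.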

5) CNN Training Program

import time
import cv2
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dense, Dropout, Flatten
from tensorflow.keras.optimizers import Adam
from keras.preprocessing.image import ImageDataGenerator

# Initialize the image data generators with 1/255 rescaling
train_data_gen = ImageDataGenerator(rescale=1./255)
validation_data_gen = ImageDataGenerator(rescale=1./255)

# Preprocess all training images
train_generator = train_data_gen.flow_from_directory(
    'databrori/train',
    target_size=(48, 48),
    batch_size=64,
    color_mode="grayscale",
    class_mode='categorical',
    shuffle=True)

# Preprocess all validation images
validation_generator = validation_data_gen.flow_from_directory(
    'databrori/test',
    target_size=(48, 48),
    batch_size=64,
    color_mode="grayscale",
    class_mode='categorical',
    shuffle=True)

# Create the model structure
emotion_model = Sequential()

# Feature extraction
# Layer 1
emotion_model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(48, 48, 1)))
emotion_model.add(MaxPooling2D(pool_size=(2, 2)))
# Layer 2
emotion_model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
emotion_model.add(MaxPooling2D(pool_size=(2, 2)))
# Layer 3
emotion_model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
emotion_model.add(MaxPooling2D(pool_size=(2, 2)))
emotion_model.add(Dropout(0.25))

# Classification: one output per emotion class
emotion_model.add(Flatten())
emotion_model.add(Dense(1024, activation='relu'))
emotion_model.add(Dense(4, activation='softmax'))

cv2.ocl.setUseOpenCL(False)

emotion_model.compile(loss='categorical_crossentropy',
                      optimizer=Adam(lr=0.0001, decay=1e-6),
                      metrics=['accuracy'])

StartTime = time.time()

# Train the neural network
emotion_model_info = emotion_model.fit_generator(
    train_generator,
    steps_per_epoch=16000 // 64,
    epochs=100,
    validation_data=validation_generator,
    validation_steps=4000 // 64)

EndTime = time.time()
print("###### Total Time Taken: ", round((EndTime - StartTime) / 60), 'Minutes ######')

# Save the model structure to a JSON file
model_json = emotion_model.to_json()
with open("emotion_model_ori.json", "w") as json_file:
    json_file.write(model_json)

# Save the trained model weights to an .h5 file
emotion_model.save_weights('emotion_model_ori.h5')
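The emotion_model_info object returned by fit_generator records per-epoch accuracy and loss, and the original listing imported matplotlib without using it. The sketch below plots the training curves; it assumes the variable names of the program above, and the history keys follow tensorflow.keras (older standalone Keras uses 'acc' and 'val_acc' instead):

# Minimal sketch: plot accuracy curves from the training run above.
# Assumes emotion_model_info from the listing; the history key names
# follow tensorflow.keras.
import matplotlib.pyplot as plt

hist = emotion_model_info.history
epochs = range(1, len(hist['accuracy']) + 1)

plt.plot(epochs, hist['accuracy'], label='train accuracy')
plt.plot(epochs, hist['val_accuracy'], label='validation accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.show()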

6) CNN Model Testing Program

import cv2
import numpy as np
from keras.models import model_from_json

emotion_dict = {0: "Angry", 1: "Happy", 2: "Neutral", 3: "Sad"}

# Load the model structure and the trained weights
json_file = open('emotion_model_2.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
emotion_model = model_from_json(loaded_model_json)
emotion_model.load_weights("emotion_model_2.h5")
print("Loaded model from disk")

face_detector = cv2.CascadeClassifier('haarcascades/haarcascade_frontalface_default.xml')
cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    frame = cv2.resize(frame, (1280, 720))

    gray_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    num_faces = face_detector.detectMultiScale(gray_frame, scaleFactor=1.3, minNeighbors=5)

    for (x, y, w, h) in num_faces:
        cv2.rectangle(frame, (x, y - 50), (x + w, y + h + 10), (0, 255, 0), 4)
        roi_gray_frame = gray_frame[y:y + h, x:x + w]
        # Resize the face region to 48x48 and add batch and channel axes;
        # note the crop is fed without the 1/255 rescaling used in training
        cropped_img = np.expand_dims(np.expand_dims(cv2.resize(roi_gray_frame, (48, 48)), -1), 0)
        emotion_prediction = emotion_model.predict(cropped_img)
        maxindex = int(np.argmax(emotion_prediction))
        cv2.putText(frame, emotion_dict[maxindex], (x + 5, y - 20),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 0, 0), 2, cv2.LINE_AA)

    cv2.imshow('Emotion Detection', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
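The acquisition program in item 4 replies to single-character serial commands: 'h' returns the averaged heart rate, 'b' the skin conductance, and 'k' the K-Means emotion label, each echoed back with the command letter as prefix. The sketch below shows how a desktop script could poll it, assuming the pyserial library and that the board enumerates as COM3 (both are assumptions, not part of the listing):

# Minimal sketch: poll the Arduino from item 4 over serial.
# pyserial and the COM3 port name are assumptions; the command letters
# ('h', 'b', 'k') and reply prefixes follow the Arduino listing.
import time
import serial

ser = serial.Serial('COM3', 9600, timeout=2)
time.sleep(2)  # wait for the board to reset after opening the port

for command in (b'h', b'b', b'k'):
    ser.write(command)
    reply = ser.readline().decode(errors='ignore').strip()
    print(command.decode(), '->', reply)  # e.g. h -> h85.00, k -> kSenang

ser.close()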

7) Publication

International conference publication: Electrical Engineering, Computer Science, and Informatics (EECSI)

Author Biography

Name: Fildzah Aure

Place/Date of Birth: Surabaya / 24 November 1999

Address: Dreamingland D5 no 21

Phone/Mobile: 08113405726

Email: [email protected]

Motto: "When things get tough, look at the people who love you! You will get energy from them."

Educational Background:

Level   School                                                             Years
SD      SDN Manukan Kawasan Surabaya                                       2005 – 2011
SMP     SMPN 43 Surabaya                                                   2011 – 2014
SMA     SMA Trimurti Surabaya                                              2014 – 2017
PTN     D4 Teknik Elektro, Politeknik Elektronika Negeri Surabaya          2017 – 2021
PTN     S2 Teknik Elektro, Politeknik Elektronika Negeri Surabaya (PENS)   2021 – 2023

The author took the final thesis examination on 21 June 2023, as one of the requirements for obtaining the degree of Master of Applied Engineering (M.Tr.T).
