
Rachdian Habi Yahya, Copyright © 2023, MIB, Page 501

Disaster Management Sentiment Analysis Using the BiLSTM Method

Rachdian Habi Yahya*, Warih Maharani, Rifki Wijaya Faculty of Informatics, Telkom University, Bandung, Indonesia

Email: 1,*[email protected], 2[email protected], 3[email protected] Corresponding Author Email: [email protected]

Abstract−Indonesia is a country prone to natural disasters. Natural disasters occur as nature adjusts to changes in natural conditions caused by human behavior or biological processes. Community responses through tweets on Twitter are crucial for decision-making and action in disaster management and recovery. Sentiment analysis can be carried out on these many public reactions. Classification using the BiLSTM method determines the categories of positive and negative responses; for comparison, SVM resulted in an accuracy of 82.73% and BERT in 81.78%. Word2Vec is used for feature extraction in this process. From a total of 2,686 Twitter data, it was concluded that there were around 2,081 positive sentiments and 605 negative sentiments related to disaster management in Indonesia. The test results reached 84% accuracy, 88% precision, 92% recall, and a 90% f1-score.

Keywords: Sentiment Analysis; Twitter; BiLSTM; Word2vec; Flood Disaster; Earthquake

1. INTRODUCTION

Indonesia is nicknamed the "Ring of Fire", based on the country's geographical location where three tectonic plates meet, namely the Indo-Australian plate, the Eurasian plate, and the Pacific plate. Therefore, Indonesia is a country prone to natural disasters. Natural disasters are phenomena caused by a series of natural events such as earthquakes, tsunamis, volcanic eruptions, floods, droughts, tornadoes, and landslides.[1][2][4] This phenomenon occurs due to nature's efforts to balance ecosystems damaged by humans or due to natural processes.[2] According to BNPB data, a total of 5,402 disasters occurred in Indonesia in 2021.[2][3] Of these, 1,794 events were floods, followed by 1,321 landslides, 1,577 extreme weather events, and 24 earthquakes; these disasters claimed 728 lives throughout 2021.[2][3][7] The many natural disasters that have occurred in Indonesia are widely reflected in people's tweets on the social media platform Twitter.[1][7]

Twitter is a social media platform developed by Twitter, Inc., with unique writing characteristics, formats, and symbols, and posts limited to 140 characters. The number of daily Twitter users increases every year.

According to a Statista report, there were 18.45 million active Twitter users in Indonesia as of January 2022, placing Indonesia fifth worldwide, just behind Brazil with 19.05 million; the United States occupies the highest position, with 76.9 million active users.[5] Twitter is often used to convey public opinion about specific topics, which can then become the latest trending topics.[5]

Many Twitter users have made this social media a favourite place for the digital community to discuss social topics such as public interest, social interaction, political sentiment, and many other public opinions. Besides social topics, disaster-related studies have also been carried out on Twitter, especially in Indonesia, as in research [1][4][7][10][25]. Disaster-related studies still need to be expanded in Indonesia, even though Indonesia is a country with a high level of disaster vulnerability. With its rapid technological, social, and cultural advances, Indonesia is also very well suited to research on disaster management.

Tweets related to natural disasters can provide information for determining assistance according to victims' needs.[7] Moreover, information obtained through public sentiment on Twitter can often speed up disaster recovery.[4] The sentiments of the public, both disaster victims and non-victims, expressed in tweets greatly determine the success of disaster management and the recovery process.[4][7] Sentiment analysis on data collected from Twitter is therefore carried out to understand the community's response to disaster management that has been and will be carried out by the government.[4][7][10]

Sentiment analysis is a study that analyzes human opinions, sentiments, evaluations, attitudes, and emotions; it is applied in almost every business and social domain because opinions are central to almost all human activities.[9][10] Sentiment analysis comprises two processes: sentiment extraction and sentiment classification. Sentiment extraction identifies the aspects to be evaluated; sentiment classification groups those aspects as positive, negative, or neutral.[9][16][20] Sentiment analysis is also divided into two kinds: coarse-grained and fine-grained. Coarse-grained sentiment analysis works at the document level, grouping whole documents as positive, negative, or neutral. Fine-grained sentiment analysis classifies sentiment within a single sentence.[9][10][13]

One of the methods used in sentiment analysis is BiLSTM with word2vec feature extraction and the CBOW architecture.[6][16][17] The BiLSTM method is considered a good method for classifying data compared to other methods in terms of accuracy.[6][9][17] According to Dong, Y., Fu, Y., Wang, L., Chen, Y., Dong, Y., & Li, J., whose research compared the accuracy of several methods, BiLSTM produced an accuracy of 79.30%, higher than CNN at 76.10%, SVM at 76.14%, and LSTM at 75.90%.[9] Aulia, R. I., Agus S., and Yohanes S. used the Bidirectional Long Short-Term Memory (BiLSTM) method with word2vec extraction to detect hate speech, with the CBOW architecture, 10 epochs, a learning rate of 0.001, and 200 neurons, resulting in an accuracy of up to 94.66%, precision of 99.08%, recall of 93.74%, and F1-score of 96.29%.[6] In Pratiwi, R. W., Sari, Y., & Suyanto, Y.'s research, it was also shown that word2vec with the CBOW architecture and an attention layer added to the Long Short-Term Memory (LSTM) method obtained an accuracy of 78.16%, while BiLSTM produced an accuracy of 79.68%. The FSW algorithm reached 73.50% and FWL 73.79%.[8]

2. RESEARCH METHODOLOGY

The stages carried out in this study are depicted in Figure 1.

Figure 1. Research Stages

2.1 Crawling Data

Crawling data extracts users and original tweets from Twitter using specific keywords. The extraction is done through the Twitter API with the tweepy library in the Python programming language. Only Indonesian-language tweets were collected: 2,686 tweets for the keyword "Bencana" (disaster). The disasters considered in this study are limited to floods and earthquakes that occurred in Indonesia.
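As a sketch, the crawling step might look like the following, assuming hypothetical placeholder credentials; `build_query` is an illustrative helper, and the `tweepy.Cursor`/`search_tweets` interface is used for pagination:

```python
def build_query(keyword: str) -> dict:
    """Search parameters used for crawling: Indonesian-language tweets
    for one keyword, excluding retweets so only original tweets remain."""
    return {"q": f"{keyword} -filter:retweets", "lang": "id",
            "tweet_mode": "extended"}

def crawl(keyword: str, limit: int = 2686) -> list:
    """Collect up to `limit` tweet texts via the Twitter API (needs real keys)."""
    import tweepy  # imported here so the sketch is inspectable without tweepy
    # Placeholder credentials -- replace with real Twitter API keys.
    auth = tweepy.OAuth1UserHandler("API_KEY", "API_SECRET",
                                    "ACCESS_TOKEN", "ACCESS_SECRET")
    api = tweepy.API(auth, wait_on_rate_limit=True)
    cursor = tweepy.Cursor(api.search_tweets, **build_query(keyword))
    return [status.full_text for status in cursor.items(limit)]

params = build_query("Bencana")
```

With real credentials, `crawl("Bencana")` would return the raw tweet texts that feed the preprocessing stage.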

2.2 Preprocessing

Preprocessing is the follow-up stage for data cleaning, integration, transformation, and reduction. This stage is essential because errors in the data lead to misleading information, and data quality improves as noise decreases. Obtaining high-quality data requires a proper understanding of preprocessing for each case study. The preprocessing stages are data cleansing, case folding, tokenizing, stopword removal, normalization, and stemming, performed in Python via the NLTK library.

a. Data Cleansing

This stage eliminates non-alphabetic characters to reduce noise. The removed characters are punctuation marks such as periods (.), commas (,), question marks (?), and exclamation points (!), as well as symbols such as the '@' sign for usernames, hashtags (#), emoticons, and website addresses.[11]

b. Case Folding

Case folding converts the alphabetic characters that passed the cleansing stage to lowercase.[11]

c. Tokenizing

This stage breaks a sentence into its constituent words, called terms or tokens. Tokenizing splits on spaces.[11]

d. Stopword Removal

The fourth step is stopword removal. In this process, the previously tokenized sentences are rechecked, and words that carry no meaning are deleted. This step greatly simplifies sentiment analysis.[11]

e. Normalization

Next is the normalization process. Words that have passed through case folding, tokenizing, and stopword removal are reassembled into sentences. In this sentence formation, non-standard words are replaced by their standard forms while keeping the same meaning.[11]

f. Stemming

The final stage is stemming: converting words into their base forms (kata dasar) by identifying and removing each affix. The goal is to maximize and optimize text processing. For this step, an Indonesian-language stemming library for Python is used.
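The cleansing, case folding, tokenizing, and stopword removal stages above can be sketched in plain Python; the stopword list here is a tiny illustrative subset, not the full NLTK resource, and normalization/stemming are omitted because they need an Indonesian dictionary and stemmer:

```python
import re

# Tiny illustrative stopword list; the paper uses NLTK's Indonesian resources.
STOPWORDS = {"yang", "dan", "di", "ke", "untuk", "sd"}

def cleanse(text: str) -> str:
    """Data cleansing: drop URLs, @usernames, #hashtags, and non-letters."""
    text = re.sub(r"https?://\S+", " ", text)   # website addresses
    text = re.sub(r"[@#]\w+", " ", text)        # usernames and hashtags
    return re.sub(r"[^A-Za-z\s]", " ", text)    # punctuation, emoticons, digits

def preprocess(text: str) -> list:
    """Cleansing -> case folding -> tokenizing -> stopword removal."""
    tokens = cleanse(text).lower().split()      # case folding + split on spaces
    return [t for t in tokens if t not in STOPWORDS]

tokens = preprocess("Banjir di Desa Trucuk! cek @infoBMKG https://t.co/xyz #Banjir2020")
```

For the example tweet this yields the tokens `['banjir', 'desa', 'trucuk', 'cek']`.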

2.3 Data Labelling

Data labeling explains whether tweets from Twitter users fall into the negative or positive category. Tweets pre-processed from the dataset are categorized as either positive or negative. The labeling is carried out using a text-analytics method. From a total of 2,686 tweets, 2,081 were classified as positive and 605 as negative.

2.4 Word2Vec

Word2Vec is a word-embedding method that represents words as vectors of length N. For example, given the words "dry", "hot", "rainy", and "cold", the vector representation of "dry" will be close to that of "hot", and likewise the "rainy" vector will be close to the "cold" vector.[5][12] The word2vec model learns that "dry" and "rainy" have the same kind of relationship as "hot" and "cold", namely the relationship between weather and temperature.[7]

Word2Vec uses a neural network to derive vectors. There are two types of architecture from word2vec: Continuous Bag of Words (CBOW) and skip-gram.[13][14][16]
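To illustrate the CBOW idea, the following minimal NumPy sketch trains toy embeddings by predicting a centre word from the average of its context vectors; it is a didactic stand-in for a real word2vec implementation such as gensim's, with an invented toy corpus:

```python
import numpy as np

corpus = [["dry", "hot", "weather"], ["rainy", "cold", "weather"]]
vocab = sorted({w for s in corpus for w in s})
idx = {w: i for i, w in enumerate(vocab)}
V, N = len(vocab), 8  # vocabulary size, embedding dimension

rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(V, N))   # input (context) embeddings
W_out = rng.normal(scale=0.1, size=(N, V))  # output weights

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# CBOW: predict the centre word from the average of its context vectors.
window, lr = 1, 0.05
for epoch in range(50):
    for sent in corpus:
        for t, centre in enumerate(sent):
            ctx = [idx[sent[j]]
                   for j in range(max(0, t - window), min(len(sent), t + window + 1))
                   if j != t]
            h = W_in[ctx].mean(axis=0)        # hidden layer = context average
            p = softmax(h @ W_out)            # predicted distribution over vocab
            err = p.copy()
            err[idx[centre]] -= 1.0           # gradient of cross-entropy loss
            W_out -= lr * np.outer(h, err)    # update output weights
            g = W_out @ err
            for c in ctx:
                W_in[c] -= lr * g / len(ctx)  # update context embeddings

vec = W_in[idx["dry"]]  # learned vector for "dry"
```

Skip-gram inverts this objective, predicting each context word from the centre word instead.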

2.5 Long Short-Term Memory

Long Short-Term Memory (LSTM) is a Recurrent Neural Network (RNN) architecture. LSTM can handle long-term dependencies and the vanishing or exploding gradient problem. Figure 2 shows the LSTM architecture.

Figure 2. LSTM Architecture[11]

The LSTM consists of several gates, as seen in Figure 2. The following explains each gate and how the LSTM works.

Figure 3. Forget Gate Architecture[11]

a. Forget Gate

As shown in the figure, after receiving the previous hidden state ht−1 and the current input xt, the forget gate (ft) decides how much information is removed from the cell state: an output of zero indicates the information will be discarded, and one indicates it will be kept. The forget gate value can be calculated with the following equation:

ft = σ(Wf [ht−1, xt] + bf) (1)

Note: the notation [ht−1, xt] is a concatenation operation, joining the vector xt to the vector ht−1.

b. Input Gate

The input gate is the second gate in the LSTM; it determines which new information is stored in the cell state. This gate consists of a sigmoid layer and a tanh layer. The sigmoid layer (it) decides which values to update, and the tanh layer creates a candidate value c̃t. The cell state is then updated as described in equation (2).[4][11]

Figure 4. Input Gate Architecture[11]

ct = ft ∗ ct−1 + it ∗ c̃t (2)

Information:
ct : cell state at a given time
c̃t : candidate cell state
ft : forget gate
it : input gate

c. Output Gate

The output gate decides what information is output, based on the cell state passed through a filter. First, a sigmoid layer, the output gate (Ot), decides which parts of the cell state to output, as in equation (3). The cell state is then passed through tanh to bring its values between −1 and 1, and multiplied by the output of the sigmoid gate, so that only the chosen part is output, as in equation (4).[4][11] The equations used are as follows:

Ot = σ(Wo [ht−1, xt] + bo) (3)

ht = Ot ∗ tanh(ct) (4)

Information:
Ot : output gate
σ : sigmoid function
ht : hidden state (output)
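Equations (1)–(4) can be combined into a single LSTM time step; the following NumPy sketch uses toy dimensions and random weights purely for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W_f, b_f, W_i, b_i, W_c, b_c, W_o, b_o):
    """One LSTM time step following equations (1)-(4)."""
    z = np.concatenate([h_prev, x_t])   # the concatenation [h_{t-1}, x_t]
    f_t = sigmoid(W_f @ z + b_f)        # forget gate, eq. (1)
    i_t = sigmoid(W_i @ z + b_i)        # input gate (sigmoid layer)
    c_hat = np.tanh(W_c @ z + b_c)      # candidate cell state
    c_t = f_t * c_prev + i_t * c_hat    # cell state update, eq. (2)
    o_t = sigmoid(W_o @ z + b_o)        # output gate, eq. (3)
    h_t = o_t * np.tanh(c_t)            # hidden state, eq. (4)
    return h_t, c_t

# Toy dimensions: input size 3, hidden size 2; random weights for illustration.
rng = np.random.default_rng(0)
n_in, n_h = 3, 2
Ws = [rng.normal(size=(n_h, n_h + n_in)) for _ in range(4)]
bs = [np.zeros(n_h) for _ in range(4)]
h, c = np.zeros(n_h), np.zeros(n_h)
h, c = lstm_step(rng.normal(size=n_in), h, c,
                 Ws[0], bs[0], Ws[1], bs[1], Ws[2], bs[2], Ws[3], bs[3])
```

Because ht = Ot ∗ tanh(ct) with Ot in (0, 1) and tanh in (−1, 1), every component of the hidden state stays inside (−1, 1).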

2.6 Bidirectional Long Short-Term Memory

At this stage, the data are classified using the BiLSTM method. BiLSTM is a variant of the LSTM model whose processing runs in both directions (forward and backward).[8][10][11] These two layers read incoming information from two directions at once, and their outputs are generally combined into one. Each word in the document is processed sequentially, so user tweets can be understood word by word. The BiLSTM method is beneficial because it has information from both past and future context, which makes it more accurate.[14][15]

Figure 5. BiLSTM Architecture[11]
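The forward/backward scheme can be sketched as follows: two independent LSTM passes read the sequence in opposite directions, and their final hidden states are concatenated (toy NumPy implementation with random weights, for illustration only):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_pass(xs, W, b, n_h):
    """Run a single-direction LSTM over a sequence; W stacks the four gate
    weight matrices (forget, input, candidate, output) row-wise."""
    h, c = np.zeros(n_h), np.zeros(n_h)
    for x_t in xs:
        z = np.concatenate([h, x_t])
        gates = W @ z + b
        f = sigmoid(gates[0:n_h])             # forget gate
        i = sigmoid(gates[n_h:2 * n_h])       # input gate
        g = np.tanh(gates[2 * n_h:3 * n_h])   # candidate cell state
        o = sigmoid(gates[3 * n_h:4 * n_h])   # output gate
        c = f * c + i * g
        h = o * np.tanh(c)
    return h

def bilstm(xs, W_fwd, b_fwd, W_bwd, b_bwd, n_h):
    """BiLSTM: one pass reads the sequence forward, the other backward;
    the two final hidden states are concatenated into one output."""
    h_fwd = lstm_pass(xs, W_fwd, b_fwd, n_h)
    h_bwd = lstm_pass(xs[::-1], W_bwd, b_bwd, n_h)
    return np.concatenate([h_fwd, h_bwd])

rng = np.random.default_rng(1)
n_in, n_h, T = 4, 3, 5  # toy input size, hidden size, sequence length
xs = [rng.normal(size=n_in) for _ in range(T)]
W_fwd = rng.normal(scale=0.1, size=(4 * n_h, n_h + n_in)); b_fwd = np.zeros(4 * n_h)
W_bwd = rng.normal(scale=0.1, size=(4 * n_h, n_h + n_in)); b_bwd = np.zeros(4 * n_h)
out = bilstm(xs, W_fwd, b_fwd, W_bwd, b_bwd, n_h)
```

The concatenated output has length 2 × n_h, which is why a BiLSTM layer doubles the feature size fed to the final classifier.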

2.7 Confusion Matrix

The confusion matrix is a performance-measurement tool that presents classification results as a two-dimensional matrix. The matrix is divided into positive and negative labels, giving four combinations: True Positive, True Negative, False Positive, and False Negative[2][6][7], as illustrated in the table below.

Table 1. Confusion Matrix

Actual Class | Predicted Positive | Predicted Negative
Positive     | TP                 | FN
Negative     | FP                 | TN

Information:
TP (true positive): both the actual and predicted values are positive
FP (false positive): the actual value is negative, but the prediction is positive
FN (false negative): the actual value is positive, but the prediction is negative
TN (true negative): both the actual and predicted values are negative

The performance of the classification results can be measured through the following calculations:

a. Accuracy, the model's overall correctness in predicting. The equation is:

Accuracy = (TP + TN) / (TP + TN + FP + FN) (5)

b. Precision, the model's accuracy among the samples it predicts as positive. The equation is:

Precision = TP / (TP + FP) (6)

c. Recall, the model's accuracy in finding the actual positives. The equation is:

Recall = TP / (TP + FN) (7)

d. F1-Score, the harmonic mean of precision and recall. The equation is:

F1-Score = 2 * (Precision * Recall) / (Precision + Recall) (8)
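Equations (5)–(8) can be computed directly from label lists; the toy labels below are illustrative only:

```python
def confusion(y_true, y_pred):
    """Count the four confusion-matrix cells (1 = positive, 0 = negative)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def metrics(y_true, y_pred):
    tp, tn, fp, fn = confusion(y_true, y_pred)
    accuracy = (tp + tn) / (tp + tn + fp + fn)            # eq. (5)
    precision = tp / (tp + fp)                            # eq. (6)
    recall = tp / (tp + fn)                               # eq. (7)
    f1 = 2 * precision * recall / (precision + recall)    # eq. (8)
    return accuracy, precision, recall, f1

# Toy example: 8 tweets, one false negative and one false positive.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
acc, prec, rec, f1 = metrics(y_true, y_pred)  # each evaluates to 0.75 here
```

In practice the same four counts come from comparing the model's predictions on the held-out test set against the manual labels.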

3. RESULT AND DISCUSSION

Before the analysis, the stages explained in the previous discussion must be carried out. At the data-cleaning stage, the researcher examines the data for relevance and similarity across the dataset. This check is based on the data's relevance to the topic of natural disasters; duplicate data are discarded and only the first occurrence is kept. Full details can be seen in Table 2.

Table 2. Data Preparation Process

Original Tweet:
text: "SIBAT @palangmerah Desa Trucuk Bojonegoro lakukan pengecekan berkala dan perawatan perahu persiapan untuk mengantisipasi bencana banjir sungai Bengawan Solo dimana @infoBMKG memperkirakan curah hujan masih terjadi sd bulan Maret 2020 https://t.co/tv45BaVEFd"

Case Folding:
text sibat palangmerah desa trucuk bojonegoro lakukan pengecekan berkala dan perawatan perahu persiapan untuk mengantisipasi bencana banjir sungai bengawan solo dimana infobmkg memperkirakan curah hujan masih terjadi sd bulan maret

Stopword Removal:
palang merah desa trucuk bojonegoro melakukan pengecekan berkala dan perawatan perahu persiapan untuk mengantisipasi bencana banjir sungai bengawan solo dimana infobmkg memperkirakan curah hujan masih terjadi hingga bulan maret

Stemming:
palang merah desa trucuk bojonegoro melakukan pengecekan berkala dan perawatan perahu persiapan untuk mengantisipasi bencana banjir sungai bengawan solo dimana bmkg memperkirakan curah hujan masih terjadi hingga bulan maret

Tokenization:
‘palang’, ‘merah’, ‘desa’, ‘trucuk’, ‘bojonegoro’, ‘lakukan’, ‘pengecekan’, ‘berkala’, ‘dan’, ‘perawatan’, ‘perahu’, ‘persiapan’, ‘untuk’, ‘mengantisipasi’, ‘bencana’, ‘banjir’, ‘sungai’, ‘bengawan’, ‘solo’, ‘bmkg’, ‘memperkirakan’, ‘curah’, ‘hujan’, ‘masih’, ‘terjadi’, ‘hingga’, ‘bulan’, ‘maret’

3.1 Test Results

The data used in this research are 2,686 tweets with sentiment labels. The 80:20 split gives the highest accuracy, as seen in the table below.

Table 3. Comparison with the highest accuracy

Data Split | Precision | Recall | F1-Score | Accuracy
80:20      | 0.88      | 0.92   | 0.90     | 0.84

Testing was carried out using the train/test split method, with the model compiled as training_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']). Scenario tests were carried out to find the optimal accuracy for the data: the dataset was split into train and test sets in ratios from 60:40 to 90:10, and each split was run with batch_size values of 128 and 256.
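The split scenarios can be sketched in plain Python (the tweet placeholders stand in for the real labeled data; `split_data` is an illustrative helper):

```python
import random

def split_data(data, train_ratio=0.8, seed=42):
    """Shuffle and split the dataset into train/test sets (e.g. 80:20)."""
    items = list(data)
    random.Random(seed).shuffle(items)  # deterministic shuffle for this sketch
    cut = int(len(items) * train_ratio)
    return items[:cut], items[cut:]

# Placeholders the same size as the paper's dataset of 2,686 tweets.
tweets = [f"tweet_{i}" for i in range(2686)]
train, test = split_data(tweets, train_ratio=0.8)
```

An 80:20 split of 2,686 tweets gives 2,148 training samples and 538 test samples; the other scenarios (60:40, 70:30, 90:10) just change `train_ratio`.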

Table 4. Experiment with varying data split and batch size

Scenario (train:test) | Accuracy with batch_size = 128 | Accuracy with batch_size = 256
60:40 | 81.63% | 78.64%
70:30 | 83.74% | 83.12%
80:20 | 84.67% | 84.16%
90:10 | 82.89% | 81.78%

Accuracy was obtained with score = training_model.evaluate(x_test, y_test, batch_size=128) for the first parameter and score = training_model.evaluate(x_test, y_test, batch_size=256) for the second.

In the first scenario, the 80:20 test uses the parameter sequence_length = 300, applied as x_test = pad_sequences(tokenizer.texts_to_sequences(data_test.text), maxlen=SEQUENCE_LENGTH), with batch_size = 128, resulting in an optimal accuracy of 84%. In the second scenario, increasing the batch_size caused the accuracy to decrease.

Table 5. Validation Results from Epoch Testing

Training call: history = training_model.fit(x_train, y_train, batch_size=128, epochs=10, validation_split=0.1, callbacks=callbacks, verbose=1)

Epoch | Accuracy
1/10  | 0.7977
2/10  | 0.8257
3/10  | 0.8148
4/10  | 0.8376
5/10  | 0.8495
6/10  | 0.8427
7/10  | 0.8639
8/10  | 0.8707
9/10  | 0.8789
10/10 | 0.8743

In BiLSTM, a test scenario is carried out with a ratio of 80:20. The highest accuracy value obtained on the 9th epoch test results is 87.89%. The test parameters in this scenario are batch_size = 128, sequence_length = 300, and validation_split = 0.1.

Table 6. Examples of tweet predictions

Example 1 (predicted set of tweets):
predict("sedia tanaman hujan perilaku menjaga bencana text: RT @infoBMKG: Peringatan dini cuaca wilayah Lampung [10 Februari 2020] #BMKG\nSelengkapnya klik tautan berikut https://t.co/CTsyP7g6JF Puluhan rumah terendam banjir setelah diterjang hujan selama 7 jam. Banjir yang mendera Gunung Pandan Madiun ini terjadi sejak Senin sore setinggi 70 cm. #Banjir2020 #Madiun https://news.detik.com/berita-jawa-timur/d-4894211/bendungan-kedungbrubus-madiun-meluap-puluhan-rumah-terendam-banjir?utm_content=detikcom&utm_term=echobox&utm_medium=oa&utm_campaign=detikcomsocmed&utm_source=Twitter#Echobox=1581390177 … Anies Mengeluh Kerap Disalahkan Jika Jakarta Kebanjiran http://dlvr.it/R8xjFZ #pantaucom #aniesbaswedan #banjirjakarta #dkijakarta Digebrak dong kasurnya biar kek gempa bumi Kalo aku mah waktu gempa... ketiban cor coran rumah tapi g ada lecet juga cuma badan putih semua kena abu reruntuhan rumah.. (:")
Prediction result: {'label': 'POSITIVE', 'score': 0.9990085959434509, 'elapsed_time': 1.3573253154754639}

Example 2 (predicted set of tweets):
predict("siaga hadapi bencana ikutan crowdsourcing info selamatkan jiwa ajak teman bmkg resmi peringatan potensi tsunami akibat gempa satupun gempa meneriakkan harga mati gunung merapi mengalami erupsi freatik kamis pagi sekian aktivitas signifik pemkot surabaya tumpukan sampah pertokoan penyebab banjir surabaya didukung javaonline situs togel online terpercaya indonesia")
Prediction result: {'label': 'NEGATIVE', 'score': 0.0968855544924736, 'elapsed_time': 0.0913233757019043}

3.2 Discussion

After this system was implemented on the Twitter data with an 80:20 train/test split and epoch = 10, performance differed across scenarios. The embedded_size of the BiLSTM layer and the validation_split of 0.1 caused a significant performance difference. As shown in Table 3, the best results come from classification using BiLSTM with batch_size = 128 and embedded_size = 300: 84% accuracy, 88% precision, 92% recall, and 90% F1-score. The BiLSTM method performs better than SVM, which only produces an accuracy of 82.73%, and BERT, at 81.78%. The BiLSTM classification performed best at the 9th epoch, and it can be seen that larger epoch values generally improve performance.

4. CONCLUSION

The application of word2vec in the preprocessing stage makes it easy to find the type of disaster and the things often discussed in relation to it. This approach is supported by the BiLSTM method: from a total of 2,686 data collected, there were around 2,081 positive responses and 605 negative responses. Optimal accuracy is obtained with the 80:20 split, batch_size = 128, and embedding_layer = 300, namely 84% accuracy with 88% precision, 92% recall, and 90% f1-score. Using word2vec allows the identification of patterns and relationships in the words used in disaster-related texts, increasing the accuracy of sentiment analysis. In addition, the BiLSTM method, combined with the embedding_layer and batch_size factors, allows the model to effectively capture context and sentiment in text, leading to a more accurate classification of sentiments.

REFERENCES

[1] Sulthan, M. B., Wahyudi, I., & Suhartini, L. (2021). Analisis Sentimen Pada Bencana Alam Menggunakan Deep Neural Network dan Information Gain. Jurnal Aplikasi Teknologi Informasi Dan Manajemen (JATIM). https://doi.org/10.31102/jatim.v2i2.1273
[2] Badan Nasional Penanggulangan Bencana, "Kejadian Bencana Tahun 2021," [online]. Available: https://bnpb.go.id/infografis/kejadian-bencana-tahun-2021
[3] DataReportal – Global Digital Insights, "Digital 2022: Indonesia," Feb. 15, 2022. https://datareportal.com/reports/digital-2022-indonesia (accessed May 15, 2022).
[4] Pratiwi, T. S., & Chotimah, H. C. (2021). Aktivitas Diplomasi Digital dalam Manajemen Bencana: Studi Kasus di Daerah Istimewa Yogyakarta, Indonesia dan Fukushima, Jepang. Jurnal Studi Diplomasi Dan Keamanan. https://doi.org/10.31315/jsdk.v13i1.4367
[5] Statista (via katadata). "Pengguna Twitter Indonesia Masuk Daftar Terbanyak di Dunia, Urutan Berapa?" Mar. 22, 2022. https://databoks.katadata.co.id/datapublish/2022/03/23/pengguna-twitter-indonesia-masuk-daftar-terbanyak-di-dunia-urutan-berapa (accessed May 20, 2022).
[6] Isnain, A. R., Sihabuddin, A., & Suyanto, Y. (2020). Bidirectional Long Short Term Memory Method and Word2vec Extraction Approach for Hate Speech Detection. IJCCS (Indonesian Journal of Computing and Cybernetics Systems), vol. 14, no. 2, pp. 1–10.
[7] Sabrani, A. (2020). Klasifikasi Artikel Online Tentang Gempa di Indonesia Menggunakan Multinomial Naïve Bayes. Publikasi Tugas Akhir S-1 PSTI FT-UNRAM. https://begawe.unram.ac.id/index.php/ta/article/view/20 (accessed May 14, 2022).
[8] Pratiwi, R. W., Sari, Y., & Suyanto, Y. (2020). Attention-Based BiLSTM for Negation Handling in Sentiment Analysis. IJCCS (Indonesian Journal of Computing and Cybernetics Systems). https://doi.org/10.22146/ijccs.60733
[9] Dong, Y., Fu, Y., Wang, L., Chen, Y., Dong, Y., & Li, J. (2020). A sentiment analysis method of capsule network based on BiLSTM. IEEE Access. https://doi.org/10.1109/ACCESS.2020.2973711
[10] Sulthan, M. B., Wahyudi, I., & Suhartini, L. (2021). Analisis Sentimen Pada Bencana Alam Menggunakan Deep Neural Network dan Information Gain. Jurnal Aplikasi Teknologi Informasi Dan Manajemen (JATIM). https://doi.org/10.31102/jatim.v2i2.1273
[11] Fauzi, R. (2021, February 21). Cara Kerja Long Short-Term Memory (LSTM) | Catatan Penelitian #11. Rifqifai. Retrieved January 21, 2023, from https://rifqifai.com/cara-kerja-long-short-term-memory-lstm/
[12] Murthy, G. S. N., Allu, S. R., Andhavarapu, B., Bagadi, M., & Belusonti, M. (2020). Text based Sentiment Analysis using LSTM. International Journal of Engineering Research And. https://doi.org/10.17577/ijertv9is050290
[13] Que, V. K. S., Iriani, A., & Purnomo, H. D. (2020). Analisis Sentimen Transportasi Online Menggunakan Support Vector Machine Berbasis Particle Swarm Optimization. Jurnal Nasional Teknik Elektro Dan Teknologi Informasi. https://doi.org/10.22146/jnteti.v9i2.102
[14] Sabrila, T. S., Sari, V. R., & Minarno, A. E. (2021). Analisis Sentimen Pada Tweet Tentang Penanganan Covid-19 Menggunakan Word Embedding Pada Algoritma Support Vector Machine Dan K-Nearest Neighbor. Fountain of Informatics Journal, vol. 6, no. 2, p. 69. doi: 10.21111/fij.v6i2.5536
[15] Soni, M. "Understanding architecture of LSTM cell from scratch with code." Medium.com, Jun. 22, 2018. https://medium.com/hackernoon/understanding-architecture-of-lstm-cell-from-scratch-with-code-8da40f0b71f4 (accessed May 15, 2022).
[16] Elfaik, H., & Nfaoui, E. H. (2021). Deep Bidirectional LSTM Network Learning-Based Sentiment Analysis for Arabic Text. Journal of Intelligent Systems. https://doi.org/10.1515/jisys-2020-0021
[17] Rhanoui, M., Mikram, M., Yousfi, S., & Barzali, S. (2019). A CNN-BiLSTM Model for Document-Level Sentiment Analysis. Machine Learning and Knowledge Extraction. https://doi.org/10.3390/make1030048
[18] Imrana, Y., Xiang, Y., Ali, L., & Abdul-Rauf, Z. (2021). A bidirectional LSTM deep learning approach for intrusion detection. Expert Systems with Applications. https://doi.org/10.1016/j.eswa.2021.115524
[19] Elfaik, H., & Nfaoui, E. H. (2021). Deep Bidirectional LSTM Network Learning-Based Sentiment Analysis for Arabic Text. Journal of Intelligent Systems. https://doi.org/10.1515/jisys-2020-0021
[20] Han, X., & Wang, J. (2019). Using social media to mine and analyze public sentiment during a disaster: A case study of the 2018 Shouguang city flood in China. ISPRS International Journal of Geo-Information. https://doi.org/10.3390/ijgi8040185
[21] Behl, S., Rao, A., Aggarwal, S., Chadha, S., & Pannu, H. S. (2021). Twitter for disaster relief through sentiment analysis for COVID-19 and natural hazard crises. International Journal of Disaster Risk Reduction. https://doi.org/10.1016/j.ijdrr.2021.102101
[22] Pirnau, M. (2019). Sentiment analysis for the tweets that contain the word "earthquake." Proceedings of the 10th International Conference on Electronics, Computers and Artificial Intelligence, ECAI 2018. https://doi.org/10.1109/ECAI.2018.8678958
[23] Irawanto, B. (2018). Narratives of natural disaster survivors in Indonesian media. Pacific Journalism Review. https://doi.org/10.24135/pjr.v24i1.410
[24] Zaki, U. H. H., Ibrahim, R., Halim, S. A., Khaidzir, K. A. M., & Yokoi, T. (2018). Sentiflood: Process model for flood disaster sentiment analysis. 2017 IEEE Conference on Big Data and Analytics, ICBDA 2017. https://doi.org/10.1109/ICBDAA.2017.8284104
[25] Choirul Rahmadan, M., Nizar Hidayanto, A., Swadani Ekasari, D., Purwandari, B., & Theresiawati. (2020). Sentiment Analysis and Topic Modelling Using the LDA Method related to the Flood Disaster in Jakarta on Twitter. Proceedings - 2nd International Conference on Informatics, Multimedia, Cyber, and Information System, ICIMCIS 2020. https://doi.org/10.1109/ICIMCIS51567.2020.9354320
