
RESEARCH FOLLOW-UP PLAN

The next steps planned for this research are:

1. Develop a mathematical model of the Traveling Thief Problem (TTP) to address the waste transport management problem based on the data obtained, and at the same time develop a hyper-heuristics algorithm to solve this new model.

2. Model the public transport scheduling problem as an Urban Transit Route Problem (UTRP).

3. Implement the Modified Particle Swarm Optimization (MPSO) algorithm within a hyper-heuristic framework.

4. Solve the UTRP using the hyper-heuristic approach and the MPSO algorithm.

5. Compare the performance of the hyper-heuristic approach and the MPSO algorithm with other methods in solving the public transport scheduling problem.

CHAPTER VI REFERENCES

[1] J. B. Kruskal, "On the shortest spanning subtree of a graph and the traveling salesman problem," Proceedings of the American Mathematical Society, vol. 7, no. 1, pp. 48-50, 1956.

[2] E. K. Burke and Y. Bykov, "The late acceptance hill-climbing heuristic," European Journal of Operational Research, vol. 258, pp. 70-78, 2017.

[3] J. M. Sussman, Perspectives on Intelligent Transportation Systems (ITS). New York: Springer-Verlag, 2005.

[4] D. L. Applegate, R. E. Bixby, V. Chvatal, and W. J. Cook, "The Problem," in The Traveling Salesman Problem: A Computational Study. Princeton, NJ: Princeton University Press, 2006, pp. 1-59.

[5] E. Burke, G. Kendall, J. Newall, E. Hart, P. Ross, and S. Schulenburg, "Hyper-heuristics: An emerging direction in modern search technology," in Handbook of Metaheuristics. Springer, 2003, pp. 457-474.

[6] G. Dantzig and J. Ramser, "The truck dispatching problem," Management Science, vol. 6, no. 1, pp. 80-91, 1959.

[7] K. Braekers, K. Ramaekers, and I. Van Nieuwenhuyse, "The vehicle routing problem: State of the art classification and review," Computers & Industrial Engineering, vol. 99, pp. 300-313, 2016.

[8] J. Zhang et al., "Data-driven intelligent transportation systems: A survey," IEEE Transactions on Intelligent Transportation Systems, vol. 12, no. 4, pp. 1624-1639, 2011.

[9] G. Laporte, "A concise guide to the traveling salesman problem," Journal of the Operational Research Society, vol. 61, no. 1, pp. 35-40, 2010.

[10] T. Lust and J. Teghem, "The multiobjective traveling salesman problem: A survey and a new approach," in Advances in Multi-Objective Nature Inspired Computing. Berlin, Heidelberg: Springer, 2010, pp. 119-141.

[11] K. F. Doerner and V. Schmid, "Survey: Matheuristics for rich vehicle routing problems," in International Workshop on Hybrid Metaheuristics. Berlin, Heidelberg: Springer, 2010.

[12] C. Lin et al., "Survey of green vehicle routing problem: Past and future trends," Expert Systems with Applications, vol. 41, no. 4, pp. 1118-1138, 2014.

[13] N. Zhao, J. Yuan, and H. Xu, "Survey on intelligent transportation system," Computer Science, vol. 41, no. 11, pp. 7-11, 2014.

APPENDIX 1: Table of Outputs

Program: Penelitian Unggulan ITS
Principal Investigator: Ahmad Muklason, S.Kom., M.Sc., Ph.D
Title: Development of Traveling Salesman Problem & Vehicle Routing Problem Models and a Generic Hyper-heuristics-Based Algorithm for Solving Public Transport Operation and Scheduling Optimization Problems in the City of Surabaya within an Intelligent Transport Systems Framework

1. Journal Articles

No | Article Title | Journal Name | Progress Status*)
1 | Solving Travelling Salesman Challenge 2.0 Problem with Artificial Bee Colony Algorithm | Expert Systems With Applications | In preparation

*) Progress status: in preparation, submitted, under review, accepted, published

2. Conference Articles

No | Article Title | Conference Name (Organizer, Venue, Date) | Progress Status*)
1 | Self Adaptive Learning - Great Deluge Based Hyper-heuristics for Solving Cross Optimization Problem Domains | IEEE: The 17th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON 2020), 24-27 June 2020, virtual conference hosted by the College of Computing, Prince of Songkla University | Published

*) Progress status: in preparation, submitted, under review, accepted, presented

3. Patents

No | Proposed Patent Title | Progress Status
*) Progress status: in preparation, submitted, under review

4. Books

No | Book Title | (Planned) Publisher | Progress Status*)
*) Progress status: in preparation, under review, published

5. Other Outputs

No | Output Name | Output Details | Progress Status*)
*) Progress status: state the progress according to the current condition

6. Dissertations/Theses/Final Projects/PKM Produced

No | Student Name | NRP | Title | Status*)
1 | SHOF RIJAL AHLAN ROBBANI | 05211850012002 | Hyper-heuristic for Solving New Variant TSP Problem | In Progress
2 | LANANG ALUN NUGRAHA | 05211850012003 | Hyper-heuristic for Solving New Variant VRP Problem | In Progress

Self Adaptive Learning - Great Deluge Based Hyper-heuristics for Solving Cross Optimization Problem Domains

Widya Saputra
Information Systems Department, Institut Teknologi Sepuluh Nopember
Surabaya, Indonesia
widyasaputra@engineer.com

Ahmad Muklason
Information Systems Department, Institut Teknologi Sepuluh Nopember
Surabaya, Indonesia
mukhlason@is.its.ac.id

Baiq Z.H. Rozaliya
Information Systems Department, Institut Teknologi Sepuluh Nopember
Surabaya, Indonesia
zuyyinahilya56@gmail.com

Abstract—In the literature, almost all optimization problems in the NP-hard class are solved with meta-heuristic approaches. However, this approach has the drawback of requiring parameter tuning for each problem domain and even for different instances of the same problem, which makes it less effective across problems. Therefore, a new approach is needed, namely the hyper-heuristics approach, which is able to solve cross-domain problems. Hyper-heuristics are approximate search methods that can provide solutions to NP-hard problems in polynomial time while giving fairly good and acceptable results. The method has two main components: the selection of low-level heuristics (LLH) and the acceptance of solutions (move acceptance). It operates above the domain barrier rather than working directly in the problem domain, and can therefore solve problems in different domains. In addition, hyper-heuristics have a learning mechanism through feedback from previously generated solutions. This final project applies a hyper-heuristic algorithm to six combinatorial optimization problem domains, namely SAT, Bin Packing, Flow Shop, Personnel Scheduling, TSP, and VRP. The method used is Self Adaptive Learning - Great Deluge (SADGED). The self adaptive mechanism selects which LLH to use, while the great deluge determines the acceptance of solutions (move acceptance) within the hyper-heuristic framework. The SADGED algorithm is expected to provide better results than the previously used algorithm, Simple Random - Simulated Annealing.

Keywords—Meta-heuristic, Hyper-heuristic, Self-adaptive Learning, Great Deluge, Cross-domain Optimization

I. INTRODUCTION

Optimization is the process of finding feasible and optimal solutions from a set of identified candidate solutions [1]. Optimization serves to minimize or maximize the value of the objective function of a problem. There are various optimization problems, such as SAT, flow shop, timetabling, the vehicle routing problem, bin packing, and the traveling salesman problem, which seeks the shortest route from one location to another [2]. These problems belong to the NP-hard class, in which optimal solutions are difficult to obtain because of the complexity of the problem.

To solve increasingly complex problems, we need algorithms that can provide solutions in a relatively short time. Approximate algorithms such as heuristics, meta-heuristics, and hyper-heuristics are the usual choices for these problems. An approximate algorithm provides a solution that is not guaranteed to be optimal but is quite good and obtained relatively quickly (in polynomial time).

Meta-heuristics are methods that select and modify heuristics to produce new solutions or to transform current solutions into other solutions [3]. For many combinatorial problems, meta-heuristics are very powerful and provide a flexible method. However, they have a drawback: they are unable to adapt well to changes in the structure of a problem, or even to different instances of a problem with the same structure.

Hyper-heuristics, in contrast, are high-level methodologies that combine multiple low-level heuristics (LLH) and problem instances effectively, so they can provide solutions to cross-domain problems. In other words, a hyper-heuristic determines which low-level heuristic will be used and whether to accept the solution produced by the LLH (move acceptance). The method works in a heuristic search space, so no problem-specific knowledge of the problem being solved is required. Hyper-heuristics are therefore more general in solving hard combinatorial optimization problems because they do not depend on problem parameters [4]. This study applies the hyper-heuristic Self Adaptive Learning Great Deluge (SADGED) method to cross-domain optimization problems. The problems to be solved follow the HyFlex framework, which provides six problem domains: satisfiability (SAT), one-dimensional bin packing, permutation flow shop, personnel scheduling, the traveling salesman problem (TSP), and the vehicle routing problem (VRP). The results obtained by the method are then compared with the Simple Random Simulated Annealing (SRSA) algorithm, which acts as the comparison method, so that the performance of the applied algorithm can be measured.

II. LITERATURE STUDY

A. Combinatorial Optimization

Combinatorial optimization problems are problems arising in fields such as manufacturing, planning, and industry that can be modeled as minimizing or maximizing a cost over bounded discrete variables [5]. In an optimization problem, there is an objective function value that is maximized or minimized according to the goal to be achieved, subject to the existing constraints.

B. Meta-heuristic

Meta-heuristics are master strategies that guide and modify other heuristics to produce solutions beyond those normally found by local search. A meta-heuristic governs the entire search process, such as which heuristics will be used and even the criteria for accepting solutions. For many combinatorial problems, this makes it a very powerful and flexible method. Meta-heuristics are mostly inspired by natural processes or science, for example Simulated Annealing, Tabu Search, Genetic Algorithms, and so on [3].

A meta-heuristic succeeds in optimizing a problem if it can strike a balance between exploration (diversification) and exploitation (intensification), which depends on its parameter values [6]. Exploitation is needed to identify parts of the search space that contain good-quality solutions. The solutions produced by this process can be either a single solution or a population of solutions. Because this approach relies on parameter values, it is less able to adapt to changes in problem structure or even to different problem instances with the same structure.

C. Hyper-heuristic

Hyper-heuristics comprise a set of approaches that aim to automate, often in combination with machine learning techniques, the process of selecting and combining simple heuristics, or of generating new heuristics from existing heuristic components, in order to solve optimization problems [6]. A hyper-heuristic is a methodology that may not provide the most optimal solution, but one that is fairly good and acceptable. The main purpose of hyper-heuristics is to create a general design method that can provide a feasible solution based on the use of LLHs.

A hyper-heuristic is a learning algorithm if it uses feedback from the solution search process. Based on the feedback dimension, there are three learning types: online learning, where learning takes place while the algorithm is solving a problem instance; offline learning, where knowledge is gathered from a set of training instances before new instances are solved; and no learning. A selection hyper-heuristic has an LLH selection mechanism (to generate new candidate solutions) and a move acceptance mechanism (to determine whether the resulting solution is accepted or not).
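To make these two components concrete, the following is a minimal sketch of a generic selection hyper-heuristic loop in Java. The interfaces LowLevelHeuristic, MoveAcceptance, and HeuristicSelector are hypothetical names introduced purely for illustration; they are not part of HyFlex or any other library.

```java
import java.util.List;

// Hypothetical minimal interfaces, named only for this illustration.
interface LowLevelHeuristic {
    double apply();                     // applies the LLH and returns the new objective value
}

interface MoveAcceptance {
    boolean accept(double current, double candidate); // decides whether to keep the candidate
}

interface HeuristicSelector {
    LowLevelHeuristic select(List<LowLevelHeuristic> llhs);
    void feedback(LowLevelHeuristic llh, boolean accepted); // learning feedback
}

final class SelectionHyperHeuristicLoop {
    // Generic selection hyper-heuristic: repeatedly select an LLH, apply it,
    // and let the move acceptance criterion decide whether to keep the result.
    static double run(List<LowLevelHeuristic> llhs, HeuristicSelector selector,
                      MoveAcceptance acceptance, double initialValue, int iterations) {
        double current = initialValue;
        for (int i = 0; i < iterations; i++) {
            LowLevelHeuristic llh = selector.select(llhs);
            double candidate = llh.apply();
            boolean accepted = acceptance.accept(current, candidate);
            if (accepted) {
                current = candidate;
            }
            selector.feedback(llh, accepted);   // online learning from the search process
        }
        return current;
    }
}
```

In this loop, the selector corresponds to the LLH selection mechanism and the acceptance object to move acceptance; the concrete choices used in this study are described in Section III.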

III. METHOD

A. Problem Identification

This study aims to apply a hyper-heuristic method that is able to provide good (fitness) results for each problem domain contained in the HyFlex framework. Each problem in HyFlex has different characteristics, and so does the set of LLHs it provides. The algorithm development focuses on the selection of perturbative LLHs based on single-point search (one solution at a time) [8].

B. Literature Study

At this stage, a literature study is carried out on the material that will serve as research references. The literature study covers the concepts that will be applied in the research.

C. Algorithm Design

At this stage, the hyper-heuristic is developed as a high-level strategy suited to the problem that has been defined. The design describes the LLH selection and the mechanism for accepting solutions.

In this study, the high-level strategy applied in the hyper-heuristic is a combination of self adaptive learning and great deluge (SADGED). The self adaptive learning method is used for LLH selection when solving problems, while the great deluge is used as the move acceptance mechanism for the new solutions produced by applying the LLHs in each problem domain. At each iteration, this mechanism accepts a solution that is better than the current one or that lies below the level parameter B of the great deluge method. The SADGED algorithm design can be seen in Figure 1.
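A minimal sketch of the great deluge move acceptance just described is given below, assuming a minimization problem. The names level and decayRate, and the way the level is lowered on every call, are assumptions based on this section's description rather than code from the actual implementation.

```java
// Great deluge move acceptance for a minimization problem (illustrative sketch).
// The level B starts at the initial solution value and is lowered by a fixed
// decay rate each iteration; a candidate is accepted if it improves on the
// current solution or lies below the current level.
final class GreatDelugeAcceptance {
    private double level;            // acceptance boundary B
    private final double decayRate;  // amount the level drops per iteration

    GreatDelugeAcceptance(double initialSolutionValue, double decayRate) {
        this.level = initialSolutionValue;
        this.decayRate = decayRate;
    }

    boolean accept(double currentValue, double candidateValue) {
        boolean accepted = candidateValue <= currentValue || candidateValue <= level;
        level -= decayRate;          // lower the level after every decision
        return accepted;
    }
}
```

For instance, with a hypothetical initial solution value of 1000, a desired value of 10 percent of it (100), and 10000 iterations, the decay rate described in the next paragraphs would be (1000 - 100) / 10000 = 0.09.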

In this study, the set of LLHs (low-level heuristics) used is the one provided by the HyFlex framework. The low-level heuristics are a collection of heuristics in HyFlex that are used to generate solutions, so that the objective function value of each solution can be evaluated when solving a problem. In HyFlex, the LLHs are grouped into four types, namely mutation, ruin-recreate, local search, and crossover [9].

Fig. 2. The number of minimum and median values better than SRSA in the trials, for different desired-value percentages of the initial solution.

The self adaptive learning method determines how many LLHs will be used by the method in searching for objective function values; this study uses the LLH list length as provided in the HyFlex framework. The desired value variable is set to 10 percent of the initial solution value. The initial level parameter, i.e. the solution acceptance boundary, is set equal to the initial solution value, while the decay rate, i.e. the level reduction factor, is set to the difference between the initial solution value and the desired value divided by the number of iterations [10]. Self adaptive learning is responsible for selecting the LLHs that produce the best values. When all the selected LLHs have been used, the algorithm refills the LLH list with a composition of 75% LLHs that produced values accepted by the move acceptance method, and the remaining 25% drawn from the LLHs available in the problem domain [11].
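The parameter settings and the 75%/25% refill rule above can be sketched as follows. All names (desiredValue, decayRate, refill) and the random choice within each portion are illustrative assumptions; the actual implementation inside the HyFlex framework may differ in detail.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Sketch of the SADGED parameter setup and the 75%/25% LLH list refill rule,
// following the description in this section (illustrative, not the actual code).
final class SelfAdaptiveLlhList {
    private final Random rng = new Random();

    // Desired value = 10% of the initial solution value.
    static double desiredValue(double initialSolutionValue) {
        return 0.10 * initialSolutionValue;
    }

    // Decay rate = (initial solution value - desired value) / number of iterations.
    static double decayRate(double initialSolutionValue, long iterations) {
        return (initialSolutionValue - desiredValue(initialSolutionValue)) / iterations;
    }

    // Refill the list of LLH indices: 75% from LLHs whose moves were accepted
    // previously, the remaining 25% drawn from all LLHs of the problem domain.
    List<Integer> refill(List<Integer> acceptedLlhs, int numDomainLlhs, int listLength) {
        List<Integer> next = new ArrayList<>();
        int fromAccepted = (int) Math.round(0.75 * listLength);
        for (int i = 0; i < fromAccepted && !acceptedLlhs.isEmpty(); i++) {
            next.add(acceptedLlhs.get(rng.nextInt(acceptedLlhs.size())));
        }
        while (next.size() < listLength) {
            next.add(rng.nextInt(numDomainLlhs));   // fill the rest from the domain's LLHs
        }
        return next;
    }
}
```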

D. Implementation

This stage is the implementation of the algorithm design in the Hyper-heuristics Flexible (HyFlex) framework. Implementation is the translation of the algorithm design into program code, starting from the preparation of the tools up to the implementation of the program.

The implementation is done on a 3.2 GHz Core i5 processor with 4096 MB of memory. The algorithm design is implemented in the HyFlex framework using the NetBeans IDE 8.2 application; the implementation is done by calling the methods contained in the chesc.jar library.
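For illustration, the sketch below shows how an experiment of this kind is typically driven through the HyFlex classes shipped in chesc.jar. The package and method names (AbstractClasses.HyperHeuristic, AbstractClasses.ProblemDomain, SAT.SAT, setTimeLimit, loadProblemDomain, run, getBestSolutionValue, applyHeuristic, copySolution) follow the publicly distributed HyFlex API; the simple accept-if-better hyper-heuristic body is a placeholder, not the SADGED implementation used in this study.

```java
import AbstractClasses.HyperHeuristic;
import AbstractClasses.ProblemDomain;
import SAT.SAT;

// Illustrative HyFlex experiment runner; the hyper-heuristic below is a
// simplified placeholder (accept-if-better), not the actual SADGED method.
public class HyFlexRunner {

    static class SimpleHyperHeuristic extends HyperHeuristic {
        SimpleHyperHeuristic(long seed) { super(seed); }

        @Override
        public void solve(ProblemDomain problem) {
            problem.setMemorySize(2);                 // slot 0: current, slot 1: candidate
            problem.initialiseSolution(0);
            double current = problem.getFunctionValue(0);
            int n = problem.getNumberOfHeuristics();
            while (!hasTimeExpired()) {
                int h = rng.nextInt(n);               // pick a random LLH index
                double candidate = problem.applyHeuristic(h, 0, 1);
                if (candidate <= current) {           // placeholder acceptance criterion
                    problem.copySolution(1, 0);
                    current = candidate;
                }
            }
        }

        @Override
        public String toString() { return "SimpleHyperHeuristic"; }
    }

    public static void main(String[] args) {
        long seed = 1234L;
        ProblemDomain problem = new SAT(seed);        // one of the six HyFlex domains
        HyperHeuristic hh = new SimpleHyperHeuristic(seed);
        hh.setTimeLimit(60000);                       // 60000 ms per run, as in the trials
        hh.loadProblemDomain(problem);
        hh.run();
        System.out.println("Best value found: " + hh.getBestSolutionValue());
    }
}
```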

E. Trial

Algorithm testing tries to find the best solutions for the six problem domains in the HyFlex framework. The trials were conducted to find out how the algorithm that had been built performs on six problem domains, namely satisfiability (SAT), one-dimensional bin packing, permutation flow shop, personnel scheduling, the traveling salesman problem, and the vehicle routing problem.

Fig. 3. The test scenario for the methods used in the HyFlex framework.

Based on the results of testing the desired value parameter in the self-adaptive learning great deluge method, the most optimal value obtained is 10 percent of the initial solution value. The trial was conducted by comparing the median and minimum values of the execution results. As shown in Figure 2, a desired value of 10 percent gives a fairly stable number of better solutions compared to the other trial values. The value is expressed as a percentage of the initial solution value. Meanwhile, the LLH list length used in the SADGED method is adjusted to the number of LLHs contained in each problem domain.

Figure 3 shows the trial scenario for the methods: the algorithm is run by entering the required input data, and the framework then searches for solutions based on that input. Once the stopping criteria of the algorithm run are met, the framework returns the value of the best solution found during the search process.

The Simple Random Simulated Annealing and Self Adaptive Learning Great Deluge algorithms are applied to the six problem domains contained in the HyFlex framework. In each problem domain, five instances are used in the trial process, so 30 instances are used in total. Each instance is tested 31 times with a time limit of 60000 milliseconds per run. From the execution results, the best (minimum) value, first quartile, median, third quartile, maximum value, and average are calculated and used as the comparison values for each method.
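The summary statistics used in this comparison can be computed as in the short sketch below; the quartile convention (linear interpolation between closest ranks) is an assumption, since the report does not state which convention was used.

```java
import java.util.Arrays;
import java.util.Random;

// Computes the summary statistics used to compare the methods over the 31 runs
// of one instance: minimum, first quartile, median, third quartile, maximum, mean.
final class RunStatistics {

    // Quantile by linear interpolation between closest ranks (assumed convention).
    static double quantile(double[] sorted, double q) {
        double pos = q * (sorted.length - 1);
        int lo = (int) Math.floor(pos);
        int hi = (int) Math.ceil(pos);
        return sorted[lo] + (pos - lo) * (sorted[hi] - sorted[lo]);
    }

    static void summarize(double[] objectiveValues) {
        double[] v = objectiveValues.clone();
        Arrays.sort(v);
        double mean = Arrays.stream(v).average().orElse(Double.NaN);
        System.out.printf("min=%.4f q1=%.4f median=%.4f q3=%.4f max=%.4f mean=%.4f%n",
                v[0], quantile(v, 0.25), quantile(v, 0.50), quantile(v, 0.75),
                v[v.length - 1], mean);
    }

    public static void main(String[] args) {
        double[] runs = new double[31];               // dummy objective values for 31 runs
        Random rng = new Random(7);
        for (int i = 0; i < runs.length; i++) runs[i] = 100 + rng.nextDouble() * 10;
        summarize(runs);
    }
}
```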

IV. RESULTS AND DISCUSSION

The trials were conducted 31 times on the six problem domains, with five instances of each problem. Performance is measured by comparing the SADGED method with SRSA through three comparisons. First, the distribution of the execution results is compared using several statistics, namely the minimum (best fitness), first quartile, median, third quartile, and maximum of the objective function values; this test uses point-based scoring of the values and boxplot visualization of the statistics obtained. Second, the median values obtained are compared to measure the central tendency of the data. Third, the minimum values obtained from the execution results are compared.

A. Comparison of Data Distribution

Comparison of the data distribution is done by computing statistical values such as the minimum, first quartile, median, third quartile, and maximum, and by visualizing them with boxplot diagrams. For each compared value, the method whose objective function value is smaller earns one point. The maximum number of points in one problem domain is 25, while the maximum over all problem domains is 150. The method that collects more points is considered superior to the other (a small sketch of this scoring scheme is given after the per-domain results below). The point calculations and boxplot diagrams of the two methods can be summarized as follows:

• In the SAT problem domain, SADGED has better performance compared to SRSA.

• In one of the remaining problem domains, the SADGED method gets 12 points while SRSA gets 13 points; the difference between the two methods is only 1 point, in favor of SRSA.

• In the personnel scheduling problem domain, SADGED excels in three instances, namely instance 5, instance 10, and instance 11, while losing in the other two. The SADGED algorithm is 5 points ahead of SRSA, with 16 points.

• In the TSP problem domain, SADGED excels in all five instances, scoring 21 points, which is 17 points ahead of SRSA, which scores 4.

• In the VRP problem domain, SADGED excels in all five instances tested and scores the full 25 points.

Based on the point calculation, the SADGED method excels in four problem domains and finishes 56 points ahead of SRSA: over all 30 instances tested, SADGED obtains 103 points compared to only 47 points for SRSA.
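A hedged sketch of the point-scoring scheme described at the start of this subsection is shown below: for each instance, the five distribution statistics of the two methods are compared and the smaller value earns one point, giving at most 25 points per domain. The handling of ties (no point awarded) is an assumption, since the report does not specify it.

```java
// Point scoring over distribution statistics (illustrative sketch).
// statsA[i][k] and statsB[i][k] hold statistic k (min, Q1, median, Q3, max)
// of instance i for methods A and B; the smaller value earns one point,
// giving a maximum of 5 instances x 5 statistics = 25 points per domain.
final class DistributionPointScore {
    static int[] score(double[][] statsA, double[][] statsB) {
        int pointsA = 0, pointsB = 0;
        for (int i = 0; i < statsA.length; i++) {
            for (int k = 0; k < statsA[i].length; k++) {
                if (statsA[i][k] < statsB[i][k]) pointsA++;
                else if (statsB[i][k] < statsA[i][k]) pointsB++;
                // Ties award no point (assumption; the report does not specify).
            }
        }
        return new int[] { pointsA, pointsB };
    }
}
```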

Fig. 4. Comparison of the median scores of the self adaptive learning great deluge method and simple random simulated annealing.

B. Comparison of Median Values

Comparison of the median values is done to measure the central tendency of the method execution results. This second measurement uses football-style (FIFA) scoring on the median value of each instance: the algorithm that wins on an instance is given three points, a tie gives each algorithm one point, and the losing algorithm gets zero. A method is considered the overall winner if the total score difference it obtains is positive.
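This 3/1/0 scoring on the per-instance medians can be sketched as follows (a smaller median wins, assuming minimization); the tie handling follows the description above.

```java
// Football-style (3/1/0) scoring of the per-instance median values:
// the method with the smaller median wins 3 points, a tie gives both 1 point,
// and the loser gets 0 (minimization is assumed).
final class MedianFootballScore {
    static int[] score(double[] mediansA, double[] mediansB) {
        int pointsA = 0, pointsB = 0;
        for (int i = 0; i < mediansA.length; i++) {
            if (mediansA[i] < mediansB[i]) pointsA += 3;
            else if (mediansB[i] < mediansA[i]) pointsB += 3;
            else { pointsA += 1; pointsB += 1; }
        }
        return new int[] { pointsA, pointsB };
    }
}
```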
