Dynamic Acceleration Coefficient of Particle Swarm Optimization in Robotics Motion Coordination
Nyiak Tien Tang, Kit Guan Lim, Min Keng Tan, Soo Siang Yang, Kenneth Tze Kin Teo*
Modelling, Simulation & Computing Laboratory, Artificial Intelligence Research Unit Faculty of Engineering, Universiti Malaysia Sabah, Kota Kinabalu, Malaysia.
* Corresponding author: [email protected]
e-mail: [email protected]; [email protected]; [email protected]; [email protected]
Abstract - This paper focuses on improving an optimization process for swarm robots using Particle Swarm Optimization (PSO) by altering the acceleration coefficient from static to dynamic. In swarm robotics, motion coordination addresses the issue of preventing a group of robots from interfering with each other in a limited workspace, while achieving the global motion objective. PSO is commonly suggested in the literature to optimize path trajectories in the robotic field. However, the typical PSO tends to be trapped in local optima.
Therefore, a dynamic acceleration coefficient is proposed to optimize the cognitive and social coefficients of PSO in order to improve its exploration ability in seeking the global optimum solution. With this novel feature, PSO becomes less dependent on the chain of its past experience that it had explored in a certain region within the solution space. The effectiveness of the proposed method is tested on a simulated swarm robotic platform. Results show the proposed PSO with Dynamic Coefficient (DCPSO) is 1.09 seconds and 3.58 seconds faster than the typical PSO under dynamic and extreme conditions respectively.
Keywords - particle swarm optimization (pso), swarm robotics, motion coordination, global convergence.
I. INTRODUCTION
In nature, the behavior of a single particle is different from that of a group of particles. For example, a single ant does not possess much intelligence, but a colony of ants can construct a complex structure. This phenomenon can be applied to the swarm robotic optimization process. This paper focuses on improving a variant of Particle Swarm Optimization (PSO) by altering its cognitive and social parameters from static to dynamic. In previous research, approaches to improving global convergence and the particle escape mechanism are mostly algorithm-based computations, which require more computational resources. Therefore, this work proposes a mathematical approach to overcome the heavy computation issue. The effectiveness of the proposed algorithm is tested using the Swarm Robotics Motion Coordination problem [1, 2].
Initially, studies of motion coordination in robotics focused on solving the path trajectory of a single robot, allowing the robot to move from one location to another without colliding with any obstacle. However, recent studies have shifted to swarm robots, where scholars try to solve the dynamic motion coordination of a group of robots, also known as kino-dynamic planning. Due to the large number of input variables in kino-dynamic planning, determining the optimum solution with analytical methods requires higher computational power [3].
Metaheuristic optimization is suggested in the literature to provide a sufficiently good solution to an optimization problem, especially with incomplete information and limited computation capacity [4, 5]. It is used for models that are dynamic and for which a complete analytical model is difficult to define.
PSO mimics the social behavior of bird flocking patterns and performs well in scheduling optimization problems [6]. Ant Colony Optimization (ACO), Genetic Algorithm (GA) and PSO are widely employed to solve many problems faced in swarm robotics [7]. ACO is good at dynamic optimization as it mimics the ants' food-gathering process through pheromone trails [8, 9]. GA is suitable for discrete and combinatorial problems since it mimics the evolution theory of "survival of the fittest" [10, 11]. In this paper, motion coordination is treated as a scheduling optimization problem, where the schedule of each robot is crucial to the result.
In the literature, PSO is commonly chosen as the algorithm to optimize motion coordination in swarm robotics. According to [12-14], the first step in PSO is to generate particles uniformly throughout the dimensions of the optimization field. The position of the robots is one of the optimizing parameters, where each parameter contributes one dimension to the position of the particle. Then, particles move towards the best-positioned particle, and the optimum point is obtained. The stopping criterion for PSO is met when the global best fitness exceeds a pre-set minimum fitness value. PSO has excellent performance in static motion coordination, where the solution can be determined quickly. Yet, it was found that in a large search-space dimension such as dynamic motion coordination, PSO tends to be easily trapped in local optima and shows slow fitness improvement [15-17].
This paper is organized as follows: Section II reviews the swarm robotics, motion coordination, and PSO optimization technique. Section III explains the framework of swarm robotics and the alteration of acceleration coefficient function in calculating the next iteration velocity of each particle. Section IV shows the simulation model and approach of getting motion kinematics and dynamics. Section V presents the experimental result of swarm motion coordination. Lastly, Section VI concludes the performance of Dynamic Coefficient PSO (DCPSO) algorithm.
II. SWARM ROBOTICS FRAMEWORK
From the review in [18], swarm robotics is inspired by social insects to coordinate and control a system with large quantities of simple robots. Advantages of swarm robotics include the cooperative nature that can achieve a common goal. Overall, swarm robotics research consists of signal interfacing, control approaches, mapping and localization, object transportation and manipulation, and motion coordination.
Figure 1 shows the architecture of the swarm robotics framework used in this research.
Figure 1. Architecture of swarm robotics framework.
A. Signal Interface
Signal interfacing among robots is important when a specific task requires a certain form of cooperation. In the past, there were debates on the level of signal interface that should be used among swarm robots [19]. From past literature reviews, two categories of signal interface can be distinguished: implicit (indirect) and direct signal interfaces [20, 21].
B. Control Approach
Iocchi concluded in [22] that there are two main types of control approaches: centralized control and distributed control. In centralized control, the system has a leader which distributes work to the members of the system. Decisions are made by the leader, while the members' only job is to follow the instructions decided by the leader. In distributed control, the system consists of fully autonomous robots, where decisions are made within each robot. This type of control does not require a leader [23, 24].
In 1993, Parker [25] studied the pros and cons of decentralized control. The conclusion showed that for swarm robotics to achieve the desired emergent group behavior, the control approach should balance centralized and distributed control.
C. Mapping and Localization
In swarm robotics, it is important to represent the physical environments surrounding the agents. Mapping allows data collected by swarm robots’ on-board sensors to be mapped into spatial models [26]. Besides mapping, it is important to know the real-time location of the agents, where the localization technique is used to keep track of the exact or absolute location of an agent within the generated spatial data. Mapping can be categorized into two main approaches, namely topological mapping and geometric mapping [27].
Topological mapping utilizes the encoding of the surrounding structural characteristics to construct the map
[28, 29]. Normally, topological mapping encodes the data collected by the surrounding swarm robots in vector form, where each point represents a distinctive target [30, 31]. On the other hand, geometric mapping encodes the robots' surroundings with more detailed data, which can be used to represent a floor plan of the map.
D. Motion Coordination
In the past two decades, motion coordination, or path-planning, has been one of the hot topics in swarm robotics research. Most studies focus on providing an optimum path for a given robot according to the environment description, where obstacles must be avoided while the robot simultaneously achieves the highest possible fitness [32]. Path-planning is normally categorized into global and local path-planning [33].
In a local path-planning situation, the robot normally plans the path by analyzing data collected through an onboard transducer, where the result of such analysis allows the robot to avoid obstacles. On the other hand, global path-planning requires a precisely defined model of the surroundings, so that a successful plan can be computed based on the model [34].
Path-planning can also be divided into static-environment and dynamic-environment path-planning. A static-environment path-planning workspace comprises stationary obstacles, whose shapes are known and fixed [35]. In dynamic-environment path-planning, obstacles can be either stationary or moving. In fact, environment disturbances can also be a mixture of static and dynamic.
III. PSO ON MOTION COORDINATION
The metaheuristic optimization method PSO was first proposed in [3]. This optimization method is able to handle several optimizing variables in different dimensions. It is based on social cooperation among particles, where each particle moves towards a new position with a given velocity. The particle moves inside the search space, where the dimensions are the optimizing parameters, while the positions determine the fitness value.
A. Principle of Particle Swarm Optimization
The position of each particle, xi[n], holds the information of the optimizing parameters, where each dimension describes one optimizing parameter. The subscript i indicates the ith particle, where i ∈ {1, …, imax} and imax is the population size. n is the iteration number; in each iteration the position of the particle may change depending on the situation. Each particle has a velocity vi[n] which allows the position xi[n] to move throughout the optimizing plane. The movement of the particle is described in (1) [5].
xi[n+1] = xi[n] + vi[n+1] (1)
Initially, the velocity of each particle is randomly distributed. After the first iteration, the velocity changes in each iteration according to the personal best position, the global best position and the inertia factor. The velocity update can be expressed as (2) [5].
vi[n+1] = ω vi[n] + c1 (Pi[n] − xi[n]) + c2 (G[n] − xi[n]) (2)

where ω is the inertia coefficient, c1 is the cognitive component acceleration coefficient, c2 is the social component acceleration coefficient, Pi[n] is the personal best position and G[n] is the global best position. In order for the swarm to converge, the inertia coefficient ω, which controls the previous velocity term, has to be reduced in each iteration. A damping factor is added to the model to make sure the particles converge towards the global best position. Hence, the inertia coefficient is expressed in (3) [6].
ω[n] = ω0 μ^n (3)
where n is the number of iterations, ω0 is the initial inertia coefficient and μ is the damping factor. The value of μ is normally set between 0.95 and 0.99.
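The update rules (1)-(3) can be sketched as a minimal PSO loop. This is an illustrative implementation, not the authors' code: the standard formulation multiplies the cognitive and social terms by uniform random numbers (done here), and the parameter values are assumptions, not the paper's tuned settings.

```python
import random

def pso(fitness, dim, n_particles=30, iters=100,
        w0=0.9, mu=0.97, c1=2.0, c2=2.0, bounds=(-5.0, 5.0)):
    """Maximize a user-supplied fitness(x) over a dim-dimensional box."""
    lo, hi = bounds
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[random.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(n_particles)]
    p = [xi[:] for xi in x]                 # personal best positions Pi[n]
    pf = [fitness(xi) for xi in x]          # personal best fitness values
    g = max(range(n_particles), key=lambda i: pf[i])
    gbest, gfit = p[g][:], pf[g]            # global best G[n]
    w = w0
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                v[i][d] = (w * v[i][d]                                    # inertia
                           + c1 * random.random() * (p[i][d] - x[i][d])   # cognitive
                           + c2 * random.random() * (gbest[d] - x[i][d])) # social, eq. (2)
                x[i][d] += v[i][d]          # position update, eq. (1)
            f = fitness(x[i])
            if f > pf[i]:
                pf[i], p[i] = f, x[i][:]
                if f > gfit:
                    gfit, gbest = f, x[i][:]
        w *= mu                             # damped inertia, eq. (3)
    return gbest, gfit
```

On a smooth test function the swarm typically clusters tightly around the optimum well before the inertia has fully decayed.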
B. Fitness Function
The fitness of each particle is determined by the fitness function, which in this research is the simulation of robot motion. The goal of the optimization is to determine the shortest distance while the robot completes the given task. Hence, the fitness function is defined in (4), (5) and (6).
(dT, rT, ∆r) = F(xi[n]) (4)

γ = ε1 (dT / rT) + ε2 ∆r (5)

σ = 1 / γ^η (6)

where F(xi[n]) is the simulation whose input is the optimizing parameters, i.e. the position of the particle. The outputs of the simulation are the total distance travelled by the swarm robots dT, the total assigned linear distance ignoring obstacles rT, and the distance between each robot and its assigned task ∆r. The cost index γ is calculated through (5).
ε1 is the primary cost weightage and ε2 is the secondary cost weightage, both of which can be adjusted. The fitness index σ is the reciprocal of γ^η, where η is the acceleration factor of the fitness function.
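A hedged sketch of the fitness evaluation in (4)-(6): the simulation F is assumed to return (d_total, r_total, delta_r), and the weightages eps1, eps2 and the accelerate factor eta are illustrative values, not the authors' settings.

```python
def fitness_index(d_total, r_total, delta_r, eps1=1.0, eps2=0.5, eta=2.0):
    # cost index, eq. (5): excess travel ratio plus remaining task distance
    gamma = eps1 * (d_total / r_total) + eps2 * delta_r
    # fitness index, eq. (6): reciprocal of the accelerated cost
    return 1.0 / gamma ** eta
```

Lower cost (shorter travel, closer to the assigned task) maps to a higher fitness index, which the PSO loop maximizes.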
C. Dynamic Acceleration Coefficient Derivation

The flaw of conventional PSO is that it tends to be trapped in a local optimum if the optimization plane contains several local optimal points. If one of the particles enters a local optimum, all the other particles are easily attracted towards the wrong search area. Hence, there must be an escape mechanism for particles to jump out of local optima. The solution is to allow the social acceleration component to be dynamic. If the global best position G[n] does not change for a certain number of iterations, this indicates that the search area surrounding the global best position contains no better fitness index. Hence, the other particles should start to search in another part of the search space.
The social component acceleration coefficient c2 should be a function of iteration, decreasing as the global best position remains unchanged. Hence, the model of the social component acceleration coefficient should be inversely proportional to the number of iterations since the last change in global best position, nG. Besides, if the global best position has an overall low fitness index, the particles should not be attracted towards it. Therefore, the social component acceleration coefficient should be proportional to the change in global best fitness, ∆σG.

Besides weakening the attraction of particles towards the global best position, the personal best position should be strengthened if several iterations have passed and the global fitness has yet to improve. Hence, the cognitive component acceleration coefficient c1 should be proportional to nG. Another benefit of strengthening the cognitive component is to prevent particles from stopping due to the weakened social component velocity and the damped previous velocity. Combining all the considerations above, the social and cognitive acceleration coefficients are expressed in (7), (8) and (9).
∆σG = σG[n] − σG[n − nG] (7)

c2′ = c2 (ρA1 ∆σG + ρA2 / nG^γ′) (8)

c1′ = c1 ρB nG^γ′ (9)
where nG is the number of iterations since the last change of the global best position, γ′ is the acceleration factor of the damping term, and c1′ and c2′ are the modified cognitive and social component acceleration coefficients respectively. ∆σG is the change in global best fitness, ρA1 is the weight factor of the ∆σG component, ρA2 is the weight factor of the nG component in the social acceleration coefficient, and ρB is the weight factor of nG in the cognitive acceleration coefficient. γ′ directly influences the power of nG, where a larger γ′ value causes c2′ to decrease more sharply in each iteration while increasing c1′ at the same time. A greater ∆σG indicates that the newly found best position has a greater chance of increasing the global fitness. Hence, increasing ∆σG will increase c2′ and attract more particles towards that region.
IV. SIMULATION MODELLING
In this section, the motion of a single robot within the swarm is modelled using the black-box modelling method. A unit step is applied as input to each motor and the encoder count output is recorded. The result is then used for modelling via the regression method.
The proportional gain function for the left wheel is estimated as (10), while that for the right wheel is estimated as (11).

fl(v) = 0.1534 v + 0.1078 (10)

fr(v) = 1.2031 v + 0.1169 (11)
For basic feedback control, the input of the system is shown in (12), where f(v) is the respective duty-cycle function and vd is the desired velocity.

x[n] = f(vd + (vd − vout[n])) (12)

where x[n] is the input duty cycle and vout[n] is the feedback velocity determined by the infrared (IR) encoder. Figure 2 shows the relationship between motor speed and duty cycle, whereas Figure 3 illustrates the unit step response of the motor.
Figure 2. Graph of average motor speed with respective duty cycle.
Figure 3. Unit step response of the motor given 10 cm s-1 desired input through basic feedback control.
Due to fluctuation in velocity, the angle of trajectory will differ because of the relative velocity of the left and right wheels. Hence, a position deviation is induced in which the robot moves along a deviated angle of trajectory at the average speed of both wheels. As the relative speed difference is the main component that induces the angle, the angle deviation model is shown in (13).
αd[k] = 2 sin⁻¹( (vr[k] − vl[k]) ∆t / (2L) ) (13)

where vl[k] and vr[k] are the left and right wheel speeds at sample k respectively, ∆t is the sampling period, L is the distance between the wheels, and αd[k] is the angle deviation calculated from (13) at sample k. The position deviation response is calculated and shown in Figure 4.
Figure 4. Position deviation response due to angle deviation induced by the speed difference between both wheels.
As shown in Figure 4, the robot's position deviates 40 cm to the right while moving towards its destination when a step input velocity of 10 cm s-1 is applied. This downside can be improved by implementing a better control method.
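The angle-deviation model (13) for the differential-drive robot can be sketched as follows; the wheelbase (distance between the wheels, in cm here) and the sample period are assumed values, not the actual robot's dimensions.

```python
import math

def angle_deviation(v_left, v_right, wheelbase=10.0, dt=0.02):
    """Heading change per sample (rad) induced by the wheel-speed difference.

    v_left, v_right are wheel speeds (cm/s); wheelbase is in cm, dt in s,
    so the asin argument is dimensionless.
    """
    return 2.0 * math.asin((v_right - v_left) * dt / (2.0 * wheelbase))
```

Equal wheel speeds give zero deviation; a faster right wheel turns the robot left (positive angle) under this sign convention.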
A simple simulation is computed in order to compare the position deviation of the simulated motion against an actual motion experiment. A simple moving obstacle is introduced to test the different positions. The simulated position and obstacle layout are shown in Figure 5. Using the experimental parameters shown in Figure 5, the simulation feeds the real-time step input to the implementation test. The left and right motors of the implemented robot move in response to the signal provided by the simulation. The speed of each wheel is then recorded by the infrared (IR) encoder and the position of the robot is calculated through (13). Figure 6 shows the differences between the simulated and actual motion of the robot.
Figure 6. Comparison between simulated and actual motion of robot.
Finally, the models defined previously are put to the test in this section. The feasibility of the simulation depends on how closely the simulated motion matches the actual motion of the robot, which can be determined through the likeness between the simulated motion path and the actual motion path, as shown in (14).
tx = |x̄1 − x̄2| / √( s1²/n1 + s2²/n2 ) (14)

where x̄1 and x̄2 are the mean positions of the simulated and actual paths, s1² and s2² are their sample variances, and n1 and n2 are their sample sizes.
The t-value is also defined as the signal-to-noise ratio. From the result computed in (14), tx is 9.458. Since the signal is much larger than the noise, the noise can be neglected given the large t-value in this case.
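The likeness measure (14) is a two-sample t-style statistic, which can be sketched directly; interpreting its terms as means, sample variances and sample sizes of the two paths is an assumption based on the surrounding text.

```python
import math

def t_value(sim, act):
    """Two-sample t statistic comparing simulated and actual path samples."""
    n1, n2 = len(sim), len(act)
    m1, m2 = sum(sim) / n1, sum(act) / n2
    s1 = sum((s - m1) ** 2 for s in sim) / (n1 - 1)  # sample variance s1^2
    s2 = sum((a - m2) ** 2 for a in act) / (n2 - 1)  # sample variance s2^2
    return abs(m1 - m2) / math.sqrt(s1 / n1 + s2 / n2)
```

Identical paths yield tx = 0, while a large tx (such as the reported 9.458) means the mean difference dominates the sampling noise.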
V. COMPUTATION OF MOTION COORDINATION

Without an optimization algorithm, the whole motion coordination will not function. Therefore, the parameters affecting PSO convergence speed, rate of success and fitness index are discussed in this section.
Figure 5. Comparison of simulation and implementation.
PSO uses six optimizing variables to perform the kino-dynamic planning. The variables include sight angle, sight distance, linear speed without obstacle, linear speed with obstacle, rotational speed without obstacle and rotational speed with obstacle. The fitness is calculated through the simulation and the fitness function shown in (4), (5) and (6). The focus of this study is to determine the improvement in global fitness convergence when the acceleration coefficients are changed from static to dynamic, as mentioned in Section I.
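A particle for this problem can be sketched as a six-dimensional parameter vector; the bounds and units below are illustrative assumptions, not the paper's actual search ranges.

```python
import random

# One dimension per optimizing variable; bounds are hypothetical.
PARAM_BOUNDS = {
    "sight_angle_deg":       (0.0, 180.0),
    "sight_distance_cm":     (0.0, 100.0),
    "linear_speed_free":     (0.0, 15.0),   # cm/s, no obstacle
    "linear_speed_obstacle": (0.0, 10.0),   # cm/s, with obstacle
    "rot_speed_free":        (0.0, 90.0),   # deg/s, no obstacle
    "rot_speed_obstacle":    (0.0, 45.0),   # deg/s, with obstacle
}

def random_particle():
    # uniform initialization across each of the six dimensions
    return {k: random.uniform(lo, hi) for k, (lo, hi) in PARAM_BOUNDS.items()}
```

Each such vector is fed to the simulation F to obtain its fitness, exactly as a position xi[n] would be in the PSO loop.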
This section analyzes the performance of PSO and Dynamic Coefficient PSO (DCPSO) by employing both methods to perform motion coordination in three different scenarios.
Under nominal conditions, DCPSO and conventional PSO are employed to perform swarm robotics motion coordination. The gradient descent method is also employed as a control condition because it has been widely used in simple multi-dimensional optimization problems.
A. Performance Distribution in Nominal Condition

In the nominal condition, there are six static obstacles and two swarm robots. All three methods described above are employed to compute the optimal motion so that both swarm robots are able to move from their initial positions to their destinations. Note that there is a large number of different routes for the robots to complete the assigned task; hence, points satisfying the minimal fitness are abundant.
The experiment consists of both static-coefficient and dynamic-coefficient PSO with a population of 50 and the gradient descent method with random initial positions. Each method performs motion coordination on the map 3,000 times. The distribution of optimization time for each method is recorded and plotted in Figure 7, which shows the optimization times of DCPSO, static-coefficient PSO and the gradient descent method, the conventional baseline. The vertical axis is the frequency of the optimization-time distribution, while the horizontal axis is the optimization time required. Distribution regression is performed to obtain a smoother curve, as shown in Figure 7, where the lines are the regressions of the experimental results for all three methods. PSO and DCPSO used log-logistic distribution regression while the gradient descent method used normal distribution regression. The results of the regression are shown in Table I.
TABLE I: DISTRIBUTION REGRESSION IN NOMINAL CONDITION

Statistical Variable               | DCPSO        | PSO          | Gradient Descent
Type of Distribution               | Log-Logistic | Log-Logistic | Normal
Expected Optimization Time         | 4.60 s       | 4.85 s       | 6.35 s
Variance                           | 59.33 s2     | 35.62 s2     | 6.14 s2
Standard Deviation                 | 7.70 s       | 5.97 s       | 7.82 s
Estimated Mean Error in Regression | 1.18 %       | 1.30 %       | 2.78 %
Estimated Deviation Error          | 0.44 %       | 0.40 %       | 0.082 %
Figure 7. Distribution of optimization time for PSO, DCPSO and gradient descent methods under the nominal scenario.
Results in Table I are obtained by feeding the data from Figure 7 into the Matlab regression tool to perform the regression numerically. As shown, the estimated error for the expected optimization time is less than 3.0 %, and the estimated error for the standard deviation is less than 0.5 %.
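The summary statistics reported in the tables (expected time, variance, standard deviation) can be derived from the recorded optimization times as sketched below; the times list in the test is synthetic, not the experimental data.

```python
import math
import statistics

def summarize(times):
    """Summary statistics over a list of recorded optimization times (s)."""
    mean = statistics.mean(times)          # expected optimization time
    var = statistics.variance(times)       # sample variance, s^2
    return {"expected_time": mean, "variance": var, "std_dev": math.sqrt(var)}
```

In the paper the distribution shape itself is then fitted (log-logistic for PSO/DCPSO, normal for gradient descent), which is a separate regression step.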
B. Performance Distribution in Dynamic Condition

In the dynamic condition, there are six dynamic obstacles and three swarm robots. Each obstacle patrols between two fixed points. Each robot has two cascaded destinations in the simulation map. All three methods described above are employed to compute the optimal motion so that all swarm robots can move from their initial positions to their destinations. There are fewer different routes for the robots to complete the assigned task in the dynamic scenario compared to the nominal condition; hence, the optimal points satisfying the minimal fitness are harder to obtain.
A similar experiment to Part A is conducted and the results are shown in Figure 8. The rapid increase in frequency in Figure 8 occurs because optimization times shorter than 6.80 seconds are very difficult to achieve: at short optimization times the frequency is very low, since it is very rare for PSO and DCPSO to complete the motion coordination so quickly. As the optimization time increases, the chances for PSO and DCPSO to finish increase rapidly, hence the rapid initial increase in frequency in Figure 8. As the optimization time increases further, the frequency decays, as the chance of requiring such a long time decreases rapidly, although occasionally PSO or DCPSO does require a longer time to complete the motion coordination.
Distribution regression is performed to obtain a smoother curve for the distribution, as shown in Figure 8. The lines are the regressions of the experimental results for all three methods. PSO and DCPSO used log-logistic distribution regression while the gradient descent method used normal distribution regression. The results of the regression are shown in Table II.
TABLE II: DISTRIBUTION REGRESSION IN DYNAMIC SCENARIO

Statistical Variable               | DCPSO        | PSO          | Gradient Descent
Type of Distribution               | Log-Logistic | Log-Logistic | Normal
Expected Optimization Time         | 8.71 s       | 9.80 s       | 55.36 s
Variance                           | 29.01 s2     | 32.68 s2     | 22.57 s2
Standard Deviation                 | 5.39 s       | 5.72 s       | 4.75 s
Estimated Mean Error in Regression | 2.03 %       | 2.16 %       | 4.01 %
Estimated Deviation Error          | 0.28 %       | 0.27 %       | 4.01 %
In the dynamic condition, the difference in expected optimization time between DCPSO and PSO is large. The mean optimization time of DCPSO is 1.09 seconds faster than that of PSO. The gradient descent method has the highest expected optimization time at 55.36 seconds. PSO has the highest standard deviation. Unlike the nominal condition, DCPSO in the dynamic scenario shows a lower deviation compared to PSO. This indicates that DCPSO is not only faster than PSO but also has optimization times more concentrated around the expected time.
Figure 8. Distribution of optimization time for PSO, DCPSO and gradient descent methods under the dynamic scenario.
C. Performance Distribution in Extreme Condition

In the extreme condition, there are six static obstacles, ten dynamic obstacles and two swarm robots. Each dynamic obstacle patrols between two fixed points at a different velocity. Each robot has two cascaded destinations. Both methods described above are employed to compute the optimal motion so that both swarm robots are able to move from their initial positions to their destinations. There is a significantly smaller number of different routes for the robots to complete the assigned task compared to the nominal condition, because the robots must avoid static obstacles and dodge moving obstacles at the same time. Hence, the optimal points satisfying the minimal fitness are significantly harder to obtain.
Since there are fewer optimal points satisfying the minimum fitness value, the ability to escape from local optima is more vital than in the previous two conditions, because there is a significantly lower chance of obtaining a motion coordination solution. Therefore, the hypothesis that DCPSO will perform much better than static-coefficient PSO is expected to hold here. A similar experiment to Part A is conducted and the results are shown in Figure 9.
The rapid increase in frequency in Figure 9 occurs because optimization times shorter than 17.60 seconds are very difficult to achieve. Only one rare case, at 1.00 seconds, is considered an outlier for DCPSO. At short optimization times the frequency is very low, because it is very rare for PSO and DCPSO to complete the motion coordination so quickly. As the optimization time increases, the chances for PSO and DCPSO to finish increase rapidly, hence the rapid initial increase in frequency in Figure 9. As the optimization time increases further, the frequency decays, as the chance of requiring such a long time decreases rapidly. Although rare, there is a frequency change in DCPSO at 71.2 seconds.
TABLE III: DISTRIBUTION REGRESSION IN EXTREME SCENARIO

Statistical Variable               | DCPSO        | PSO
Type of Distribution               | Log-Logistic | Log-Logistic
Expected Optimization Time         | 16.84 s      | 20.42 s
Variance                           | 7.28 s2      | 12.08 s2
Standard Deviation                 | 2.70 s       | 3.48 s
Estimated Mean Error in Regression | 2.81 %       | 3.00 %
Estimated Deviation Error          | 8.70 %       | 9.23 %
In the extreme condition, as shown in Table III, the difference in expected optimization time between DCPSO and PSO is significantly large. The mean optimization time of DCPSO is 3.58 seconds faster than that of PSO. For standard deviation, DCPSO shows a lower deviation compared to PSO. This indicates that DCPSO is not only faster than PSO but also has optimization times more concentrated around the expected time.
Figure 9. Distribution of optimization time for PSO and DCPSO under extreme scenario.
VI. CONCLUSION
Based on the findings and contributions of this paper, this work provides a novel study of a different set-up of swarm robotic motion coordination compared to conventional methods. In expected optimization time, DCPSO leads PSO by 0.26 seconds under the nominal condition and by 1.09 seconds in the dynamic condition. Under the extreme condition, the difference becomes more significant, with DCPSO leading by 3.58 seconds.
ACKNOWLEDGMENT
The authors would like to acknowledge Ministry of Education Malaysia (KPM) for supporting this research under Fundamental Research Grant Scheme (FRGS), grant no. FRG0365-ICT-1/2014.
REFERENCES
[1] M. Alshibli, A. ElSayed, E. Kongar, T. Sobh, and S.M. Gupta, "A robust robotic disassembly sequence design using orthogonal arrays and task allocation," Robotics, vol. 8, no. 1, pp. 1-13, 2019, doi: 10.3390/robotics8010020.
[2] M.A. Aram, A.R. Tarik, and A.M.S. Soran, "Cat swarm optimization algorithm: A survey and performance evaluation," Computational Intelligence and Neuroscience, vol. 2020, pp. 1-21, 2020, doi: 10.1155/2020/4854895.
[3] J. Schwarzrock, I. Zacarias, A.L.C. Bazzan, R.Q. de Araujo Fernandes, L.H. Moreira, and E.P.D. Freitas, “Solving task allocation problem in multi unmanned aerial vehicles systems using swarm intelligence,” Engineering Applications of Artificial Intelligence, vol. 72, pp. 10–20, 2018, doi: 10.1016/j.engappai.2018.03.008.
[4] L. Jin, S. Li, H.M. La, X. Zhang, and B. Hu, “Dynamic task allocation in multi-robot coordination for moving target tracking: A distributed approach,” Automatica, vol. 100, pp. 75–81, 2019, doi: 10.1016/j.automatica.2018.11.001.
[5] S.C.K. Lye, H.T. Yew, B.L. Chua, R.K.Y. Chin, and K.T.K. Teo,
“Particle swarm optimization based resource allocation in orthogonal frequency-division multiplexing,” 7th Asia Modelling Symposium, pp. 303-308, 2013, doi: 10.1109/AMS.2013.52.
[6] J. Kennedy, and R. Mendes, "Population structure and particle swarm performance," Congress on Evolutionary Computation, 2002, doi: 10.1109/CEC.2002.1004493.
[7] J.W. Lee, N.T. Tang, K.G. Lim, M.K. Tan, B. Yang, and K.T.K. Teo,
“Enhancement of ant colony optimization in multi-robot source seeking coordination,” IEEE 7th Conference on Systems, Process and Control, pp. 1-6, 2019, doi: 10.1109/ICSPC47137.2019.9068065.
[8] J. Branke and M. Guntsch, “Solving the probabilistic TSP with ant colony optimization,” Journal of Mathematical Modelling and Algorithms, vol. 3, no. 4, pp. 403-425, 2005, doi: 10.1007/ s10852-005-2585-z.
[9] M. Brand, M. Masuda, N. Wehner, and X.H. Yu, “Ant colony optimization algorithm for robot path planning,” International Conference on Computer Design and Application, pp. 436-440, 2010, doi: 10.1109/ICCDA.2010.5541300.
[10] S. Thrun, “Robotic Mapping: A Survey,” Pittsburgh: Carnegie Mellon University, 2002.
[11] K.T.K. Teo, R.K.Y. Chin, S.E. Tan, C.H. Lee and K.G. Lim,
“Performance analysis of enhanced genetic algorithm based network coding in wireless networks,” 8th Asia Modelling Symposium, pp. 213-218, 2014, doi: 10.1109/AMS.2014.49.
[12] M. Elbanhawi, M. Simic, and R. Jazar, “Autonomous robots path planning: An adaptive roadmap approach,” Applied Mechanics and Materials, vols. 373–375, pp. 246–254, 2013, doi: 10.4028/www.scientific.net/AMM.373-375.246.
[13] S.H. Chia, K.L. Su, J.H. Guo, and C.Y. Chung, “Ant colony based mobile robot path planning,” 4th International Conference on Genetic and Evolutionary Computing, pp. 210-213, 2010, doi: 10.1109/ICGEC.2010.59.
[14] R. Rashid, N. Perumal, I. Elamvazuthi, M.K. Tageldeen, M.K.A.A.
Khan, and S. Parasuraman, “Mobile robot path planning using ant colony optimization,” 2nd IEEE International Symposium on Robotics and Manufacturing Automation, pp. 1-6, 2016, doi: 10.1109/ROMA.2016.7847836.
[15] M. Kassawat, E. Cervera, and A.P. Pobil, “Multi-robot user interface for cooperative transportation tasks,” Bioinspired Systems and Biomedical Applications to Machine Learning, 2019, doi: 10.1007/978-3-030-19651-6_8
[16] J. Eaton, S. Yang, and M. Gongora, “Ant colony optimization for simulated dynamic multi-objective railway junction rescheduling,” IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 11, pp. 2980-2992, 2017, doi: 10.1109/TITS.2017.2665042.
[17] A. Mairaj, A.I. Baz, and A.Y. Javaid, “Application specific drone simulators: Recent advances and challenges,” Simulation Modelling Practice and Theory, vol. 94, pp. 100-117, 2019, doi: 10.1016/j.simpat.2019.01.004.
[18] L. Bayındir, “A review of swarm robotics tasks,” Neurocomputing, vol. 172, pp. 292-321, 2016, doi: 10.1016/j.neucom.2015.05.116.
[19] S.M. Akandwanaho, A.O. Adewumi, and A.A. Adebiyi, “Solving dynamic traveling salesman problem using dynamic gaussian process regression,” Journal of Applied Mathematics, vol. 2014, pp.1-10, 2014, doi: 10.1155/2014/818529.
[20] E. Sahin, T.H. Labella, V. Trianni, J.L. Deneubourg, P. Rasse, D. Floreano, L. Gambardella, F. Mondada, S. Nolfi, and M. Dorigo, “Swarm-bot: Pattern formation in a swarm of self-assembling mobile robots,” IEEE International Conference on Systems, Man and Cybernetics, pp. 1-6, 2002, doi: 10.1109/ICSMC.2002.1173529.
[21] L. Iocchi, D. Nardi, and M. Salerno, “Reactivity and deliberation: A survey on multi-robot systems,” Balancing Reactivity and Social Deliberation in Multi-Agent Systems, pp. 9-32, 2001, doi: 10.1007/3-540-44568-4_2.
[22] S.C.K. Lye, M.S. Arifianto, H.T. Yew, C.F. Liau, and K.T.K. Teo, “Performance of signal-to-noise ratio estimator with adaptive modulation,” 6th Asia Modeling Symposium, pp. 215-219, 2012, doi: 10.1109/AMS.2012.40.
[23] G.P. Sun, H.W. Guo, D. Wang, and J.H. Wang, “A dynamic ant colony optimization algorithm for the ad hoc network routing,” 4th International Conference on Genetic and Evolutionary Computing, pp. 358-361, 2010, doi: 10.1109/ICGEC.2010.95.
[24] L. Lu, S. Huang, and W. Gu, “A dynamic ant colony optimization for load balancing in MRN/MLN,” Asia Communications and Photonics Conference and Exhibition, pp. 1-6, 2011, doi: 10.1117/12.904391.
[25] L.E. Parker, “ALLIANCE: An architecture for fault tolerant multi-robot cooperation,” IEEE Transactions on Robotics and Automation, vol. 14, no. 2, pp. 220-240, 1998, doi: 10.1109/70.681242.
[26] M. Yoshikawa, and T. Miyachi, “Selective ant colony optimization considering static evaluation,” 2nd International Conference on Computer and Automation Engineering, pp. 435-440, 2010, doi: 10.1109/ICCAE.2010.5451842.
[27] R. Gross, F. Mondada, and M. Dorigo, “Transport of an object by six pre-attached robots interacting via physical links,” IEEE International Conference on Robotics and Automation, pp. 1317-1323, 2006, doi: 10.1109/ROBOT.2006.1641891.
[28] R. Miletitch, A. Reina, M. Dorigo, and V. Trianni, “Emergent naming of resources in a foraging robot swarm,” Computer Science – Robotics, pp. 1-21, 2019.
[29] A.V. Nazarova, and M. Zhai, “Distributed solution of problems in multi agent robotic systems,” Smart Electromechanical Systems, pp. 107-124, 2018, doi: 10.1007/978-3-319-99759-9_9.
[30] X. Xu, R. Li, Z. Zhao, and H. Zhang, “Stigmergic independent reinforcement learning for multi-agent collaboration,” Computer Science – Artificial Intelligence, pp. 1-13, 2019.
[31] S.S. Ray, S. Bandyopadhyay, and S.K. Pal, “Genetic operators for combinatorial optimization in TSP and microarray gene ordering,” Applied Intelligence, vol. 26, no. 3, pp. 183-195, 2007, doi: 10.1007/s10489-006-0018-y.
[32] C. Li, Y. Li, F. Zhou, and C. Fan, “Multi-objective path planning for the mobile robot,” IEEE International Conference on Automation and Logistics, pp. 2248-2252, 2007, doi: 10.1109/ICAL.2007.4338950.
[33] Z. Yilmaz, and L. Bayindir, “Simulation of lidar-based robot detection task using ROS and Gazebo,” European Journal of Science and Technology, pp. 375-388, 2020, doi: 10.31590/ejosat.638342.
[34] C. Li, M. Yang, and L. Kang, “A new approach to solving dynamic traveling salesman problems,” Simulated Evolution and Learning, vol. 4247, pp. 236-243, 2006, doi: 10.1007/11903697_31.
[35] A.L. Alfeo, M.G.C.A. Cimino, S. Egidi, B. Lepri, A. Pentland, and G. Vaglini, “Stigmergy-based modeling to discover urban activity patterns from positioning data,” Social, Cultural, and Behavioural Modeling, pp. 292-301, 2017, doi: 10.1007/978-3-319-60240-0_35.