
Local-to-global in multi-agent systems


4.1 An example of a wall construction problem when there are no constraints on …
4.2 An example of a wall construction problem when the height of the wall is …

Multi-agent systems

Mobile agents are free to move around the area and may communicate with different agents at different times. This means that communication links between static agents are fixed, whereas they change over time when the system consists of mobile agents.

Agents as distributed systems

We study the performance of the multi-agent system in the presence of both static and mobile agents. By combining the safety and progress properties, we can then prove that the algorithm eventually produces the correct solution.

Agents as dynamical systems

Modeling the environment around the system

Modeling multi-agent systems

Internal agent state

Communication graph

Environmental attacks can change the structure of the communication graph by removing vertices and/or edges; as a result, both the degree of the agents and the connectivity of the graph may change.

Global state of the system

A connected component of a graph is a maximal set of nodes such that there is a path between any two nodes of the set. Agents belonging to the same connected component may, after some time, end up in different connected components and can then no longer exchange data.
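As a small illustration of this definition (not taken from the thesis), the following sketch computes the connected components of an undirected communication graph with a breadth-first search; the adjacency-list representation is an assumption made for the example.

```python
from collections import deque

def connected_components(adjacency):
    """adjacency: dict mapping each node to the set of its neighbors."""
    unvisited = set(adjacency)
    components = []
    while unvisited:
        start = unvisited.pop()
        component = {start}
        queue = deque([start])
        while queue:                      # standard BFS over one component
            node = queue.popleft()
            for neighbor in adjacency[node]:
                if neighbor in unvisited:
                    unvisited.remove(neighbor)
                    component.add(neighbor)
                    queue.append(neighbor)
        components.append(component)
    return components

# Two components: {0, 1, 2} and {3, 4} -- agents 3 and 4 cannot reach agent 0.
print(connected_components({0: {1}, 1: {0, 2}, 2: {1}, 3: {4}, 4: {3}}))
```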

Interdisciplinarity of multi-agent systems

Habitat and environmental monitoring: examples from engineering

Agents can communicate with other agents: two agents can exchange data if there exists a path on the communication graph that starts at one of them and ends at the other. Examples of environmental sensors are the UC Berkeley Motes that communicate with the Mica Weather Board shown in Figure 1.3.

RoboFlag and RoboCup: examples from robotics

As shown in Figure 1.4, the game is played by two teams of robots, red and blue, each of which must defend its own flag while trying to capture the other team's flag.

Figure 1.2: An example of distributed system architecture where agents are distributed across a field

Ant colony and flocks of birds: examples from biology

The objective of each team is to locate the opponent's flag, capture it, and return to its home base (with the flag), while defending its own flag. Eventually, one of the ants discovers a food source and deposits pheromone along the path from the food source to the anthill (see Figure 1.5(B)).

Figure 1.5: Some snapshots from a food-hunting ant simulation taken from the StarLogo project.

Max-min optimization problems

Consensus problems: average-type problems

The goal of the system is to compute the weighted average of the agents' values in a distributed manner; the right part of the figure shows the solution of the problem for the values Vi with respect to the utility functions.

Figure 1.6: An instance of the problem of finding the average of a set of values. The states of the agents are the values stored inside the nodes and the graph shows the communication links.

Coordination problems: mobile-agent formation

This is because the weight Wi of each agent can be interpreted as the multiplicity of the agent in the system. The problem of finding the weighted average can be reduced to the simple-average problem by replacing each agent with Wi copies of an agent with value Vi.
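The reduction can be illustrated with a small numeric sketch (not from the thesis): replicating each agent Wi times and taking the plain average of the replicated values gives exactly the weighted average, assuming integer weights.

```python
def weighted_average(values, weights):
    """Direct weighted average: sum(Wi * Vi) / sum(Wi)."""
    return sum(w * v for v, w in zip(values, weights)) / sum(weights)

def average_of_copies(values, weights):
    """Reduction: replace each agent with Wi copies of value Vi, then average."""
    copies = [v for v, w in zip(values, weights) for _ in range(w)]
    return sum(copies) / len(copies)

values, weights = [2.0, 5.0, 8.0], [1, 3, 2]
assert weighted_average(values, weights) == average_of_copies(values, weights)
print(weighted_average(values, weights))  # 5.5
```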

Figure 1.9: An instance of a non-equidistant configuration of the problem on the line. Here, agents are randomly placed between their left and right neighbors and they cannot pass each other.

Contributions of the thesis

Literature review

A group is a set of agents cooperating on a common task, where each agent in the group can communicate with and access the states of all other agents in the same group. At each step of the computation, some agents may decide to leave the group and others to join it. The state of a group is the set of states of the agents in the group.

Self-similar algorithms

For example, if the group is fully connected, every agent can receive the data of the other agents in the group in constant time; if instead the communication graph is a line, each extreme agent needs time proportional to the size of the group to retrieve the data of the other extreme. As the computation proceeds, the agents change their state according to an algorithm based on the data of the agents in the same group. The function on the left is not quasi-concave: if we take a pair of points x1, x2, the value of the function at a point z between them is smaller than the values of the function at x1 and x2.
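As a small illustration (not part of the thesis), the sketch below checks the quasi-concavity condition numerically on a grid: for every pair of sample points x1 < x2, the value at any intermediate point must be at least the minimum of the values at the endpoints; the sample functions are assumptions made for the example.

```python
import numpy as np

def is_quasi_concave(f, xs):
    """Check f(z) >= min(f(x1), f(x2)) for all sampled x1 <= z <= x2."""
    ys = np.array([f(x) for x in xs])
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            interior = ys[i + 1:j]
            if interior.size and interior.min() < min(ys[i], ys[j]) - 1e-12:
                return False
    return True

xs = np.linspace(-3.0, 3.0, 201)
print(is_quasi_concave(lambda x: -x * x, xs))           # True: concave, hence quasi-concave
print(is_quasi_concave(lambda x: np.sin(3.0 * x), xs))  # False: a dip lies between two peaks
```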

Examples of self-similar algorithms

In general, admissible functions need not be monotonic, their maximum need not be unique, and they need not be differentiable everywhere. The blue functions satisfy the constraints of Theorem 1, while the red functions violate some of them. In the next chapters, we examine the behavior of self-similar algorithms for these problems.

Figure 2.1: Examples of acceptable and non-acceptable utility functions. The two functions in blue are acceptable, while the red functions are not.

When the self-similar technique does not work

When a group meets, the agents compute the circumscribing circle of the circles of the agents in the group and update their own circle with the new one. In this case we can compute the convex hull of the set of points and derive the minimum circumscribing circle from it. The problem of computing the linear regression line of a set of points also falls into the class of problems where a naive application of the self-similar technique fails.
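To make the group update concrete, here is a minimal sketch (an illustration, not the thesis' implementation) in which each agent stores a circle and a group replaces every member's circle with one circle that encloses all of them; for simplicity the enclosing circle is centered at the mean of the centers, so it is generally not the minimum circumscribing circle.

```python
import math

def enclosing_circle(circles):
    """circles: list of (cx, cy, r). Return one circle containing all of them.
    Centered at the mean of the centers, hence generally not minimal."""
    cx = sum(c[0] for c in circles) / len(circles)
    cy = sum(c[1] for c in circles) / len(circles)
    r = max(math.hypot(c[0] - cx, c[1] - cy) + c[2] for c in circles)
    return (cx, cy, r)

def group_update(states, group):
    """Every agent in the group adopts the circle enclosing the group's circles."""
    new_circle = enclosing_circle([states[i] for i in group])
    for i in group:
        states[i] = new_circle

states = {0: (0.0, 0.0, 1.0), 1: (4.0, 0.0, 1.0), 2: (2.0, 3.0, 0.5)}
group_update(states, [0, 1])
print(states[0])  # agents 0 and 1 now hold the same enclosing circle
```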

Figure 2.3: A sequence of snapshots of the state of the system for the problem of finding the second smallest value

Model of failure in group operations

Propagation delay in group operations

Convergence and termination detection

In [7], the authors prove the correctness of self-similar algorithms for the class of max-min problems with quasi-concave utility functions. Assuming a continuous state space, self-similar algorithms converge to the solution but never terminate, because they can always make arbitrarily small changes to the state. Therefore, when the algorithm is stopped, the state of the system is arbitrarily close to the solution.

Implementation issues

In the equation, G is the communication graph of the system and the sorting function returns the state vector in non-decreasing order; the total ordering used for the variant function is the lexicographic ordering. Following the same approach as in [7], we can discretize the state space, allowing the values to change only by a fixed step; the algorithm then terminates when no change can improve the current state.
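As an illustration of the variant function (a sketch under the assumption, suggested by the paragraph above, that the variant is the sorted state vector compared lexicographically), the following shows how the progress of a group operation can be checked:

```python
def variant(state_values):
    """Variant function: the state vector sorted in non-decreasing order."""
    return tuple(sorted(state_values))

# Python tuples compare lexicographically, so the progress of a group operation
# can be checked by comparing the variant before and after it.
before = [3.0, 1.0, 4.0, 1.0]
after = [3.0, 2.0, 4.0, 2.0]              # hypothetical result of one group operation
assert variant(after) > variant(before)   # lexicographic improvement
print(variant(before), "->", variant(after))
```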

Performance analysis

This chapter presents a performance analysis of the naive self-similar algorithm that finds the mean of a set of values. When a group gathers, each agent computes the average of its own value and the values received from all the other agents, and then updates its value with the newly computed average. Formally, each agent computes the sum S of the received values and the group utility function g (given by the line through the origin with slope 1/k, where k is the group size), and updates its value with the one given by f^-1(g(S)).
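A minimal sketch of this group operation (an illustration only; f is taken to be the identity, which matches the plain averaging described above, and the graph and group-formation machinery of the thesis is omitted):

```python
import random

def group_average_step(values, group):
    """One group operation: every member adopts the group's average.
    Here f is the identity and g(S) = S / k, so f^-1(g(S)) is just the mean."""
    s = sum(values[i] for i in group)
    avg = s / len(group)
    for i in group:
        values[i] = avg

values = [random.uniform(0.0, 10.0) for _ in range(50)]
true_mean = sum(values) / len(values)
for _ in range(2000):                            # repeated random group meetings
    group = random.sample(range(len(values)), 5)
    group_average_step(values, group)
print(max(abs(v - true_mean) for v in values))   # should be close to zero
```

Averaging within a group preserves the global sum of the values, which is why the repeated group operations drive every agent toward the true mean.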

Impact of group size on performance

In the first experiment, we investigate the relationship between the number of group operations (needed to converge to the true mean) and the group size for a fully connected network. The group size is on the x-axis, while the time to convergence, on a logarithmic scale, is on the y-axis. This trend changes dramatically when each operation takes an amount of time proportional to the square of the group size (O(k^2), where k is the group size).

Figure 3.1: Number of group operations to reach the steady-state vs. group size for the problem of finding the average of a set of values

Impact of abort probabilities on performance

In this experiment, when δ is very large (equal to 0.5), the number of group operations increases dramatically with the group size; therefore, we should prefer small group sizes, because small groups complete and commit their operations more often than large ones. When δ is in the interval between 0.05 and 0.1, the number of operations is a parabolic function of the group size: it first decreases and then increases, reaching its minimum at a group size of 10 when δ = 0.05 and of 25 when δ = 0.1. However, as δ increases to 0.5, the algorithm converges only when the group size is less than 20.
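A sketch of how an abort probability can be injected into the simulated group operation (illustrative only; the failure model of Equation 2.2 is not reproduced here and is replaced by a flat per-operation abort probability δ):

```python
import random

def group_average_step_with_abort(values, group, delta):
    """Attempt one group operation; with probability delta the operation aborts
    and no agent commits the new value. Returns True if the operation committed."""
    if random.random() < delta:
        return False                       # aborted: the state is left unchanged
    avg = sum(values[i] for i in group) / len(group)
    for i in group:
        values[i] = avg
    return True

values = [random.uniform(0.0, 10.0) for _ in range(50)]
committed = sum(
    group_average_step_with_abort(values, random.sample(range(50), 10), delta=0.1)
    for _ in range(1000)
)
print(committed, "of 1000 attempted group operations committed")
```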

Figure 3.3: Time complexity of the self-similar algorithm versus group size for the problem of finding the average of a set of values assuming group operations to be proportional to the square of the group size

Impact of group locality on performance

In Figure 3.7, we present the results for the case in which the time complexity of a group operation is proportional to k^2. In Figure 3.6 we show the behavior of the simulation when the time complexity of a group … For the random heuristic, the time complexity decreases as the pool size increases.

Figure 3.7: Number of group operations vs. group size for the average problem when group operations can fail according to Equation 2.2 and the time complexity of a group operation is proportional to the square of the group size

Comparison with an algorithm for a synchronous time-stepped network

A synchronous algorithm for the average problem

The constant γ describes the rate at which each agent updates its own estimate of the mean based on the information of its neighbors. It must be less than or equal to 1/d, where d is the degree of the communication graph. The update rule can be expressed as follows: the new value of an agent is a linear combination of the agent's old value and the average of the values of its neighboring agents.
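A minimal sketch of such a synchronous update (an illustration, not necessarily the thesis' exact rule: each agent moves toward its neighbors by x_i <- x_i + γ·Σ_j(x_j − x_i), which is a convex combination of the old value and the neighbors' average as long as γ ≤ 1/deg(i); the ring topology is an assumption made for the example):

```python
import random

def synchronous_step(values, neighbors, gamma):
    """One synchronous time step: every agent moves toward its neighbors' values."""
    return [
        x + gamma * sum(values[j] - x for j in neighbors[i])
        for i, x in enumerate(values)
    ]

# Ring of 6 agents (degree 2), so gamma must satisfy gamma <= 1/2.
neighbors = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
values = [random.uniform(0.0, 10.0) for _ in range(6)]
mean = sum(values) / len(values)
for _ in range(200):
    values = synchronous_step(values, neighbors, gamma=0.4)
print(max(abs(v - mean) for v in values))  # close to zero: consensus on the mean
```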

Some comparisons

The task of the agents in the system is to build a wall with the available resources. On the left side, the bars represent the initial amounts of the agents' resources. The state of an unconstrained agent consists of the variable Vi, which contains the amount of resources available to that agent.

Figure 3.11: Error of the synchronous algorithm as a function of simulation steps for several values of γ

A self-similar algorithm for the wall problem

The initial distribution of resources is shown in the left part of the figure, while the final configuration is shown in the right part. If this value is positive, the constrained agents have some unused resources that should be redistributed among the unconstrained agents. If it is negative, the constrained agents need more resources and must take some from the unconstrained ones in order to reach the quantity V.
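A minimal sketch of this redistribution inside one group (illustrative assumptions, not necessarily the thesis' exact update: constrained agents are set to the height V and any surplus or deficit is spread evenly over the unconstrained members of the group, assuming the group holds at least V per constrained agent):

```python
def wall_group_step(heights, constrained, group, V):
    """One group operation for the wall problem: constrained agents are set to V,
    and the remaining resources of the group are split evenly among the
    unconstrained agents, so the total amount of resources is conserved."""
    total = sum(heights[i] for i in group)
    capped = [i for i in group if i in constrained]
    free = [i for i in group if i not in constrained]
    for i in capped:
        heights[i] = V
    if free:
        share = (total - V * len(capped)) / len(free)
        for i in free:
            heights[i] = share

heights = {0: 8.0, 1: 2.0, 2: 9.0, 3: 1.0}
wall_group_step(heights, constrained={1}, group=[0, 1, 2, 3], V=3.0)
print(heights)   # agent 1 holds exactly V; the others share the remaining 17.0
```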

Investigating the performance of the algorithm

The self-similar algorithm converges to a steady state, which is a flat wall (when the average amount of resources is less than V) or the asymmetric wall structure explained in the previous section (when more resources are available). This self-similar algorithm is easily generalized to deal with any possible wall structure.

Impact of amount of total resources

Impact of maximum wall height

The x-axis represents the group size (ranging between 2 and 20) and the y-axis represents the number of group meetings needed to reach consensus, on a logarithmic scale. The curves in the graph refer to the time complexity of the algorithm for values of V in the range between 10 and 60. As intuition suggests, the algorithm based on a consensus formulation outperforms the same algorithm based on a non-consensus formulation.

Impact of constrained agents

In one case, we can see from the figure that the time complexity is always the same regardless of the composition of the network; this is because we are solving the same optimization problem regardless of the specific number of constrained agents. In the other case, as intuition suggests, the time complexity of the algorithm increases as more constrained agents are added to the network, because we add more constraints to the optimization problem.

Figure 4.4: Time complexity of the self-similar algorithm for the wall problem versus group size varying the maximum allowed height for constrained agents

Variation of the problem

When constrained agents are interspersed among the unconstrained ones, both constrained and unconstrained agents are present in every group that is formed. This problem does not have a unique solution: any wall of height at least V that satisfies the conservation law and the constrained-agent constraints is a feasible solution. The utility functions of the unconstrained and constrained agents are the identity function and the function in Equation 4.1, respectively.

A self-similar algorithm for the problem variation

This means that while constrained agents will build a wall section of height V, unconstrained ones will build a wall section of arbitrary height greater than V. When the average resource quantity C* ≤ V, the problem can be formulated as a consensus problem, where all agents agree on the quantity C*. Each agent maintains the variable Vi, which indicates the current amount of resources available at its position, and the value V.

Performance of the algorithm

When V < 50, the instance yields a non-consensus solution, and the time complexity of the algorithm increases with V. When the problem is a non-consensus instance (C = 2500), as intuition suggests, the time complexity of the algorithm increases if we add more constrained agents. Finally, we consider how the position of the constrained agents affects the time complexity of the algorithm.

Figure 4.8: Time complexity of the self-similar algorithm for the variation of the wall problem versus the amount of resources available

Modeling the problem using directed graphs

Synchronous time-stepped algorithm

Unlike a distance-based protocol, communication between agents is not necessarily bidirectional. An agent's state consists of its id, its current position, stored in the variable x, and the list of agents that are allowed to send it messages.
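A minimal data-structure sketch of such an agent state (the field names are illustrative, not the thesis' notation):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AgentState:
    agent_id: int
    x: float                                           # current position on the line
    senders: List[int] = field(default_factory=list)   # ids allowed to send to this agent

# Agent 2 listens to agents 1 and 3; the links need not be bidirectional.
a2 = AgentState(agent_id=2, x=0.37, senders=[1, 3])
print(a2)
```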

Proof of convergence using distributed system techniques

At each iteration, all agents simultaneously collect the values of their adjacent agents and update their state. Proposition 1: If the positions of the agents are updated according to Equation 5.2, then h(t) ≥ 1/N, with equality if and only if xi = i/N for all i. We can apply the same steps to δN(t+1) to obtain an equivalent expression in terms of the xi(t).
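A sketch of a synchronous equidistance update of this kind (an assumption made for illustration: Equation 5.2 is taken here to be the midpoint rule, where each mobile agent moves to the midpoint of its two neighbors while the two extreme agents stay fixed; this is not necessarily the exact protocol of the thesis):

```python
def equidistance_step(x):
    """Synchronous step: interior agents move to the midpoint of their neighbors,
    while the two stationary agents at the extremes keep their positions."""
    return [x[0]] + [(x[i - 1] + x[i + 1]) / 2.0 for i in range(1, len(x) - 1)] + [x[-1]]

x = [0.0, 0.1, 0.2, 0.7, 1.0]         # positions on the unit segment, in order
for _ in range(500):
    x = equidistance_step(x)
print([round(v, 3) for v in x])        # approaches [0.0, 0.25, 0.5, 0.75, 1.0]
```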

Remark about the continuous-time case

We focus on the eigenvalue λ0 = 0 because a linear combination of its eigenvectors is the equilibrium point (i.e. the steady state) of the system. Theorem 5: Given the system described by the graph G, the protocol in Equation 5.2 solves the equidistance problem globally asymptotically. Given the constraints on the stationary agents' positions, v is the only possible candidate.

Generalized equidistance problem on a square

By definition, the multiplicity of λ0 is equal to 2; this is because it is given by the difference between the number of rows of L and the rank of L, which is the number of nonzero rows of L. Using Theorem 4, we obtain that all eigenvalues of −L have negative real parts, except for the eigenvalue λ0 = 0. In the two-dimensional case, the adjacency matrix A has four zero rows, corresponding to the stationary agents.
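As a small numerical illustration (not from the thesis; the directed line graph below, in which the two extreme agents listen to nobody, is an assumed example), one can check that −L has zero eigenvalues coming from the zero rows and that all the other eigenvalues have negative real parts:

```python
import numpy as np

# Directed-graph Laplacian for 5 agents on a line: agents 0 and 4 are stationary
# (they listen to nobody, hence zero rows); interior agents listen to both neighbors.
L = np.zeros((5, 5))
for i in (1, 2, 3):
    L[i, i] = 2.0
    L[i, i - 1] = -1.0
    L[i, i + 1] = -1.0

eig = np.linalg.eigvals(-L)
print(np.round(np.sort_complex(eig), 3))
# two zero eigenvalues (one per zero row); all the others have negative real parts
```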

A synchronous algorithm for the bi-dimensional case

We would like to investigate mathematical modeling tools that allow expanding the graph model based on the information obtained.


Figures

Figure 1.1: An example of distributed system architecture where agents are scattered on a 3-dimensional area
Figure 1.2: An example of distributed system architecture where agents are distributed across a field
Figure 1.3: The MICA sensor node (top left) with the Mica Weather Board developed for environmental monitoring applications
Figure 1.4: The RoboFlag game. Two teams of robots, red and blue, must defend their flag while attempting to capture the other team's flag.
