A Unifying Survey of Reinforced, Sensitive and Stigmergic Agent-Based Approaches for E-GTSP

08/24/2012, by Camelia-M. Pintea, et al.

The Generalized Traveling Salesman Problem (GTSP) is one of the NP-hard combinatorial optimization problems. A variant of GTSP is E-GTSP, where E, meaning equality, reflects the constraint that exactly one node from each cluster of a graph partition is visited. The main objective of the E-GTSP is to find a minimum cost tour passing through exactly one node from each cluster of an undirected graph. Agent-based approaches are successfully used nowadays for solving complex real-life problems. The aim of the current paper is to illustrate some variants of agent-based algorithms, including ant-based models with specific properties, for solving E-GTSP.




1 Introduction

A large number of combinatorial optimization problems are NP-hard. Nowadays, approximation and heuristic algorithms are widely used in order to find near-optimal solutions of difficult problems within reasonable running time. Heuristics are among the best strategies in terms of efficiency and solution quality for complex problems.

The Generalized Traveling Salesman Problem (GTSP), introduced in Laporte and Nobert (1983) and Noon and Bean (1991), is also a complex and difficult problem. A variant of GTSP, E-GTSP, where E means "equality", is generally named just GTSP in the current paper. In E-GTSP exactly one node from each cluster is visited.

Several approaches were considered for solving the GTSP. In Fischetti et al. (2002a) a branch-and-cut algorithm for the Symmetric GTSP is described and analysed. One of the most recent papers in this area is Cacchiani et al. (2010); it proposes a multistart heuristic which iteratively starts with a randomly chosen set of vertices and applies a decomposition approach combined with improvement procedures.

A random-key genetic algorithm (rkGA) for the GTSP is described in Snyder and Daskin (2006). The rkGA combines a genetic algorithm with a local tour improvement heuristic, with the solutions encoded using random keys Snyder and Daskin (2006). Another genetic algorithm approach for solving the GTSP is described in Silberholz and B.L.Golden (2007). The state-of-the-art GTSP memetic algorithm, proposed in Gutin and Karapetyan (2010), exploits a strong local search procedure together with a well-tuned genetic framework. In Renaud and Boctor (1998) an efficient composite heuristic for the Symmetric GTSP is proposed. The heuristic has three phases: first, an initial partial solution is constructed; next, a node from each non-visited node-subset is inserted; the third phase is solution improvement Renaud and Boctor (1998).

There are significant achievements in the area of local search algorithms for the GTSP. Karapetyan and Gutin (2012) provides an exhaustive survey of GTSP local search neighbourhoods and proposes efficient exploration algorithms for each of them. Another effective GTSP local search procedure Karapetyan and Gutin (2011) is an adaptation of the well-known Lin-Kernighan heuristic. A hybrid ACS approach, using an effective combination of two local search heuristics of different classes, is introduced in Reihaneh and Karapetyan (2012).

GTSP has several applications to location and telecommunication problems. More information on these problems and their applications can be found in Fischetti et al. (1997, 2002a); Laporte and Nobert (1983). Other applications are in routing problems Pintea et al. (2011); P.C.Pop et al. (2009). Hybrid heuristics are valuable instruments for solving large-sized problems. That is why several heuristics, including variants of ant-based algorithms, are improved using different techniques. Some features of agents are involved, such as: the level of sensitivity, direct communication, the capability to learn, and stigmergy.

Based on one of the best Ant Colony Optimization techniques, Ant Colony System (ACS) Dorigo and Gambardella (1996), an ACS for solving the GTSP was first introduced in Pintea et al. (2006). Using some MAX-MIN Ant System Stützle and Hoos (1997) features and some new updating rules, a reinforced ACS algorithm for the GTSP was introduced in Pintea et al. (2006). Computational results are reported for several test problems; the proposed algorithm was competitive with the already proposed heuristics for the GTSP. Several new heuristics involving agent properties were also introduced: Sensitive Ant Colony System (SACS), Sensitive Robot Metaheuristic (SRM) and Sensitive Stigmergic Agent System (SSAS). Two types of sensitive heuristics are used for solving the GTSP.

The Sensitive ACS (SACS) Chira et al. (2007a) heuristic uses the sensitive reactions of ants to pheromone trails. Each agent is endowed with a certain level of sensitivity, allowing different types of responses to pheromone trails. The model balances search exploitation and search exploration in order to solve complex problems. The numerical experiments illustrated in Chira et al. (2007a) show the potential of the SACS model.

Sensitive Robot Metaheuristic (SRM) Pintea et al. (2008) uses virtual autonomous robots in order to improve the solutions of SACS. In SSAS Chira et al. (2007b) the agents adopt a stigmergic behaviour in order to identify problem solutions, and they are able to share information about dynamic environments, improving the quality of the search process. Using an Agent Communication Language (ACL) Wooldridge and Dunne (2005); Russell and Norvig (2003), the agents communicate by exchanging messages. The information obtained directly from other agents is important in the search process.

The paper is organized as follows. Section 2 provides a description and a mathematical model of the Generalized Traveling Salesman Problem. Section 3 illustrates the proposed agent-based models. Comparative numerical results and statistical analysis of the agent-based techniques for solving the GTSP are given in Section 4. The paper concludes with further research directions.

2 The GTSP description

The current section includes a description of the Generalized Traveling Salesman Problem including a mathematical model and its complexity.

2.1 A mathematical model of GTSP

The mathematical model of GTSP follows. Consider the complete undirected graph G = (V, E) with n nodes. The edges of the graph are associated with non-negative costs; the cost of an edge e = (i, j) is denoted by c(i, j).

The generalization of TSP implies an existing partition of the node set V. The subsets of V are called clusters. Let V₁, V₂, …, V_nc be a partition of V into nc clusters, i.e. V = V₁ ∪ V₂ ∪ … ∪ V_nc and Vₖ ∩ Vₗ = ∅ for all k ≠ l. A tour spans a subset S ⊆ V of nodes such that the subset contains exactly one node from each cluster of the graph partition.

Definition 1: The objective of the Generalized Traveling Salesman Problem is to find a minimum-cost tour.

In other words, GTSP has to find a minimum-cost tour over a subset S ⊆ V with exactly one node from each cluster Vₖ, k = 1, …, nc. GTSP involves the following decisions.

  • Choose a node subset S ⊆ V, such that |S ∩ Vₖ| = 1, for all k = 1, …, nc.

  • Find a minimum cost Hamiltonian cycle in the subgraph of G induced by S.
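For a tiny instance these two decisions can be checked exhaustively. The sketch below is illustrative (the names and the brute-force approach are not part of the surveyed algorithms): it enumerates one node per cluster and every visiting order of the chosen nodes.

```python
import itertools

def tour_cost(cost, tour):
    """Cost of the closed cycle visiting the given nodes in order."""
    return sum(cost[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def brute_force_egtsp(cost, clusters):
    """Exact E-GTSP on tiny instances: pick exactly one node per cluster,
    then try every visiting order of the chosen nodes."""
    best_tour, best_cost = None, float("inf")
    for choice in itertools.product(*clusters):          # one node per cluster
        for perm in itertools.permutations(choice[1:]):  # fix the start node: skip rotations
            tour = (choice[0],) + perm
            c = tour_cost(cost, tour)
            if c < best_cost:
                best_tour, best_cost = tour, c
    return best_tour, best_cost
```

This is exponential in both the number of clusters and the cluster sizes, which is exactly why the heuristics surveyed below are needed.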

Definition 2: The GTSP is called symmetric if and only if the equality c(i, j) = c(j, i) holds for every i, j ∈ V, where c is the cost function associated to the edges of G.

An exact algorithm for the GTSP has exponential time complexity in the worst case Pop (2007). An accurate discussion about time complexity for the Generalized Traveling Salesman Problem is given in Karapetyan and Gutin (2012).

3 Agent-based approaches for solving GTSP

The following subsections will describe in detail the reinforced, sensitive, multi-agent hybrid sensitive and stigmergic agent-based approaches for solving GTSP.

Figure 1 illustrates the successive development of the agent-based models and Figure 2 shows a particular example for the E-GTSP.

Figure 1: The successive development of the reinforced, sensitive and stigmergic agent-based models, starting with Ant Colony System (ACS), reinforced with an inner-update rule in Reinforcing Ant Colony System (RACS), involving the sensitivity property in Sensitive Ant Colony System (SACS), autonomous stigmergic robots in Sensitive Robot Metaheuristic (SRM), and a Multi-agent System (MAS) with stigmergy in Sensitive Stigmergic Agent System (SSAS)


Figure 2: A particular example of finding a minimum-cost tour spanning a subset of nodes such that the subset contains exactly one node from each cluster of the graph partition for the Equality Generalized Traveling Salesman Problem E-GTSP.

3.1 Ant Colony System for GTSP

The first Ant Colony Optimization heuristic was Ant System (AS). The algorithm was proposed in Colorni et al. (1991); Dorigo (1992). It is a multi-agent approach used for various combinatorial optimization problems.

The algorithm, as the entire ACO framework, was inspired by the observation of real ant colonies.

In AS artificial ants can find shortest paths between food sources and a nest. While walking from food sources to the nest and vice versa, the ants deposit on the ground a substance called pheromone; in this way a pheromone trail is created. Real ants smell the pheromone when choosing their paths and prefer the trails with the largest amount of pheromone. This feature, employed by a colony of ants, can lead to the emergence of shortest paths; after a while the entire ant colony uses the shortest path.

In Ant System, artificial agents called artificial ants iteratively construct candidate solutions to an optimization problem. The solution construction is guided by pheromone trails and by problem-specific information. Ant Colony System (ACS) was developed to improve Ant System, making it more efficient and robust. Ant Colony System for the GTSP Pintea et al. (2006) works as follows.

  • All m ants are initially positioned on nodes chosen according to some initialization rule, for example randomly. Each ant builds an initial tour by applying a greedy rule (see Algorithm 1.1.).

  • When the ant is in node i, the next node j, from an unvisited cluster, is chosen depending on a random variable q: the node with the maximal argument from Eq. 2 is chosen, or the node is selected using the probability from Eq. 1. While constructing its tour, the ant also modifies the amount of pheromone on the visited edges by applying the local updating rule (Eq. 4) (see Algorithm 1.2.).

  • After each step the length of the locally best tour is computed (see Algorithm 1.3.).

  • Once all ants have finished their tours, the amount of pheromone on the edges is modified again by applying the global updating rule (Eq. 5), the Ant System updating rule, knowing that an edge with a high amount of pheromone is a very desirable choice. The global updating rule follows in Algorithm 1.4.

  • The solution of the problem is the shortest tour found after a given number of iterations.

The mentioned equations are detailed in Section 3.2. The sub-algorithms (Algorithm 1.1.–1.4.) and the Ant Colony System algorithm for the GTSP follow.

Algorithm 1.1. Initialization of GTSP
1: for all edges (i,j) do
2:    τ(i,j) = τ₀
3: end for
4: for k=1 to m do
5:    place ant k on a randomly chosen node
6:    from a randomly chosen cluster
7: end for
8: build an initial tour using a Greedy algorithm
Algorithm 1.2. Construction of a tour for GTSP
1: for k=1 to m do
2:    build tour Tₖ by applying nc−1 times
3:       if (q > q₀) then
4:          node j is chosen with probability p (Eq. 1)
5:       else
6:          from an unvisited cluster choose node j (Eq. 2)
7:          where i is the current node
8:       end if
9:       apply the local update rule (Eq. 4)
10: end for
Algorithm 1.3. Compute a solution for GTSP
1: for k=1 to m do
2:    compute the length Lₖ of the tour Tₖ
3: end for
4: if an improved tour is found then
5:    update T⁺ and L⁺
6: end if
Algorithm 1.4. Global update rule for GTSP
1: for all edges (i,j) do
2:    update pheromone trails (Eq. 5)
3: end for

Algorithm 1. Ant Colony System for GTSP
1: Initialization of GTSP
2: T⁺ is the shortest tour found and L⁺ its length
3: repeat
4:    Construction of a tour for GTSP
5:    Compute a solution for GTSP
6:    Global update rule for GTSP
7: until end condition
8: return T⁺ and its length L⁺
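The loop above can be condensed into a small, self-contained sketch. The parameter values (β, ρ, q₀) and the initial trail τ₀ below are illustrative choices, not the settings of the surveyed papers; exploitation (Eq. 2) versus biased exploration (Eq. 1) is controlled by q₀ as described above.

```python
import math, random

def acs_gtsp(coords, clusters, m_ants=10, beta=2.0, rho=0.1, q0=0.9, iters=200, seed=7):
    """Illustrative sketch of Algorithm 1 on Euclidean coordinates."""
    rng = random.Random(seed)
    n = len(coords)
    cluster_of = {v: k for k, cl in enumerate(clusters) for v in cl}
    c = [[math.dist(coords[i], coords[j]) or 1e-9 for j in range(n)] for i in range(n)]
    tau0 = 1.0 / (n * sum(c[i][(i + 1) % n] for i in range(n)))  # rough initial trail
    tau = [[tau0] * n for _ in range(n)]

    def length(t):
        return sum(c[t[k]][t[(k + 1) % len(t)]] for k in range(len(t)))

    def build_tour():
        start = rng.choice(rng.choice(clusters))
        tour, visited = [start], {cluster_of[start]}
        while len(visited) < len(clusters):
            i = tour[-1]
            cand = [v for v in range(n) if cluster_of[v] not in visited]
            def score(u):
                return tau[i][u] * (1.0 / c[i][u]) ** beta
            if rng.random() <= q0:                        # exploitation (Eq. 2)
                j = max(cand, key=score)
            else:                                         # biased exploration (Eq. 1)
                j = rng.choices(cand, weights=[score(u) for u in cand])[0]
            tau[i][j] = tau[j][i] = (1 - rho) * tau[i][j] + rho * tau0  # local rule (Eq. 4)
            tour.append(j)
            visited.add(cluster_of[j])
        return tour

    best = min((build_tour() for _ in range(m_ants)), key=length)
    best_len = length(best)
    for _ in range(iters):
        for _ in range(m_ants):
            t = build_tour()
            if length(t) < best_len:
                best, best_len = t, length(t)
        for k in range(len(best)):                        # global rule (Eq. 5), best tour only
            i, j = best[k], best[(k + 1) % len(best)]
            tau[i][j] = tau[j][i] = (1 - rho) * tau[i][j] + rho / best_len
    return best, best_len
```

The returned tour always contains exactly one node per cluster by construction, which is the E-GTSP feasibility constraint.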

3.2 Reinforcing Ant Colony System for GTSP

An Ant Colony System for the GTSP is introduced and detailed in Pintea et al. (2006); Pintea (2010). In order to reinforce the construction of valid solutions used in ACS, a new algorithm called Reinforcing Ant Colony System (RACS) was elaborated, with a new pheromone rule as in Pintea and Dumitrescu (2005) and a pheromone evaporation technique as in Stützle and Hoos (1997).

Based on the mathematical model of the GTSP from Section 2, let i be a node from the cluster Vₖ. The RACS algorithm for the GTSP works as follows:

  • Initially the ants are placed in the nodes of the graph, choosing randomly a cluster and also a random node from the chosen cluster.

  • At each iteration every ant moves to a new node from an unvisited cluster and the parameters controlling the algorithm are updated.

  • Each edge is labelled by a trail intensity; τ(i,j,t) is the trail intensity of the edge (i,j) at time t.

    An ant decides which node is its next move with a probability based on the distance to that node (i.e. the cost of the edge) and the amount of trail intensity on the connecting edge. The inverse of the distance from a node to the next node is known as the visibility, η(i,j) = 1/c(i,j).

  • At each time unit evaporation takes place in order to limit the unbounded growth of pheromone intensity on the trails. The evaporation rate is denoted by ρ.

    A tabu list is maintained with the purpose of forbidding ants to visit the same cluster twice in the same tour. The ant tabu list is cleared after each completed tour.

  • In order to favour the selection of an edge that has a high pheromone value, τ(i,u), and a high visibility value, η(i,u), a probability function is considered. J_i^k denotes the set of unvisited neighbours of node i for ant k, and u ∈ J_i^k, u being a node from an unvisited cluster.

    The probability function is defined as follows:

    p_iu^k(t) = [τ(i,u,t)·η(i,u)^β] / Σ_{o∈J_i^k} [τ(i,o,t)·η(i,o)^β]   (Eq. 1)

    where β is a parameter used for tuning the relative importance of edge cost in selecting the next node.

    p_iu^k is the probability of choosing j = u as the next node if q > q₀ (see Eq. 2), when the current node is i.

    If q ≤ q₀ the next node j is chosen as follows:

    j = argmax_{u∈J_i^k} { τ(i,u,t)·η(i,u)^β }   (Eq. 2)

    where q is a random variable uniformly distributed over [0, 1] and q₀ is a parameter similar to the temperature in simulated annealing, 0 ≤ q₀ ≤ 1.

  • The ants guide the local search by constructing promising solutions based on good locally optimal solutions. After each transition the trail intensity is updated using the inner correction rule from Pintea and Dumitrescu (2005) (see Algorithm 2.1.):

    τ(i,j,t+1) = (1−ρ)·τ(i,j,t) + ρ·(n·L⁺)⁻¹   (Eq. 3)

    where L⁺ is the cost of the current known best tour. In ACS Dorigo and Gambardella (1996) for GTSP the local rule is:

    τ(i,j,t+1) = (1−ρ)·τ(i,j,t) + ρ·τ₀   (Eq. 4)

  • As in Ant Colony System, only the ant that generated the best tour is allowed to globally update the pheromone. The global update rule is applied to the edges belonging to the best tour. The correction rule is:

    τ(i,j,t+1) = (1−ρ)·τ(i,j,t) + ρ·Δτ(i,j,t)   (Eq. 5)

    where Δτ(i,j,t) = 1/L⁺ is the inverse cost of the best tour if the edge (i,j) belongs to the best tour, and Δτ(i,j,t) = 0 otherwise.

  • In order to avoid stagnation, the pheromone evaporation technique introduced in MAX-MIN Ant System Stützle and Hoos (1997) is used when the trail exceeds the value τmax, as in Eq. 6:

    if τ(i,j,t) > τmax then τ(i,j,t) = τ₀   (Eq. 6)

    When the pheromone trail is over the upper bound τmax, the pheromone trail is re-initialized. The pheromone evaporation is applied after the global pheromone update rule (see Algorithm 2.2.).

For a given running time, the RACS algorithm (see Algorithm 2) computes a sub-optimal solution, or the optimal solution when possible, and can be stated as follows. Algorithm 1.2 and Algorithm 1.4 from Section 3.1 are modified as described further in Algorithm 2.1 and Algorithm 2.2.
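The two rules that distinguish RACS, the inner correction rule (Eq. 3) and the τmax re-initialization (Eq. 6), can be sketched as follows; the function names are illustrative and the form of Eq. 3 follows the reconstruction above.

```python
def racs_local_update(tau, i, j, rho, n, best_len):
    """Inner correction rule (Eq. 3): pull the trail toward 1/(n * L+),
    where L+ is the cost of the current known best tour."""
    tau[i][j] = tau[j][i] = (1 - rho) * tau[i][j] + rho / (n * best_len)

def racs_evaporation(tau, tau_max, tau0):
    """Re-initialize every trail exceeding the upper bound tau_max (Eq. 6)."""
    for row in tau:
        for j, t in enumerate(row):
            if t > tau_max:
                row[j] = tau0
```

Re-initializing over-saturated trails keeps the colony from converging prematurely onto one tour, which is the stagnation-avoidance role described above.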

Algorithm 2.1. Reinforced construction of tours for GTSP
1: for k=1 to m do
2:    build tour Tₖ by applying nc−1 times
3:       if (q > q₀) then
4:          node j is chosen with probability p (Eq. 1)
5:       else
6:          from an unvisited cluster choose node j (Eq. 2)
7:          where i is the current node
8:       end if
9:       apply the new local update rule (Eq. 3)
10: end for
Algorithm 2.2. Reinforced global update rule for GTSP
1: for all edges (i,j) do
2:    update pheromone trails (Eq. 5, Eq. 6)
3: end for
Algorithm 2. Reinforcing Ant Colony System for GTSP
1: Initialization of GTSP
2: T⁺ is the shortest tour found and L⁺ its length
3: repeat
4:    Reinforced construction of a tour for GTSP
5:    Compute a solution for GTSP
6:    Reinforced global update rule for GTSP
7: until end condition
8: return T⁺ and its length L⁺

3.3 Sensitive Ant Colony System for GTSP

The Sensitive Ant Colony System (SACS) for GTSP is based on the Heterogeneous Sensitive Ant Model for Combinatorial Optimization introduced in Chira et al. (2008). SACS was introduced in Chira et al. (2007a).

In sensitive ant-based models a set of heterogeneous agents (sensitive ants) is used; the agents are able to communicate in a stigmergic manner and take individual decisions based on changes of the environment and on pheromone sensitivity levels specific to each agent. The sensitivity variable induces various types of reactions to a changing environment.

A good balance between search diversification and exploitation can be achieved by combining stigmergic communication with heterogeneous agent behaviour.

Each agent is characterized by a pheromone sensitivity level (PSL), expressed by a real number from the unit interval [0, 1]. The transition probabilities from the ACS model Dorigo and Gambardella (1996) are changed using the PSL values in a re-normalization process: the ACS transition probability is reduced proportionally with the PSL value of each agent in the sensitive ant-based approach Chira et al. (2008).
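One plausible reading of this re-normalization is a PSL-weighted mixture of the ACS transition distribution with a uniform one; the sketch below is an illustration, not the exact rule of Chira et al. (2008).

```python
def sensitive_probabilities(p, psl):
    """Mix the ACS transition distribution p with a uniform one according to
    the agent's pheromone sensitivity level (PSL = 1 follows the trails
    exactly, PSL = 0 ignores them). Illustrative reading of the
    re-normalization, not the published rule."""
    u = 1.0 / len(p)
    return [psl * x + (1.0 - psl) * u for x in p]
```

The mixture stays a probability distribution for any PSL in [0, 1], which keeps the sensitive transition rule well defined.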

Extreme situations of the PSL values are:

  • When an ant is 'pheromone blind', meaning PSL = 0, the ant completely ignores the stigmergic information;

  • When an ant has maximum pheromone sensitivity, meaning PSL = 1, the ant's moves are entirely guided by the pheromone trails.

Low PSL values indicate that a sensitive ant will choose a pheromone-marked move only for very high pheromone levels. These ants are more independent, can be considered environment explorers and have the potential to discover new promising regions in an autonomous way. The ants with high PSL values are able to intensively exploit the promising search regions already identified. The PSL value can increase or decrease according to the search space encoded in the ant's experience.

In the SACS model for solving the GTSP two ant colonies are involved. Each ant is endowed with a pheromone sensitivity level: in the first colony the ants have small PSL values (sPSL) and in the second colony the ants have high PSL values (hPSL).

The sPSL ants autonomously discover new promising regions of the solution space to sustain search diversification. The sensitive-exploiter hPSL ants normally choose any pheromone-marked move. SACS for solving the GTSP works as follows.

  • As in ACS and RACS, initially the ants are placed randomly in the nodes of the graph.

  • At each iteration every sPSL-ant moves to a new node and the parameters controlling the algorithm are updated.

    When an ant decides which node from a cluster is its next move, it does so with a probability based on the distance to that node and the amount of trail intensity on the connecting edge. At each time unit evaporation takes place. In order to stop ants visiting the same cluster twice in the same tour, a tabu list is maintained.

    What differs from the ACS and RACS models is the sensitivity feature. The sensitivity level is denoted by PSL and its value is randomly generated in the unit interval: for sPSL ants the sensitivity parameter lies in the lower part of the interval, while for hPSL ants it lies in the upper part.

  • The trail intensity is updated Chira et al. (2007a) using the local rule:

    τ(i,j,t+1) = (1−ρ)·τ(i,j,t) + ρ·(n·L⁺)⁻¹   (Eq. 7)

    where n is the total number of nodes and L⁺ the cost of the current best tour.

  • The already mentioned steps are reconsidered by the hPSL-ants, using the information of the sPSL ants. For the hPSL ants the PSL values are randomly chosen in the upper part of the unit interval.

  • Only the ant generating the best tour is allowed to globally update the pheromone. The global update rule is applied to the edges belonging to the best tour. The correction rule is Eq. 5.

A run of the algorithm returns the shortest tour found. The description of the SACS algorithm for GTSP is shown in Algorithm 3.

Algorithm 3. Sensitive Ant Colony System for GTSP
1: Set parameters, initialize pheromone trails
2: repeat
3:    Place ant k on a randomly chosen node
4:    from a randomly chosen cluster
5:    repeat
6:      Each sPSL-ant builds a solution (Eq. 1, Eq. 2)
7:      Local updating rule (Eq. 7)
8:      Each hPSL-ant builds a solution (Eq. 1, Eq. 2)
9:      Local updating rule (Eq. 7)
10:    until all ants have built a complete solution
11: Global updating rule (Eq. 5)
12: until end condition

3.4 SRM for solving GTSP

A particular technique, inspired by SACS and involving autonomous robots, is the Sensitive Robot Metaheuristic (SRM). SRM was introduced in Pintea et al. (2008).

The model relies on the reaction of virtual sensitive autonomous robots to different stigmergic variables. Each robot is endowed with a distinct stigmergic sensitivity level. SRM ensures a balance between search diversification and intensification.

As it is detailed in Pintea et al. (2008), a stigmergic robot action is determined by ”the environmental modifications caused by prior actions of other robots”. Sensitive robots are artificial entities with a Stigmergic Sensitivity Level (SSL) which is expressed by a real number in the unit interval [0, 1].

As for agents in general, robots with small SSL values are considered explorers of the search space: they are more independent and sustain diversification. The robots with high SSL values exploit the promising search regions already identified by the explorers. The SSL values in the SRM model increase or decrease based on the search-space topology encoded in the robot experience.

Next, the role of the stigmergic robots in solving a combinatorial optimization problem, including the GTSP, is described.

Qualitative stigmergy means that an environmental modification triggers a different action, while quantitative stigmergy is interpreted as a continuous variable which changes the intensity or probability of future actions Bonabeau et al. (1999); Theraulaz and Bonabeau (1999).

Because the robots do not have the ants' capability to deposit chemical substances on their trail, a qualitative stigmergic mechanism is involved in SRM. These robots communicate using the local environmental modifications that can trigger specific actions. There is a set of so-called 'micro-rules' defining the action-stimuli pairs for a homogeneous group of stigmergic robots. These rules define the robots' particular behaviour and determine the type of structure the robots will create Bonabeau et al. (1999); Theraulaz and Bonabeau (1999).

In Pintea et al. (2008) the algorithm is used to solve a large drilling problem, a particular GTSP instance. A detailed description of SRM for the GTSP follows.

  • Initially the robots are placed randomly in the search space. A robot moves at each iteration to a new node. The parameters controlling the algorithm are updated.

  • The next move of a robot is chosen probabilistically, based on the distance to the candidate node and the stigmergic intensity on the connecting edge. In order to stop the unbounded growth of stigmergic intensity, an evaporation process is invoked. Also, a tabu list is maintained, preventing robots from visiting a cluster twice in the same tour. The stigmergic value of an edge is τ(i,j) and the visibility value is η(i,j).

    As in the previous sections, J_i^k denotes the set of unvisited successors of node i for robot k, with u ∈ J_i^k. The sSSL robots probabilistically choose the next node, i being the current robot position. As in the previously presented ant-based techniques, the probability of choosing u as the next node is given by Eq. 1.

    An autonomous robot can be in the team with high or in the team with low stigmergic sensitivity on the basis of a random variable uniformly distributed over [0, 1]. Let q be a realization of this random variable and q₀ a constant, 0 ≤ q₀ ≤ 1. The robots with small stigmergic sensitivity (sSSL) are characterized by q > q₀, while for the robots with high stigmergic sensitivity (hSSL) q ≤ q₀ holds.

    A hSSL-robot uses the information given by the sSSL robots; hSSL robots choose the new node in a deterministic manner, according to Eq. 2. The trail stigmergic intensity is updated using the local stigmergic correction rule:

    τ(i,j,t+1) = (1−ρ)·τ(i,j,t) + ρ·τ₀   (Eq. 8)

  • Globally updating the stigmergic value is the role of the elitist robot that generates the best intermediate solution.

    These elitist robots are the only robots having the opportunity to know the best tour found, and they reinforce this tour in order to focus future searches more effectively. The global updating rule is:

    τ(i,j,t+1) = (1−ρ)·τ(i,j,t) + ρ·Δτ(i,j,t)   (Eq. 9)

    where Δτ(i,j,t) = 1/L⁺ is the inverse value of the best tour length for the edges of the best tour and 0 otherwise. Furthermore, ρ is used as the evaporation rate factor.

  • An execution of the algorithm returns the shortest tour found. The stopping criterion is given by a maximal number of iterations.

The description of the Sensitive Robot Metaheuristic for solving the GTSP is illustrated further in Algorithm 4.

Algorithm 4. Sensitive Robot Algorithm for GTSP
1: Set parameters, initialize stigmergic values of the trails;
2: repeat
3:   Place robot k on a randomly chosen node
4:   from a randomly chosen cluster
5:   repeat
6:    Each robot incrementally builds a solution
       based on the autonomous search sensitivity;
7:    The sSSL robots probabilistically choose
       the next node (Eq.1)
8:    A hSSL-robot uses the information supplied by
       the sSSL robots to find the new node j (Eq.2)
9:    A local stigmergic updating rule (Eq.8);
10:   until all robots have built a complete solution
11:   A global updating rule is applied
       by the elitist robot (Eq.9);
12: until end condition
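The q/q₀ team split used by SRM can be sketched as follows (the function and team names are illustrative):

```python
import random

def assign_teams(robots, q0, seed=0):
    """Sketch of the SRM team split: draw q uniformly in [0, 1] for each
    robot; q <= q0 puts the robot in the exploiting hSSL team, q > q0 in
    the exploring sSSL team."""
    rng = random.Random(seed)
    h_team, s_team = [], []
    for r in robots:
        if rng.random() <= q0:
            h_team.append(r)
        else:
            s_team.append(r)
    return h_team, s_team
```

Tuning q₀ therefore directly controls the diversification/intensification balance: a larger q₀ yields more exploiting hSSL robots.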

3.5 Sensitive Stigmergic Agent System for GTSP

The Sensitive Stigmergic Agent System for GTSP (SSAS) introduced in Chira et al. (2007b) is based on the Sensitive Ant Colony System (SACS) Chira et al. (2007a) and Stigmergic Agent System (SAS) C.Chira et al. (2006).

The concept of stigmergic agents was introduced in C.Chira et al. (2006): the agents communicate directly and also in a stigmergic manner, using artificial pheromone trails produced by the agents, similar to some biological systems Camazine et al. (2001). The novelty of SSAS is that the agents are endowed with sensitivity. The advantage is that agents with sensitive stigmergy can be used for solving complex static and dynamic real-life problems.

A multi-agent system (MAS) approach to developing complex systems involves the employment of several agents capable of interacting with each other to achieve objectives Jennings (2001). The benefits of MAS include the ability to solve complex problems, the interconnection and interoperation of multiple systems, and the capability to handle distributed areas Wooldridge and Dunne (2005); Bradshow (1997).

The SSAS model also inherits agent properties: autonomy, reactivity, learning, mobility and pro-activeness Wooldridge (1999); Iantovics and Enachescu (2009).

The agents are able to cooperate, to exchange information and to learn while acting and reacting in their environment. The agents are also capable of communicating through an agent communication language (ACL).

If an agent also has sensitivity, stronger artificial pheromone trails are preferred and the most promising paths receive a greater pheromone trail after some time. Within the SSAS model each agent is characterized by a pheromone sensitivity level (PSL), as in Section 3.3. SSAS uses, as in SACS, two sets of sensitive stigmergic agents: with small and with high sensitivity values. The role of the sensitive ants from SACS is taken now, more generally, by sensitive-explorer agents with small PSL (sPSL agents) and sensitive-exploiter agents with high PSL (hPSL agents).

The sPSL agents discover new promising regions of the solution space in an autonomous way, sustaining search diversification. The hPSL agents exploit the promising search regions already identified by the sPSL agents. Each agent deposits pheromone on the followed path. Evaporation takes place each cycle, preventing unbounded trail intensity growth. The SSAS model for solving the GTSP is described in Algorithm 5. A run of the algorithm returns the shortest tour found.
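The ACL-style information flow can be illustrated with a minimal, hypothetical knowledge base in which agents announce the edges of their solutions as messages and other agents query their popularity; the class and message format below are assumptions for illustration only.

```python
from collections import defaultdict

# Hypothetical sketch of the SSAS information flow: hPSL agents announce the
# edges of their solutions via ACL-style messages; sPSL agents query the
# shared knowledge base while building their own tours.
class KnowledgeBase:
    def __init__(self):
        self.edge_count = defaultdict(int)

    def receive(self, message):
        """Handle an ("edge", i, j) message sent by an agent."""
        _, i, j = message
        self.edge_count[frozenset((i, j))] += 1

    def popularity(self, i, j):
        """How many agent solutions used the (undirected) edge (i, j)."""
        return self.edge_count[frozenset((i, j))]

kb = KnowledgeBase()
for tour in [(0, 2, 4), (0, 2, 5)]:          # two hPSL-agent solutions
    for a, b in zip(tour, tour[1:] + tour[:1]):
        kb.receive(("edge", a, b))
```

Edges announced by several agents accumulate higher counts, so an edge shared by both sample tours becomes the most popular entry in the knowledge base.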

Algorithm 5. Sensitive Stigmergic Agent System for GTSP
1: Initialize pheromone trails and knowledge base
2: repeat
3:   Activate a set of agents with various PSL
4:   Place each agent in search space
5:   repeat
6:    Each hPSL-agent moves to a new node (Eq. 1, Eq. 2)
7:    The agent sends an ACL message
       about the latest edge formed
8:   until all hPSL-agents have built a complete solution
9:   repeat
10:    Each sPSL-agent receives and uses
         the information sent by hPSL-agents
         or the information available in the knowledge base
11:    Apply a local pheromone update rule (Eq. 3)
12:   until all sPSL-agents have built a complete solution
13:   Global pheromone update rule (Eq. 5)
14: until end condition

4 Evaluations of Agent-Based Algorithms for E-GTSP

First, some numerical experiments are presented in order to compare the described algorithms. Based on these results and on the results from the related papers, the advantages and disadvantages of the reinforced, sensitive and stigmergic agent-based algorithms for E-GTSP are discussed.

4.1 Computational Analysis

In order to evaluate the performance of the mentioned algorithms, Euclidean problems converted from the TSP library TSPLIB95 (2008) are used. In order to divide the set of nodes into subsets, the procedure proposed in Fischetti et al. (1997) was used, as in Fischetti et al. (2002b); Gutin and Karapetyan (2010) and Karapetyan (2012). For this survey paper, Euclidean problems of the Padberg-Rinaldi (PR) data set of city problems are used; they can be obtained from the GTSP Instances Library Gutin and Karapetyan (2010).

Other numerical results are detailed in the related papers Chira et al. (2007a); Pintea (2010); Chira et al. (2007b); Pintea et al. (2006). The algorithms were implemented in Java and tested on a 2600+, 333 MHz machine with 2GB RAM.

Parameters. The parameters used for the agent-based approaches are set as follows.

  • The initial value of all pheromone trails is τ₀ and the upper bound for the pheromone evaporation phase is τmax, both expressed in terms of Lnn, the solution of the Nearest Neighbor algorithm (see Reihaneh and Karapetyan (2012)).

  • The other parameters (e.g. β, ρ and q₀) take fixed values, and ten ants are used for all considered algorithms.

  • Besides the settings inherited from ACS, the SACS algorithm for the GTSP uses a sensitivity parameter, the PSL. The sensitivity level of the hPSL ants is considered to be distributed in the upper part of the unit interval, while the sPSL ants have their sensitivity level in the lower part. In SRM the parameter q is considered a random value in [0, 1].

  • It has been tested and observed that the best results are obtained by SSAS strategies assigning low PSL values to the majority of agents; a low PSL value is therefore considered for all agents.

All the solutions of the agent-based approaches are the average of five successive runs of the algorithm for each problem. The maximal computational time is set by the user, in this case ten minutes.

In the following tables the computational results for solving the GTSP using Ant Colony System (ACS), Reinforcing ACS (RACS), Sensitive Ant Colony System (SACS), Sensitive Robot Metaheuristic (SRM) and Sensitive Stigmergic Agent System (SSAS) are compared. The columns in the tables are as follows:

Problem: The name of the test problem. The digits at the beginning of the name give the number of clusters (nc); those at the end give the number of nodes (n).

ACS, RACS, SACS, SRM, SSAS: The gap of the mean values after five runs returned by the already mentioned agent-based algorithms. The gap is a percentage value computed as the difference between an algorithm's solution and the optimal solution, divided by the optimal solution.
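The gap column can be sketched as:

```python
def gap_percent(solution, optimum):
    """Percentage gap of an algorithm's solution with respect to the optimum."""
    return 100.0 * (solution - optimum) / optimum
```

A gap of 0 therefore means the algorithm's mean solution equals the known optimum.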

Problem       ACS     RACS    SACS    SRM     SSAS
16PR76        0       0       0       0       0
22PR107       0.06    0       0.01    0.13    0
22PR124       0.30    0       0.14    0.01    0
28PR136       0.23    0       0.12    0.05    0
29PR144       1.47    0       0.04    0.14    0
31PR152       1.07    0.01    0.53    0.32    0
46PR226       2.82    0.21    1.24    1.44    0
53PR264       2.76    0.01    0.54    0.62    0
60PR299       4.25    0.53    0.26    0.24    0.13
88PR439       39.19   4.72    4.73    5.86    0.89
201PR1002     44.15   21.24   22.20   18.15   16.30
Table 1: Agent-based approaches ACS, RACS, SACS, SRM and SSAS: comparative mean results

4.2 Statistical analysis: advantages and disadvantages

Table 1 lists the mean values of five successive runs for each instance. For the smallest instances, with fewer than 40 clusters, every proposed algorithm obtained the optimal result at least once. Between 40 and 60 clusters, all algorithms except ACS found the optimal value at least once. For instances with more than 60 clusters the optimal value was never found; the smallest gap was obtained by SSAS, whose mean values are better than those of the other algorithms.

The Expected Utility Approach of Golden and Assad (1984) is employed for statistical analysis purposes. The results of the test are shown in Table 2.

Let x be the percentage deviation of a particular heuristic's solution from the best known solution on a given problem:

x = 100 · (heuristic solution − best known solution) / (best known solution).

The expected utility function can be expressed as:

EU = γ − β(1 − b̂t)^(−ĉ),

where γ = 500, β = 100 and t = 0.05. Here b̂ = s²/x̄ and ĉ = (x̄/s)² are the estimated parameters of the Gamma function fitted to the deviations, with x̄ and s² their sample mean and variance. All values are translated with five units in order to use the current statistical analysis technique. The eleven problems of Table 1 are considered for testing, and the notations used in Table 2 are x̄, s², b̂, ĉ, the expected utility EU and the rank.

As indicated in Table 2, the SSAS model has Rank 1 (the last column in Table 2), followed by the SRM and RACS models. This result emphasizes that SSAS is more accurate than the other techniques on the considered problem instances, as SSAS combines the best features of each preceding algorithm.

        x̄        s²       b̂        ĉ        EU         Rank
ACS     1.0236   8.7454   8.5434   0.1198   393.0964   5
SACS    0.5420   1.4555   2.6854   0.2018   397.0472   4
RACS    0.4858   1.3628   2.8051   0.1732   397.3482   3
SRM     0.4989   1.0521   2.1088   0.2366   397.3288   2
SSAS    0.3149   0.7974   2.5322   0.1244   398.3022   1
Table 2: Statistical analysis results for the compared agent-based models
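The expected utility test of Golden and Assad (1984) can be reproduced from the sample mean and variance alone. The sketch below (illustrative Python, not the paper's code) uses the customary constants γ = 500, β = 100, t = 0.05, under which the computed values agree with the EU column of Table 2:

```python
def expected_utility(mean, var, gamma=500.0, beta=100.0, t=0.05):
    # Fit a Gamma distribution to the deviations by moment matching:
    # estimated scale b = var/mean, estimated shape c = mean^2/var.
    b = var / mean
    c = mean ** 2 / var
    return gamma - beta * (1.0 - b * t) ** (-c)

# The rank-5 row of Table 2 (mean 1.0236, variance 8.7454):
print(expected_utility(1.0236, 8.7454))  # close to the reported 393.0964
```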

The Ant Colony System shows once again the stability of the model introduced by Dorigo (1992) and developed for GTSP in Pintea et al. (2006); Pintea (2010). As Table 1 shows, RACS performs well on the small instances, obtaining the optimal solution in every execution for many of them.

The sensitivity mechanism involved in ACS has the ability to identify good solutions for several instances, and even some optimal solutions for medium and large instances.

The autonomous stigmergic robots of SRM obtain good results and could be further improved by tuning the parameter values and execution time, by hybrid techniques, or by involving the Lin-Kernighan algorithm and its variants Karapetyan and Gutin (2011); Reihaneh and Karapetyan (2012). Another way to improve SRM would be to make the robots work fully in parallel in the inner loop of the algorithm.

SSAS reports better running times for the best values compared to the other models, suggesting the benefits of model heterogeneity in the search process. The SSAS model can be improved in terms of execution time and by using different parameter values. Other improvements involve an efficient combination with other algorithms or the capability of agents working in parallel.

Each complex combinatorial optimization problem has its own particularities; therefore, these biologically inspired techniques should first be tested, and the agent-based metaheuristic with the best results used further. The introduced techniques could also be used in hybrid algorithms for improving classification techniques Parpinelli et al. (2002); Stoean et al. (2009). Hybrid algorithms using these agent-based models have the chance to solve various real-life NP-hard problems.

5 Conclusion

Several agent-based algorithms are involved in solving the equality Generalized Traveling Salesman Problem. Agent properties such as autonomy, sensitivity, cooperation and communication are strongly implied in the process of finding good solutions for the specified problem. The advantage of the reinforced, sensitive and stigmergic agent-based methods lies in their computational results, which are good and competitive with the existing heuristics from the literature. Some disadvantages are the multiple parameters used by the algorithms and the high hardware resource requirements.

6 Acknowledgement

I would like to express my appreciation to Daniel Karapetyan for his very helpful comments and valuable advice.


  • Bonabeau et al. (1999) E. Bonabeau, M. Dorigo, and G. Theraulaz. Swarm intelligence from natural to artificial systems. Oxford University Press, 1999.
  • Bradshow (1997) J.M. Bradshow. An Introduction to Software Agents, in Software Agents. MIT Press, 1997.
  • Cacchiani et al. (2010) V. Cacchiani, A.E.F. Muritiba, M. Negreiros, and P. Toth. A multistart heuristic for the equality generalized traveling salesman problem, 2010. DOI 10.1002/net.
  • Camazine et al. (2001) S. Camazine, J.L. Deneubourg, N.R. Franks, J. Sneyd, G. Theraulaz, and E. Bonabeau. Self organization in biological systems. Princeton Univ. Press, 2001.
  • C.Chira et al. (2006) C. Chira, C-M. Pintea, and D. Dumitrescu. Stigmergic agent optimization. Romanian J. Information Science Technology, 9(3):175–183, 2006.
  • Chira et al. (2008) C. Chira, D. Dumitrescu, and C-M. Pintea. Heterogeneous sensitive ant model for combinatorial optimization. In

    Genetic and Evolutionary Computation Conference (GECCO 2008), Atlanta, USA

    , pages 163–164, 2008.
  • Chira et al. (2007a) C. Chira, C-M. Pintea, and D. Dumitrescu. Sensitive ant systems in combinatorial optimization. In Special Issue Studia Informatica, KEPT 2007, pages 185–192, 2007a.
  • Chira et al. (2007b) C. Chira, C-M. Pintea, and D. Dumitrescu. Sensitive stigmergic agent systems. In Adaptive and Learning Agents and Multi-Agent Systems, ALAMAS, Maastricht, The Netherlands, MICC Tech.Report Series, 07-04, pages 51–57, 2007b.
  • Colorni et al. (1991) A. Colorni, M. Dorigo, and V. Maniezzo. Distributed optimization by ant colonies. In Proc. of ECAL-91-Euro. Conf. on Artif. Life, Paris, France, pages 134–142. Elsevier Publishing, 1991.
  • Dorigo (1992) M. Dorigo. Optimization, Learning and Natural Algorithms. PhD thesis, Dipart. di Elettronica, Politecnico di Milano, Italy, 1992.
  • Dorigo and Blum (2005) M. Dorigo and C. Blum. Ant colony optimization theory: A survey. Theoretical Computer Science, 344(2):243–278, 2005.
  • Dorigo et al. (1999) M. Dorigo, G. Di Caro, and L.M. Gambardella. Ant algorithms for discrete optimization. Artificial Life, 5:137–172, 1999.
  • Dorigo and Gambardella (1996) M. Dorigo and L.M. Gambardella. Ant colony system: A cooperative learning approach to the traveling salesman problem. IEEE Trans. on Systems, Man, and Cybernetics, 26:29–41, 1996.
  • Fischetti et al. (1997) M. Fischetti, J.J.S. Gonzales, and P. Toth. A branch-and-cut algorithm for the symmetric generalized travelling salesman problem. Oper. Res., 45(3):378–394, 1997.
  • Fischetti et al. (2002a) M. Fischetti, J.J.S. Gonzales, and P. Toth. The Generalized Traveling Salesman and Orienteering Problem. Kluwer, 2002a.
  • Fischetti et al. (2002b) M. Fischetti, J.J. Salazar Gonzalez, and P. Toth. Gtsp instances, 2002b. Available electronically at http://www.cs.rhul.ac.uk/home/zvero/GTSPLIB/.
  • FIPA (2002) FIPA. Foundation for Intelligent Physical Agents, 2002. Available electronically at http://www.fipa.org.
  • Golden and Assad (1984) B.L. Golden and A.A. Assad. A decision-theoretic framework for comparing heuristics. European J. of Oper.Res, 18:167–171, 1984.
  • Gutin and Karapetyan (2010) G. Gutin and D. Karapetyan. A memetic algorithm for the generalized traveling salesman problem. Natural Computing, 9:47–60, 2010.
  • Iantovics and Enachescu (2009) B. Iantovics and C. Enachescu. Intelligent complex evolutionary agent-based systems. In Proceedings of the 1st International Conference on Bio-inspired Computational Methods used for Solving Difficult Problems, pages 116–124. Springer, 2009.
  • Jennings (2001) N.R. Jennings. An agent-based approach for building complex software systems. Comms. of the ACM, 44(4):35–41, 2001.
  • Karapetyan (2012) D. Karapetyan. Gtsp instances library, 2012. Available electronically at http://www.sfu.ca/~dkarapet/gtsp.html.
  • Karapetyan and Gutin (2011) D. Karapetyan and G. Gutin. Lin-kernighan heuristic adaptation for the generalized traveling salesman problem. European Journal of Operational Research, 208:221–232, 2011.
  • Karapetyan and Gutin (2012) D. Karapetyan and G. Gutin. Efficient local search algorithms for known and new neighborhoods for the generalized traveling salesman problem. European Journal of Operational Research, 219:234–251, 2012.
  • Laporte and Nobert (1983) G. Laporte and Y. Nobert. Generalized traveling salesman problem through n sets of nodes: An integer programming approach. INFOR., 21(1):61–75, 1983.
  • Noon and Bean (1991) C.E. Noon and J.C. Bean. A lagrangian based approach for the asymmetric generalized traveling salesman problem. Oper. Res., 39:623–632, 1991.
  • Parpinelli et al. (2002) R.S. Parpinelli, H.S. Lopes, and A.A. Freitas. Data mining with an ant colony optimization algorithm. IEEE Transactions on Evolutionary Computation, 6:321–332, 2002.
  • P.C.Pop et al. (2009) P.C.Pop, C.M.Pintea, I.Zelina, and D.Dumitrescu. Solving the generalized vehicle routing problem with an acs-based algorithm. In Conference Proceedings: BICS 2008, Tg.Mures, Romania, AIP Springer, volume 1117, pages 157–162, 2009.
  • Pintea et al. (2008) C-M. Pintea, C.Chira, D.Dumitrescu, and C.P.Pop. A sensitive metaheuristic for solving a large optimization problem. In International Conference SOFSEM 2008, Springer, LNCS 4910, pages 551–559, 2008.
  • Pintea et al. (2011) C-M. Pintea, C. Chira, D. Dumitrescu, and P.C.Pop. Sensitive ants in solving the generalized vehicle routing problem. Int.J.Comm.Control, 6(4):731–738, 2011.
  • Pintea and Dumitrescu (2005) C-M. Pintea and D. Dumitrescu. Improving ant systems using a local updating rule. In Proceedings Seventh International Symposium on Symbolic and Numeric Algorithms for Scientific Computing, SYNASC 2005, pages 295–298, 2005.
  • Pintea et al. (2006) C-M. Pintea, P.C. Pop, and C. Chira. Reinforcing ant colony system for the generalized traveling salesman problem. In BIC-TA Proceedings, vol.Evolutionary Computing, pages 245–252, 2006.
  • Pintea (2010) Camelia-M. Pintea. Combinatorial optimization with bio-inspired computing, PhD Thesis. EduSoft, 2010.
  • Pop (2007) P.C. Pop. New integer programming formulations of the generalized traveling salesman problem. American Journal of Applied Sciences, 4:932–937, 2007.
  • Reihaneh and Karapetyan (2012) M. Reihaneh and D. Karapetyan. An efficient hybrid ant colony system for the generalized traveling salesman problem. Algorithmic Operations Research, 7:21–28, 2012.
  • Renaud and Boctor (1998) J. Renaud and F.F. Boctor. An efficient composite heuristic for the symmetric generalized traveling salesman problem. European J. of Oper. Res., 108:571–584, 1998.
  • Russell and Norvig (2003) S. Russell and P. Norvig. Artificial Intelligence: A Modern Approach. Pearson, 2003.
  • Silberholz and B.L.Golden (2007) J. Silberholz and B.L. Golden. The generalized traveling salesman problem: A new genetic algorithm approach, 2007. Extending the horizons: Advances in computing, optimization decision technologies, Springer, New York.
  • Snyder and Daskin (2006) L.V. Snyder and M.S. Daskin. A random-key genetic algorithm for the generalized traveling salesman problem. European J. of Oper.Res., 174(1):38–53, 2006.
  • Stoean et al. (2009) R. Stoean, M. Preuss, C. Stoean, E. El-Darzi, and D. Dumitrescu. Support vector machine learning with an evolutionary engine. Journal of the Operational Research Society, 60:1116–1122, 2009.
  • Stützle and Hoos (1997) T. Stützle and H.H. Hoos. The max-min ant system and local search for the traveling salesman problem. In Proc. Int. Conf. on Evol. Comp., pages 309–314. IEEE Press, Piscataway, NJ, 1997.
  • Theraulaz and Bonabeau (1999) G. Theraulaz and E. Bonabeau. A brief history of stigmergy. Artificial Life, 5:97–116, 1999.
  • TSPLIB95 (2008) TSPLIB95. Tsp library, 2008. Available electronically at http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/index.html.
  • Wooldridge (1999) M. Wooldridge. Intelligent agents. In G. Weiss, editor, An Introduction to Multiagent Systems. Wiley, 1999.
  • Wooldridge and Dunne (2005) M. Wooldridge and P.E. Dunne. The complexity of agent design problems: Determinism and history dependence. Annals of Mathematics and Artificial Intelligence, 45(3):343–371, 2005.