Hierarchical Needs Based Self-Adaptive Framework For Cooperative Multi-Robot System

04/23/2020 ∙ by Qin Yang, et al.

Research in multi-robot and swarm systems has seen significant interest in the cooperation and coordination of agents in complex and dynamic environments. To effectively adapt to unknown environments and maximize the utility of the group, robots need to cooperate, share information, and make plans suited to the specific scenario. Inspired by Maslow's hierarchy of needs and systems theory, we present the Robot's Need Hierarchy and propose a new framework called Self-Adaptive Swarm System (SASS), combining multi-robot perception, communication, planning, and execution. To manage conflicts during cooperation, we design a Negotiation-Agreement Mechanism based on the robot's need priority. We also provide several Atomic Operations to decompose complex tasks into simple executable behaviors. Finally, we evaluate SASS by simulating static and dynamic tasks and comparing it with a state-of-the-art collision-aware task assignment method integrated into our framework.


I Introduction

In nature, the interactions between and within the various elements of a system are complex [7]. Simple principles acting at the individual agent level can result in complex behavior at the global level of the swarm system. Many natural systems (e.g., brains, immune systems, ecologies, societies) and artificial systems (parallel and distributed computing systems, artificial intelligence, evolutionary programs) are characterized by apparently complex behaviors that emerge as a result of often nonlinear spatiotemporal interactions among a large number of component systems at different levels of organization [21].

To maximize the chance of detecting predators, forage, and save energy while migrating to different locations, animals usually split into different swarms that minimize their aggregated threat and maximize benefits according to the scenario, forming complex adaptive systems. These behaviors and patterns are referred to as swarm intelligence, the collective behavior of distributed and self-organized systems [5]. The intelligence exhibited here is the adaptive global behavior that arises when individual agents act on their current needs and specific control laws, using their capabilities to maximize individual or global system utility in dynamic scenarios.

Multi-robot systems (MRS) [32] potentially share the properties of swarm intelligence in practical applications such as search, rescue, mining, map construction, and exploration. An MRS that allows task-dependent dynamic reconfiguration into teams is among the grand challenges in robotics [43], necessitating research at the intersection of communication, control, and perception. Currently, planning-based approaches combined with star-shaped communication models generally cannot scale or handle a large number of agents in a distributed manner [22, 9].

Fig. 1: Illustration of reactive multi-robot planning where robots move to Task 1 and Task 2 from their initial positions. During this execution, Task 3 is assigned and the robots react to the new task's requirements.

Rizk et al. [34] group heterogeneous MRS into four levels based on the complexity of the cooperative tasks and the level of automation. The fourth level of automation combines task decomposition, coalition formation, task allocation, and task execution/planning and control. However, little work in the literature addresses this fourth level. Therefore, to achieve more advanced automation, the MRS framework needs to be combined with auxiliary controllers to handle conflicts, decompose complicated tasks, and adapt to dynamically changing scenarios.

In this paper, we first propose a conceptual framework, the Self-Adaptive Swarm System (SASS), combining perception, communication, planning, and execution to address this gap. See Fig. 1 for an illustration of the concept. Our preliminary work, published as an extended abstract in [44], introduced the problem of multiple robots fulfilling dynamic tasks using state transitions represented through a Behavior Tree [8] and laid the foundations for the contributions made in this paper, which are outlined below.

  • Robot’s Need Hierarchy: To model an individual robot’s motivation and needs in the negotiation process, we introduce a prioritization technique inspired by Maslow’s hierarchy of human psychological needs [25] and solve the conflicts associated with the sub-tasks/elements in task planning. We define the Robot’s Need Hierarchy at five different levels (see Fig. 2): safety needs; basic needs (energy, time constraints, etc.); capability (heterogeneity, hardware differences, etc.); team cooperation (global utility, team performance, coordination, and global behaviors); and self-actualization (learning and self-upgrade).

  • Negotiation-Agreement Mechanism: We propose a distributed Negotiation-Agreement mechanism and planning framework for selection (task assignment), formation (shape control), and routing (path planning) in MRS, represented through a Behavior Tree (BT) [44] for automated planning of state-action sequences.

  • Atomic Operation: We decompose complex tasks into a series of simple sub-tasks and recursively achieve those sub-tasks until the high-level task is completed. We provide several Atomic Operations for the swarm behavior: Selection, Formation, and Routing, which allow us to decompose a particular robot’s action plans, such as flocking, pattern formation, and route planning, under the same framework.

Fig. 2: Hierarchy of Robot Needs.

II Related Work

Swarm robotics and swarm intelligence have been well studied in the literature [16]. Multi-robot modeling and planning algorithms are among those well-studied topics, yet they are often limited to task-specific or scenario-specific applications. Martinoli [24] presents a modeling technique based on rate equations, and Winfield et al. [41] describe a promising method that uses temporal logic to specify and possibly prove emergent swarm behavior. Soysal and Şahin [36] apply combinatorics and linear algebra to derive a model for an aggregation behavior of swarms. Some studies also applied control theory to model and analyze multi-robot and swarm systems [14, 12]. Recently, Otte et al. [29] discussed various auction methods for the multi-robot task allocation problem in communication-limited scenarios where the rate of message loss between the auctioneer and the bidders is uncertain.

From the multi-agent systems perspective, one of the pioneering works, especially in distributed artificial intelligence, is [35], where the authors defined the Contract Net Protocol (CNP) for decentralized task allocation. Aknine et al. [2] extended this idea to negotiation between manager agents and contractor agents. A protocol for dynamic task assignment (DynCNET) was developed by Weyns et al. [40]. However, these methods rely on a central agent (such as an auctioneer or a contractor) for the design of the negotiation protocol and a supportive framework. Also, they do not generally consider changes in the agents’ status, which restricts their direct applicability in real-world scenarios without adaptation.

Importantly, unlike distributed robotic systems, MRS emphasizes a large number of agents and promotes scalability, for instance, by using local communication [17], which plays a vital role in building the relationships between robots that allow the whole system to adapt to different environments and situations. In Tab. I, we present a comparison of SASS against common MRS frameworks that focus on task allocation and planning at different automation levels.

Approach | Ref. | Scenario | Scale
ABBA | [18] | dynamic | 10
CHARON | [13] | static | 10
Token Passing | [11] | dynamic | 10
Teamcore | [37] | dynamic | 10
ALLIANCE | [31] | dynamic | 20
ASyMTRe | [30] | dynamic | 20
BITE | [19] | dynamic | 10
DIST Layered | [15] | dynamic | 10
STEAM | [38] | dynamic | 100
Market-Based | [10] | dynamic | 20
Hierarchy-Based | [23] | static | 20
SASS | Ours | dynamic |
HET: Heterogeneous, COOP: Cooperation, COM: Communication, NEGO: Negotiation, DEC: Decision Making, DIST: Distributed Agents, Navi.: Navigation.
TABLE I: Comparison of typical multi-robot system frameworks in the literature, compared in terms of HET, COOP, COM, NEGO, DEC, Learning, and DIST support, the scenario type and scale, and the problems addressed (Patrol, Coverage, Formation, Navi.).

In multi-robot planning and control, several studies focus on navigating a robot from an initial state to a goal state [39, 3, 42], where the individual robots' entire route planning is usually computed in a high-dimensional joint configuration space. Since formations are stricter, requiring robots to maintain certain relative positions as they move through the environment, the flocking problem can be viewed as a sub-case of the formation control problem that imposes only minimal requirements on the paths taken by specific robots [28]. The research questions here are how to design suitable local control laws for each robot to complete the globally assigned tasks efficiently and cooperatively, and how paths can be planned for permutation-invariant multi-robot formations [20]. Earlier solutions for such flocking and formation problems were based on local interaction rules [33] or behavior-based approaches [26, 4]. More recent approaches focus on proving stability and convergence properties of multi-robot behaviors based on control-theoretic principles [6, 27].

In summary, to develop more advanced and intelligent MRS, there is a need to realize the fourth level of automation, which combines task decomposition, group formation, planning, and control [34]. More importantly, MRS should be able to adapt to dynamic changes in the environment and task assignments.

III Approach Overview

We design a simple scenario to implement SASS and its distributed algorithms. In our scenarios, a group of swarm robots cooperates to complete a set of tasks. Since the task assignments change dynamically, the robots need to change their plans and adapt to the new scenario to maintain the group utility.

In our framework, we decompose the complex tasks into a series of simple sub-tasks and recursively achieve those sub-tasks until the entire task is completed. Based on this idea, we can divide the whole process into three steps: selection, formation, and routing. This process can be illustrated as Behavior Tree that integrates the sense-think-act cycle [44].

In the first step, the robots are partitioned into one or more groups to perform multiple tasks such as surveillance and patrolling. Then, they compute their placements within a formation shape (we assume a circular shape, but this can be relaxed) at each task to encircle the task location. Finally, the robots choose a suitable path to reach their goal points in that formation. When new tasks are assigned, the robots need to split up to form new groups or merge into existing groups. Accordingly, we divide the proposed SASS into four parts.

  • Perception

    • Sensing the environment through individual robot’s sensor (knowing the current state of itself).

  • Communication

    • Communicating with other robots (knowing the current state of other robots).

  • Planning

    • Computing potential plans according to the robot's current state (from Sensing and Communication).

      • Selection of a suitable task and forming a group.

      • Formation of a desired shape within the group.

      • Routing plan to the task and formation.

    • Negotiating with other robots for planning.

    • Agreement with other robots’ plans so that there are no conflicts.

  • Execution of the selection, formation, and routing process after agreement.

Each robot will first verify that there is a task assigned. If assigned, it will compute an appropriate plan according to its current state and needs. Then, it will communicate with other robots and perform the negotiation and agreement process until there are no conflicts. Finally, the robots will execute their plans. This process is continuously repeated as a loop in a behavior tree, which processes the flow from left to right.
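For concreteness, the following minimal Python sketch shows how this repeated perceive-communicate-plan-negotiate-execute cycle could be organized; all function names and placeholder bodies are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of one SASS cycle per robot (illustrative names and
# placeholder bodies; the real framework runs this as a Behavior Tree loop).

def perceive(robot):
    # Local sensing: each robot only knows its own state directly.
    return {"id": robot["id"], "pos": robot["pos"], "energy": robot["energy"]}

def exchange(states):
    # Stand-in for local broadcast/receive of states between robots.
    return {s["id"]: s for s in states}

def make_plan(phase, robot, shared_states, task):
    # Placeholder: selection, formation, or routing plan for this robot.
    return {"phase": phase, "robot": robot["id"], "task": task}

def negotiate(phase, plans, shared_states):
    # Placeholder: build the needs-based priority queue and resolve conflicts.
    # Returns (possibly revised plans, conflict_free flag).
    return plans, True

def sass_cycle(robots, task):
    states = [perceive(r) for r in robots]
    shared = exchange(states)
    for phase in ("selection", "formation", "routing"):
        plans = [make_plan(phase, r, shared, task) for r in robots]
        while True:  # negotiate until an agreement (no conflicts) is reached
            plans, agreed = negotiate(phase, plans, shared)
            if agreed:
                break
        for p in plans:
            pass  # execute the agreed plan for this phase

robots = [{"id": i, "pos": (float(i), 0.0), "energy": 90.0 - i} for i in range(4)]
sass_cycle(robots, task="T1")
```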

The proposed framework is formalized using the tuple ⟨S, A, δ, F⟩. Here, S = {P_i, L_i, N_i, G_i}, where P represents perception, L represents plan, N represents negotiation, and G represents agreement. The subscript i represents the number of iterations of each process, which is finite. A is the set of an individual robot's actions (behaviors). The transition function that maps states (conditions) to actions (behaviors) is δ, defined as δ : S → A. F represents the accept state.

After the perception phase, a robot could have several plans L_1, …, L_n, where the plans depend on each other and require sequential execution. Therefore, each plan has its own negotiation and agreement phases.

To model an individual robot’s motivation and needs in the negotiation process, we introduce a priority queue technique inspired by Maslow’s hierarchy of human psychological needs [25]. We define a robot’s need hierarchy at several levels, as shown in Fig. 2. The lowest level represents the robot’s safety needs: in all scenarios, a robot should first consider safety issues (including human-safe operation), such as avoiding conflicts, adversarial attacks, and so forth. When the situation satisfies the robot’s safety needs, the robot then considers its basic but vital needs, such as energy and time availability. Next, it reviews its capabilities against the task requirements, subsuming the task priority considerations. At the fourth level, the robot evaluates rewards and utility costs (such as distance and communication) for cooperation within a group. The fifth level is reserved for self-actualization (which we do not consider in this paper) but could potentially allow multi-robot learning strategies; for instance, after finishing the tasks, robots can upgrade their capabilities based on their experiences through a learning and evolution process. It is worth noting that the basic and safety needs for robots are flipped compared to the human needs in Maslow’s hierarchy.

The needs at a lower level are the precondition for entering higher levels. If a robot cannot satisfy its low-level needs, it changes its behaviors accordingly. We design a priority queue to abstract this model and implement it in the negotiation process of our multi-robot planning framework.

Also, an individual robot’s current needs can change dynamically according to the scenario. For example, in the normal state, the robot’s behaviors and plans maximize the task’s requirements and minimize its energy use. But in cases of conflicting plans with other robots (e.g., a potential collision with a neighboring robot), it plans such that its safety needs are guaranteed. Similarly, if the robot runs out of battery, it needs to satisfy its basic needs first (e.g., recharging the battery) before completing the task.
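As a minimal sketch of how this needs hierarchy can be encoded, the snippet below builds a priority key in which unsatisfied lower-level needs dominate higher-level utility; the field names, the LOW_BATTERY threshold, and the omission of the capability level (we assume homogeneous robots) are illustrative assumptions rather than the exact implementation.

```python
# Sketch of the Robot's Need Hierarchy as an ordered priority key
# (illustrative thresholds and fields; not the authors' implementation).
from dataclasses import dataclass

LOW_BATTERY = 20.0  # assumed threshold (%) for the "basic needs" level

@dataclass
class RobotState:
    rid: int
    energy: float            # battery level in percent
    in_collision_risk: bool  # level-1 safety condition
    task_utility: float      # level-4 team/utility term (higher is better)

def need_priority(s: RobotState):
    # Lower tuple sorts first; unsatisfied lower-level needs dominate.
    return (
        0 if s.in_collision_risk else 1,      # level 1: safety first
        0 if s.energy < LOW_BATTERY else 1,   # level 2: basic needs (energy)
        -s.task_utility,                      # level 4: team utility
        s.rid,                                # unique ID breaks remaining ties
    )

robots = [
    RobotState(1, 85.0, False, task_utility=3.0),
    RobotState(2, 12.0, False, task_utility=5.0),  # low battery -> high priority
    RobotState(3, 60.0, True,  task_utility=1.0),  # safety risk -> highest priority
]
for s in sorted(robots, key=need_priority):
    print(s.rid)  # prints 3, 2, 1
```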

IV Algorithms for Self-Adaptive Swarm System

SASS consists of the following modules: Perception, Communication, Planning, Negotiation, Agreement, and Execution. We discuss them separately below.

1. Perception: Each robot uses various on-board (local) sensors for localization, mapping, and recognizing objects/obstacles in the environment.

2. Communication: The process of communication between robots includes broadcasting and reception of the robot’s messages (state) to/from other robots.

3. Planning: In the planning stage, we divide the process into three steps: Selection, Formation, and Routing. These atomic operations are illustrated in Fig. 3 with an example planning problem. In each step, we also introduce a priority queue technique, which helps an individual robot negotiate with other robots and reach an agreement efficiently.

3A. Selection Planning: In our scenario, we assume that each task has a priority (level-4 in Fig. 2), a minimum number of robots required to perform it, and a task duration (timeout). Each individual robot also has an energy level (battery) (level-2 in Fig. 2). We assume all robots are homogeneous and hence do not implement the third level of needs (capability); for heterogeneous robotic systems, we would consider this priority as well.

For selection planning, we minimize the sum of the group’s energy costs to obtain a reasonable grouping/partitioning. Then, according to the tasks’ priorities and requirements, we distribute the diverse tasks to the different groups, abstracting this as a linear partition problem.

Through this process, we can ensure that every group has the capability required to achieve its specific task. Since each robot computes this process in a distributed manner, the result might conflict with other robots’ results, depending on the uncertainty in the data each robot possesses and receives. For instance, one robot might have been assigned to several groups. To prevent such a situation, we build a priority queue before this process and sort it with successive priority levels until we obtain a unique priority queue shared by all the robots, which can then be used to perform conflict-free selection planning.
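The following sketch illustrates one way such a unique, conflict-free ordering could drive selection: every robot sorts the same (energy, ID) priority queue and fills task groups greedily, so all robots derive the same assignment independently. The Low-Energy law, the greedy fill, and the data layout are assumptions for illustration only.

```python
# Sketch of conflict-free selection via a totally ordered priority queue
# (Low-Energy law; illustrative, not the authors' implementation).

def selection(robots, tasks):
    """robots: {rid: battery %}; tasks: list of (task_id, priority_rank,
    robots_needed), where a smaller priority_rank means a more urgent task."""
    # Low-Energy law: lower battery is served first; the unique robot ID
    # breaks ties so every robot derives the *same* queue independently.
    queue = sorted(robots, key=lambda rid: (robots[rid], rid))
    assignment = {}
    for task_id, _rank, needed in sorted(tasks, key=lambda t: t[1]):
        for rid in queue:
            if needed == 0:
                break
            if rid not in assignment:   # each robot joins exactly one group
                assignment[rid] = task_id
                needed -= 1
    return assignment

robots = {1: 40.0, 2: 85.0, 3: 55.0, 4: 70.0}
tasks = [("T1", 1, 2), ("T2", 2, 2)]    # (id, priority rank, robots needed)
print(selection(robots, tasks))
# {1: 'T1', 3: 'T1', 4: 'T2', 2: 'T2'}
```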

Fig. 3: Illustration of Task Decomposition through planning at Selection, Formation, and Routing phases with 12 robots and 3 tasks.

3B. Formation Planning: Each robot makes a formation plan according to the selection plan; here, each robot knows which group (task) it belongs to. To simplify the formation models, we assume the robots need to form a regular polygon surrounding the task assignment location (the group’s center). The initial point is located at the North position, and the remaining point assignments within the formation follow in clockwise order, chosen by minimizing the system utility cost as mentioned before (we use distance as the utility cost and the energy level for prioritizing the robot’s needs).
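For illustration, a minimal sketch of the formation slots is given below: n points on a regular polygon of a given radius around the task center, starting at North and proceeding clockwise. The radius value and circle-based placement are assumptions; the assignment of robots to these points follows the negotiation described later.

```python
# Sketch of the formation shape: n slots on a regular polygon around the
# task center, starting at North and proceeding clockwise (illustrative).
import math

def formation_slots(center, radius, n):
    """Return n formation points around `center`, starting at North."""
    cx, cy = center
    slots = []
    for k in range(n):
        # 90 degrees is North; subtracting each step moves clockwise.
        theta = math.radians(90.0) - k * (2.0 * math.pi / n)
        slots.append((cx + radius * math.cos(theta),
                      cy + radius * math.sin(theta)))
    return slots

# Four slots around a task at the origin: roughly North, East, South, West.
print(formation_slots(center=(0.0, 0.0), radius=2.0, n=4))
```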

3C. Route Planning: Using its local map, each robot computes candidate routes (path plans) and selects the shortest path to the goal point resulting from the Formation plan. We suppose each robot can select between two kinds of actions/behaviors: moving at a constant speed or stopping. If some robots have a conflict while making their routing plans, they create a priority queue containing all conflicting robots and share it with their neighbors through local communication. Then, according to the Task and Energy priorities of the robots, each robot in this queue negotiates to decide the corresponding actions of the robots that have conflicts. Until an agreement is reached, the priority queue is updated based on the needs hierarchy to resolve the conflict.
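A minimal sketch of this step-wise conflict resolution is shown below: when two robots' next waypoints collide, the robot ranked higher in the shared (energy, ID) priority queue keeps moving and the other selects "stop". The grid-cell representation and the Low-Energy tie-breaking rule are illustrative assumptions.

```python
# Sketch of one-step route conflict resolution (illustrative rule).

def resolve_step(next_cells, energy):
    """next_cells: {rid: intended next grid cell}; energy: {rid: battery %}.
    Returns {rid: 'move' or 'stop'}."""
    actions = {rid: "move" for rid in next_cells}
    claimed = {}
    # Low-Energy law + unique ID gives the same total order on every robot.
    for rid in sorted(next_cells, key=lambda r: (energy[r], r)):
        cell = next_cells[rid]
        if cell in claimed:      # conflict: the cell is already taken
            actions[rid] = "stop"
        else:
            claimed[cell] = rid
    return actions

print(resolve_step({1: (2, 3), 2: (2, 3), 3: (5, 5)},
                   {1: 80.0, 2: 35.0, 3: 60.0}))
# {1: 'stop', 2: 'move', 3: 'move'}
```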

Input: Unsorted priority queue Q
Output: Sorted priority queue Q
if Selection/Formation then
    Create a selection/formation queue Q containing the priority order;
end if
if Routing then
    Use the route plans to perform conflict detection;
    Broadcast and receive the potential conflict lists;
    Compute all the disjoint sets and compare them with the robot's ID to obtain the final conflict queue Q;
end if
while true do
    Select the next priority level from the needs hierarchy;
    Sort Q with that priority;
    Broadcast the queue and receive the signals from the other robots;
    Call function "Selection/Formation/Routing Agreement";
    if the received signal is "conflict" then
        Move to the next lower-level priority;
        Sort the conflicting part of Q with the lower-level priority queue;
        Broadcast the sorted queue Q;
    end if
    if the received signal is "end" then
        Broadcast the "end" feedback;
        break;
    end if
end while
Algorithm 1: Selection/Formation/Routing Negotiation
Input: Sorted priority queue Q
Output: Execute the plan or negotiate again
Initialization;
Receive all robots' priority queues;
Compare all the received queues;
if all the queues are equal then
    Broadcast the signal "end";
    Receive the feedback from the function "Selection/Formation/Routing Negotiation";
    Execute the "Selection/Formation/Routing Plan";
else
    Find the conflicting elements in the queue and broadcast the signal "conflict" with the conflict details;
    Call function "Selection/Formation/Routing Negotiation";
end if
Algorithm 2: Selection/Formation/Routing Agreement

4. Negotiation: Robots compare the plans received from other group members with their own. For the Selection and Formation plans, negotiation is performed until each robot is assigned to exactly one group (in the case of selection) or one position in the formation. Route planning involves creating a unique priority queue that avoids conflicts. Subsequently, each robot reaches an agreement on the priority queue and the corresponding plans. For example, in the selection and formation steps, we use a Low Energy law, in which robots with a lower battery level get higher priority, and, similarly, a High Energy law. We combine them with a Task-based priority queue (each task has a specific priority in the assignment), resulting in the Task + Low Energy and Task + High Energy combinations. We also consider whether or not the priority needs are used to handle conflicts in route planning (conflict case and no-conflict case). We present the algorithmic representation of the negotiation process in Alg. 1.

For instance, in the Formation plan negotiation, each robot uses the distances between its current position and the task’s polygon points on the boundary of the formation shape. Each robot communicates this distance vector (distances to the task polygon points) with the other robots in the same group assigned to this specific task. Each robot then computes a matrix (level-4 team needs - utility cost) representing every robot’s distances to all the polygon points of the specific task. It then uses the unique queue order to select the corresponding points until every group member obtains a goal point for the specific task, as long as the priority queues of Task, Energy, and Safety are satisfied. For example, a low-energy robot has high priority to choose the point closest to it, thereby reducing its energy consumption.
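As a sketch of this assignment, the snippet below lets robots claim polygon points in the agreed queue order, each taking its nearest unclaimed point according to the distance (utility-cost) matrix; the data layout and the greedy nearest-point rule are illustrative assumptions.

```python
# Sketch of formation-point assignment: robots pick polygon points in the
# agreed priority-queue order, each taking the nearest unclaimed point
# (illustrative, not the authors' code).
import math

def assign_points(queue, positions, points):
    """queue: robot IDs in agreed priority order; positions: {rid: (x, y)};
    points: list of polygon points. Returns {rid: point}."""
    free = list(range(len(points)))
    assignment = {}
    for rid in queue:
        px, py = positions[rid]
        best = min(free, key=lambda j: math.hypot(points[j][0] - px,
                                                  points[j][1] - py))
        assignment[rid] = points[best]
        free.remove(best)
    return assignment

points = [(0.0, 2.0), (2.0, 0.0), (0.0, -2.0), (-2.0, 0.0)]
positions = {1: (0.5, 3.0), 2: (3.0, 0.5), 3: (0.0, -4.0), 4: (-3.0, 0.0)}
print(assign_points(queue=[3, 1, 4, 2], positions=positions, points=points))
# {3: (0.0, -2.0), 1: (0.0, 2.0), 4: (-2.0, 0.0), 2: (2.0, 0.0)}
```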

5. Agreement and Execution: If none of the robots' plans conflict after the negotiation phase, a final agreement is reached for each process (Selection/Formation/Routing). The algorithm implementing the agreement process is presented in Alg. 2. After the agreement, each process is executed as and when necessary.

V Evaluation through Simulation Studies

To simulate our framework, we chose the “Common Open Research Emulator (CORE)” network simulator [1], since we are interested in implementing our algorithm in a network-based tool and CORE allows dynamic changes in node/agent mobility and communication. We consider 20 robots in our simulations due to limitations in the CORE framework (for example, a single task assignment with 20 fully connected robots).

We suppose each robot has a different battery level in the initial state, and every moving step costs energy; every communication round and every non-moving (idle) step also cost energy. To simplify the visualization of the utility of the framework, we do not consider any obstacles. We design two scenarios: one simulates static task assignments (all tasks are added at the initial stage) and another simulates dynamic task assignments (a task can be added at any time during the process).
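A minimal sketch of this energy bookkeeping is given below; the cost constants are placeholders only, since the exact values used in the simulations are not reproduced here.

```python
# Sketch of per-step energy bookkeeping in the simulations
# (cost constants are assumed placeholders, not the paper's values).
MOVE_COST, COMM_COST, IDLE_COST = 0.5, 0.2, 0.1  # energy per step/round (assumed)

def step_cost(moved: bool, comm_rounds: int) -> float:
    motion = MOVE_COST if moved else IDLE_COST
    return motion + comm_rounds * COMM_COST

battery = 90.0
battery -= step_cost(moved=True, comm_rounds=3)  # one moving step, 3 negotiation rounds
print(round(battery, 2))  # 88.9
```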

We consider four combinations of priority: High Energy (High), Low Energy (Low), Task Priority + High Energy (T+High), and Task Priority + Low Energy (T+Low). For example, the T+Low priority queue corresponds to a scenario where the most urgent task is addressed first while keeping robots in the field as long as possible. We intend to compare the utility and behaviors of the individual robots and of the whole system under the different priority combinations.

To compare our approach with a state-of-the-art method, we implemented the algorithm in [42], in which the authors propose a method for the collision-aware task assignment problem using collision cones and auction-based bidding to negotiate conflicts. Since our framework is distributed and does not have a central agent to manage the bidding process, we use the algorithm in [42] to provide the reward of each robot for each task location, which is then converted into a utility matrix and fed to the negotiation mechanism in our framework. We therefore term this method CATA (Collision-aware Task ASsignment + Utility Matrix). Here, a robot first calculates each task's utility based on [42] and then chooses the maximum-utility task based on the low-energy priority law; therefore, we compare this method only against our Low Energy priority law. The experiment demonstrations are available in the attached video (also available at https://hero.uga.edu/research/sass).

(a) Comparison of conflict frequency.
(b) Comparison of total energy costs.
(c) Comparison of total travel distance.
Fig. 4: Experiments on static task assignments with 20 robots and 3 tasks.

V-A Static Multi-Task Assignments

We conduct ten simulation trials for each priority case in the static task assignment scenario. In every priority case, we use the same ten sets of initial battery levels, sampled from a Gaussian distribution. Fig. 4(a) and 4(b) show the conflict frequency and the total energy costs of the four priority laws, with and without the conflict negotiation costs, compared against the CATA approach. In this experiment, the best-performing priority combination had noticeably fewer conflicts than the other cases. At the same time, Fig. 4(b) shows that every group spent almost one third of its entire energy on the negotiation and agreement part of solving the conflicts, meaning that more conflicts lead to more negotiation rounds and correspondingly more energy consumed in communication. Fig. 4(c) shows the effect of each priority law on the total system distance, supporting our finding that different needs and priorities at the individual agent level lead to different global performance.

Under the Low Energy priority law, our method had fewer conflicts than the CATA method. We believe this is because the prioritization of basic needs in our approach aims to avoid conflicts already at the Formation planning stage, whereas CATA considers conflicts only at the Route planning stage, which leaves more room for conflicts if the task polygon points are not efficiently assigned at the formation stage itself.

(a) Comparison of conflict frequency.
(b) Comparison of Comm. & moving energy costs.
(c) Comparison of per task Comm. energy costs.
Fig. 5: Experiments on Scalability and Complexity in Static Task Assignments. Here, R15 means 15 Robots and T4 means 4 Tasks for example.

V-B Scalability and Complexity

(a) Comparison of conflict frequency.
(b) Comparison of Comm. energy costs.
(c) Comparison of Low and CATA.
Fig. 6: Experiments on the Impact of Different Priority Laws in Static Task Assignments.

To verify SASS’s scalability and the complexity of the cooperating swarm of robots, we design four team scales with different numbers of tasks. For example, R10 means ten robots in the system, and T3 means that three different tasks are assigned.

From Fig. 5(a), we notice that as the number of robots and the task complexity increase, the number of conflicts in the entire system (conflict frequency) rises rapidly. The proportion of communication energy cost relative to the moving energy cost also increases with scale (see Fig. 5(b)). This points to the fact that the whole system spends more energy and time in negotiation and coordination to converge to an agreement. Considering the average communication energy cost per task (see Fig. 5(c)), if the task complexity is higher in certain scenarios or the environment is more unstructured and unpredictable, an individual agent will spend more energy and time in communication to fulfill the tasks.

From another perspective, the fully distributed communication graph is inefficient for coordinating robots at large scale, especially in robot swarms. Designing a proper communication architecture that adapts to the scale of the group and the complexity of the scenario is therefore an important and challenging problem, which we will consider in our future work.

V-C Impact of Different Priority Laws at Different Levels

If an individual agent has different needs or motivations, it may exhibit different behaviors, which leads the entire system to display different performance. To verify this hypothesis, we use 20 robots under the four different priority laws, comparing them with the distance-matrix-based negotiation and with CATA as discussed above. We conduct ten trials with different initial battery levels sampled from a Gaussian distribution with a mean of 90% and a standard deviation of 30%; within each trial, all priority laws use the same initial battery levels.

Fig. 6(a) shows that implementing different priority laws causes the whole system to exhibit different energy use. As we can see, the system performs better under two of the priority laws than under the other two, because the former lead to fewer conflicts than the latter. We can confirm these observations through Fig. 6(b) and 6(c).

We also shuffle the task priorities (e.g., a superscript 123 denotes the priority order T1, T2, T3) to simulate the framework considering only the first three levels (Safety + Basic + Task) in the needs hierarchy; the results are shown in Fig. 7(a) and Fig. 7(b). We can see that our approach always performs better than CATA under the same conditions in terms of conflict resolution and system energy.

In practical applications, an intelligent agent might dynamically change its needs or motivation according to the situation, especially in adversarial or unpredictable environments. This may lead to chaos in the system, resulting in higher energy costs and loss of system utility. In MRS design, letting individual agents select suitable laws for cooperation while guaranteeing optimal system utility is therefore an interesting and challenging avenue for future work.

V-D Dynamic Multi-Task Assignments

(a) Comparison of conflict frequency
(b) Comparison of total energy cost.
(c) Comparison of total battery level.
Fig. 7: Experiments on the impact of different priority laws and on dynamic task assignments with 20 robots and 3 tasks. (Left and Center) Comparison of conflict frequency and energy cost based on the different task priorities in static task assignments; e.g., a superscript 123 denotes the priority order T1, T2, T3. (Right) Comparison of the total battery level of the system, measured in points combining all robots' energy percentage levels, in dynamic task assignments.

In the dynamic multi-task scenario, we design three kinds of dynamic task arrivals. In the first, three tasks are added sequentially, each after the previous task is completed (1+1+1). In the other two cases, two tasks appear together either at the start (2+1) or at the end (1+2) of the process. We also combine each case with the conflict and no-conflict settings.

Tasks Style | Priority | Conflict | Max | Min | Mean
1+1+1 | High | ✓ | 69.42 | 55.52 | 64.18
1+1+1 | Low | ✓ | 63.09 | 45.70 | 56.00
1+1+1 | T+High | ✓ | 70.04 | 56.75 | 63.24
1+1+1 | T+Low | ✓ | 63.25 | 49.18 | 56.62
1+2 | T+Low | ✓ | 45.97 | 31.78 | 39.60
2+1 | T+Low | ✓ | 44.69 | 31.66 | 39.07
1+1+1 | T+Low | – | 49.88 | 29.48 | 41.20
1+2 | T+Low | – | 32.70 | 23.14 | 28.50
2+1 | T+Low | – | 38.53 | 24.72 | 30.36
TABLE II: Energy level comparison (remaining battery level) in dynamic task assignments. "Conflict" indicates whether conflict negotiation is considered (✓) or not (–).

In the first experiment, we use the four different priority laws with 20 robots in the 1+1+1 scenario (see Table II). Here, the initial energy levels and positions for each priority law were set to be the same. We can observe that the High and T+High combinations retained the highest energy levels, i.e., the best system utility in this sense, as with the static assignment cases.

In the second experiment, we evaluate the impact of the conflict negotiation process by comparing the energy costs of the three dynamic task styles with and without considering conflicts. In Fig. 7(c), we notice that the negotiation cost occupies a large part of the entire system cost (hence the higher battery consumption when collisions are considered). We can also observe that the difference between each combination's negotiation energy cost reflects how unstructured the environment is: the difference increases in more chaotic and dynamic settings.

VI Conclusion

Our work introduces a novel framework for a Self-Adaptive Swarm System (SASS) combining perception, communication, planning, and execution in MRS, which not only considers an individual robot's needs and action plans but also emphasizes the complex relationships created through communication between the robots. We propose the Robot's Need Hierarchy to model the robot's motivation and use a priority queue in the Negotiation-Agreement Mechanism to avoid plan conflicts effectively. We also provide several Atomic Operations to decompose complex tasks into a series of simple sub-tasks.

SASS leaves room for many future improvements.

For instance, we plan to optimize the communication architecture and to add decision and learning levels to the individual robot's hierarchical needs, completing our robot's needs model; this would allow individual robots to upgrade themselves based on learned experiences and lead to self-evolution of the whole system.

References

  • [1] J. Ahrenholz (2010) Comparison of CORE network emulation platforms. In 2010 - MILCOM 2010 Military Communications Conference, pp. 166–171.
  • [2] S. Aknine, S. Pinson, and M. F. Shakun (2004) An extended multi-agent negotiation protocol. Autonomous Agents and Multi-Agent Systems 8 (1), pp. 5–45.
  • [3] N. Ayanian and V. Kumar (2010) Abstractions and controllers for groups of robots in environments with obstacles. In 2010 IEEE International Conference on Robotics and Automation, pp. 3537–3542.
  • [4] T. Balch and R. C. Arkin (1998) Behavior-based formation control for multirobot teams. IEEE Transactions on Robotics and Automation 14 (6), pp. 926–939.
  • [5] G. Beni and J. Wang (1993) Swarm intelligence in cellular robotic systems. In Robots and Biological Systems: Towards a New Bionics?, pp. 703–712.
  • [6] Y. Cao, W. Yu, W. Ren, and G. Chen (2012) An overview of recent progress in the study of distributed multi-agent coordination. IEEE Transactions on Industrial Informatics 9 (1), pp. 427–438.
  • [7] S. Chan (2001) Complex adaptive systems. In ESD.83 Research Seminar in Engineering Systems, Vol. 31, pp. 1–9.
  • [8] M. Colledanchise and P. Ögren (2018) Behavior trees in robotics and AI: an introduction. CRC Press.
  • [9] V. R. Desaraju and J. P. How (2012) Decentralized path planning for multi-agent teams with complex constraints. Autonomous Robots 32 (4), pp. 385–403.
  • [10] M. B. Dias and A. Stentz (2000) A free market architecture for distributed control of a multirobot system.
  • [11] A. Farinelli, L. Iocchi, D. Nardi, and V. A. Ziparo (2006) Assignment of dynamically perceived tasks by token passing in multirobot systems. Proceedings of the IEEE 94 (7), pp. 1271–1288.
  • [12] J. T. Feddema, C. Lewis, and D. A. Schoenwald (2002) Decentralized control of cooperative robotic vehicles: theory and application. IEEE Transactions on Robotics and Automation 18 (5), pp. 852–864.
  • [13] R. Fierro, A. Das, J. Spletzer, J. Esposito, V. Kumar, J. P. Ostrowski, G. Pappas, C. J. Taylor, Y. Hur, R. Alur, et al. (2002) A framework and architecture for multi-robot coordination. The International Journal of Robotics Research 21 (10-11), pp. 977–995.
  • [14] V. Gazi and K. M. Passino (2003) Stability analysis of swarms. IEEE Transactions on Automatic Control 48 (4), pp. 692–697.
  • [15] D. Goldberg, V. Cicirello, M. B. Dias, R. Simmons, S. Smith, T. Smith, and A. Stentz (2002) A distributed layered architecture for mobile robot coordination: application to space exploration. In The 3rd International NASA Workshop on Planning and Scheduling for Space.
  • [16] H. Hamann and H. Wörn (2008) A framework of space–time continuous models for algorithm design in swarm robotics. Swarm Intelligence 2 (2-4), pp. 209–239.
  • [17] H. Hamann (2018) Swarm robotics: a formal approach. Springer.
  • [18] D. Jung and A. Zelinsky (1999) An architecture for distributed cooperative planning in a behaviour-based multi-robot system. Robotics and Autonomous Systems 26 (2-3), pp. 149–174.
  • [19] G. A. Kaminka and I. Frenkel (2005) Flexible teamwork in behavior-based robots. In Proceedings of the National Conference on Artificial Intelligence, Vol. 20, pp. 108.
  • [20] S. Kloder and S. Hutchinson (2006) Path planning for permutation-invariant multirobot formations. IEEE Transactions on Robotics 22 (4), pp. 650–665.
  • [21] S. A. Levin (1998) Ecosystems and the biosphere as complex adaptive systems. Ecosystems 1 (5), pp. 431–436.
  • [22] W. Luo, N. Chakraborty, and K. Sycara (2016) Distributed dynamic priority assignment and motion planning for multiple mobile robots with kinodynamic constraints. In 2016 American Control Conference (ACC), pp. 148–154.
  • [23] H. Ma, W. Hönig, L. Cohen, T. Uras, H. Xu, T. S. Kumar, N. Ayanian, and S. Koenig (2017) Overview: a hierarchical framework for plan generation and execution in multirobot systems. IEEE Intelligent Systems 32 (6), pp. 6–12.
  • [24] A. Martinoli (1999) Swarm intelligence in autonomous collective robotics: from tools to the analysis and synthesis of distributed control strategies. Ph.D. Thesis.
  • [25] A. H. Maslow (1943) A theory of human motivation. Psychological Review 50 (4), pp. 370.
  • [26] M. J. Mataric (1993) Designing emergent behaviors: from local interactions to collective intelligence. In Proceedings of the Second International Conference on Simulation of Adaptive Behavior, pp. 432–441.
  • [27] R. M. Murray (2007) Recent research in cooperative control of multivehicle systems. Journal of Dynamic Systems, Measurement, and Control 129 (5), pp. 571–583.
  • [28] R. Olfati-Saber (2006) Flocking for multi-agent dynamic systems: algorithms and theory. IEEE Transactions on Automatic Control 51 (3), pp. 401–420.
  • [29] M. Otte, M. J. Kuhlman, and D. Sofge (2019) Auctions for multi-robot task allocation in communication limited environments. Autonomous Robots, pp. 1–38.
  • [30] L. E. Parker and F. Tang (2006) Building multirobot coalitions through automated task solution synthesis. Proceedings of the IEEE 94 (7), pp. 1289–1305.
  • [31] L. E. Parker (1998) ALLIANCE: an architecture for fault tolerant multirobot cooperation. IEEE Transactions on Robotics and Automation 14 (2), pp. 220–240.
  • [32] L. E. Parker (2008) Multiple mobile robot systems. Springer Handbook of Robotics, pp. 921–941.
  • [33] C. W. Reynolds (1987) Flocks, herds and schools: a distributed behavioral model. Vol. 21, ACM.
  • [34] Y. Rizk, M. Awad, and E. W. Tunstel (2019) Cooperative heterogeneous multi-robot systems: a survey. ACM Computing Surveys (CSUR) 52 (2), pp. 1–31.
  • [35] R. G. Smith and R. Davis (1981) Frameworks for cooperation in distributed problem solving. IEEE Transactions on Systems, Man, and Cybernetics 11 (1), pp. 61–70.
  • [36] O. Soysal and E. Şahin (2006) A macroscopic model for self-organized aggregation in swarm robotic systems. In International Workshop on Swarm Robotics, pp. 27–42.
  • [37] M. Tambe, D. V. Pynadath, N. Chauvat, A. Das, and G. A. Kaminka (2000) Adaptive agent integration architectures for heterogeneous team members. In Proceedings Fourth International Conference on MultiAgent Systems, pp. 301–308.
  • [38] M. Tambe (1997) Towards flexible teamwork. Journal of Artificial Intelligence Research 7, pp. 83–124.
  • [39] G. Wagner and H. Choset (2015) Subdimensional expansion for multirobot path planning. Artificial Intelligence 219, pp. 1–24.
  • [40] D. Weyns, N. Boucké, T. Holvoet, and B. Demarsin (2007) DynCNET: a protocol for dynamic task assignment in multiagent systems. In First International Conference on Self-Adaptive and Self-Organizing Systems (SASO 2007), pp. 281–284.
  • [41] A. F. Winfield, J. Sa, M. Fernández-Gago, C. Dixon, and M. Fisher (2005) On formal specification of emergent behaviours in swarm robotic systems. International Journal of Advanced Robotic Systems 2 (4), pp. 39.
  • [42] F. Wu, V. S. Varadharajan, and G. Beltrame (2019) Collision-aware task assignment for multi-robot systems. In 2019 International Symposium on Multi-Robot and Multi-Agent Systems (MRS), pp. 30–36.
  • [43] G. Yang, J. Bellingham, P. E. Dupont, P. Fischer, L. Floridi, R. Full, N. Jacobstein, V. Kumar, M. McNutt, R. Merrifield, et al. (2018) The grand challenges of Science Robotics. Science Robotics 3 (14).
  • [44] Q. Yang, Z. Luo, W. Song, and R. Parasuraman (2019) Self-reactive planning of multi-robots with dynamic task assignments. In IEEE International Symposium on Multi-Robot and Multi-Agent Systems (MRS) 2019. Extended Abstract.