Synchronous Robotic Framework

06/08/2020 · Nagarathna Hema Balaji, et al. · California State University Long Beach

We present a synchronous robotic testbed called SyROF that allows fast implementation of robotic swarms. Our main goal is to lower the entry barriers to cooperative-robot systems for undergraduate and graduate students. The testbed provides a high-level programming environment that allows the implementation of Timed Input/Output Automata (TIOA). SyROF offers the following unique characteristics: 1) a transparent mechanism to synchronize robot maneuvers, 2) a membership service with a failure detector, and 3) a transparent service to provide common knowledge in every round. These characteristics are fundamental to simplifying the implementation of robotic swarms. The software is organized in layers: the lowest layer consists of a real-time publish-subscribe system that allows efficient communication between tasks; the next layer is an implementation of a Kalman filter to estimate the position, orientation, and speed of the robot; above it sits a synchronizer that synchronously executes the robot maneuvers, provides common knowledge to all active participants, and handles failures; the top layer consists of the programming environment.

1 Introduction

In natural disasters, the response time of the search-and-rescue team is a critical factor in minimizing casualties. A swarm of robots, e.g., drones, can be used to concurrently sweep the region, minimizing the response time without risk to the search-and-rescue team [12]. Some researchers have followed a centralized approach where the robots are supervised and operated from a central control [5], [2]. However, these solutions are not scalable and are prone to failure. More robust solutions focus on fully autonomous systems, where the robots self-coordinate their actions without the help of preexisting infrastructure [1]. However, the implementation of these systems requires an interdisciplinary team with expertise in robotics, distributed systems, wireless sensor networks, etc., making them accessible only to experienced research teams. In this paper we present a testbed, called Synchronous Robotic Framework (SyROF), that allows undergraduate computer science and computer engineering students at California State University Long Beach to rapidly implement and demonstrate robotic swarms. In essence, SyROF provides the look-compute-move model proposed by Suzuki and Yamashita in their seminal paper [14]. In our framework, robots broadcast their state infinitely often. Every participant robot then obtains the state of every other active robot through a membership service and uses it to compute its next movement. In the look-compute-move model, we can distinguish two main variants: 1) the fully synchronous, where all robots start the cycles at the same time [6], and 2) the asynchronous, where no assumption is made about the cycles. We choose to implement the fully synchronous variant to reduce the complexity of implementing cooperative algorithms; the power of this model has been shown in [3], [18], [17]. However, SyROF can also be used to implement the asynchronous variant. We implement a synchronizer to execute the synchronized cycle. In every cycle, each robot has common knowledge, i.e., every robot knows, and knows that every other robot knows, and so on, the state of the participant robots. Thus, the output of any deterministic function is identical in all robots. In contrast, their outputs can be conflicting if robots do not attain common knowledge. We consider wireless networks where messages can be dropped at any time, which makes it impossible to attain common knowledge deterministically. To overcome this impossibility, we adopt the stream consensus protocol proposed by Morales-Ponce et al. [8], which guarantees common knowledge at almost every time, with periods of disagreement of bounded length.

In this paper we describe the design of the SyROF testbed, which consists of multiple assorted mobile robots including omnidirectional robots, drones, and rovers. Each robot is equipped with a microprocessor, a gyroscope/accelerometer sensor, a flow sensor, a GPS-like sensor, and a Bluetooth chip. We design and implement a real-time publish-subscribe system as the base system. On top of it, we design and implement a Kalman filter that reduces the noise of the sensory data to compute the state of the robot. We then implement a synchronizer to synchronize the tasks of the robots. On top of the synchronizer, we implement a programming interface, called the virtual machine, that allows implementing Timed Input/Output Automata. The main software architecture is shown in Figure 1. Our design allows adding external sensors, such as small computers, for more complex applications.

Figure 1: Software architecture

The paper is organized as follows: in Section 2, we present the architecture of the robots. The formal requirements of the system are then presented in Section 3, followed by the related work in Section 4. We present the design for the real-time publish-subscribe system in Section 5, then we present the design of the Kalman filter in Section 6 and the membership service and synchronizer in Section  7. The programming interface is presented in Section 8. Finally, we conclude the paper in Section 9.

2 Robot Architecture

We design the main components of the system to be compatible with different robotic models. Currently, we have built the testbed with omnidirectional robots that are able to move in any direction without rotations. However, the components and software were designed to support different robot models, such as rovers that can rotate and move forward and backward, non-holonomic mobile robots, drones, etc.

The main components of SyROF are implemented on a Cortex-M4 with a real-time operating system. In particular, we use a Teensy 3.2 and FreeRTOS for the omnidirectional robots, which allows executing tasks in bounded time. SyROF uses different sensors to compute the robot’s state: an MPU-9250 (Inertial Measurement Unit, or IMU) to obtain the robot’s orientation, a PMW3901 (optical flow sensor) to compute the distance traversed by the robot, an NRF51822 (Bluetooth module) to communicate, and a DWM1000 (an ultra-wideband sensor) to compute the robot’s absolute position. SyROF also controls all the actuators. To simplify the connections, we design a printed circuit board (PCB), depicted in Figure 2.

Figure 2: Autopilot PCB

Many robotic applications can be enhanced with the use of computer vision and machine learning algorithms. However, these algorithms require more computational power. Hence, we design the system to be highly configurable so that it can include different sensors and actuators, including external small computers. Throughout the paper we assume that robots are within communication range and that the clock drift of each processor is bounded by a constant.

3 Requirements

In this section, we list the main functional requirements of SyROF.

  • Provide a membership service. In robotic swarms, every robot needs to be aware of every other robot that is participating in the system. This requirement adds dynamism to the system by allowing robots to join and leave at any time.

  • Synchronize robotic primitives. The time to complete a maneuver (or primitive) is non-deterministic due to inaccuracies in the actuators and sensors, battery levels, etc. Robots should synchronize primitives to provide the look-compute-move model.

  • Provide common knowledge infinitely often, even with communication faults. Common knowledge is attained when every robot knows the state of all participant robots. If all robots communicate successfully, they must switch to cooperative mode and provide their state including position, speed, intentions, etc., to every other robot. If any robot does not receive the message of at least one robot, all robots must switch to autonomous mode.

  • Provide a high-level programming environment to implement cooperative robotic systems using Timed Input/Output Automata (TIOA). The programming environment should provide instructions to control the actuators, read sensory data, and perform complex queries.

4 Related Work

Testbeds for lowering the entry barrier to multi-robot systems have been proposed previously; see for example [9], [13], where the authors present the GRITSBot and Kilobot testbeds, respectively, using inexpensive robots. In GRITSBot, a central server is responsible for executing the code of each robot. Moreover, it requires a precise tracking system to determine the position of each robot. Thus, users can design and test synchronous robotic systems; however, any failure in the server affects the whole system. Unlike GRITSBot, in SyROF all the robots have the same role and, therefore, there is no single point of failure. In the Kilobot testbed, each robot asynchronously runs its own code; however, those robots do not have enough computational power to estimate their positions. Unlike Kilobots, in our proposed testbed robots can estimate their own position and synchronize the primitives.

Another closely related framework is ROS (Robot Operating System) [11]. A system built using ROS consists of several processes connected in a peer-to-peer topology. ROS has been very successful, and many packages for a great variety of applications are available, including SLAM (Simultaneous Localization and Mapping). However, ROS provides support for neither synchronizing robotic primitives nor guaranteeing common knowledge. Our framework provides a new high-level, platform-independent programming language that offers users synchronous execution of robotic primitives and common knowledge in real time. Moreover, SyROF can be integrated with ROS to extend its functionality without compromising real-time execution.

Our synchronization mechanism is based on consensus among all robots. Many papers have based their systems on consensus, where processes need to agree on specific tasks [4, 16, 15]. Another approach is the use of tuple spaces: processes insert tuples in a common space that can be read by every process [10], but there is no guarantee that a process is able to read a given tuple.

5 Real-Time Publish-Subscribe System

In this section we present the design of the real-time publish-subscribe system that simplifies the implementation of robotic systems. The publish-subscribe system can be logically separated into three parts: the Broker, Publishers, and Subscribers, as shown in Figure 3. Each process can be a publisher, a subscriber, or both.

  • The Broker. The main functionality of the Broker is to disseminate data from publishers to the subscribers interested in the topic. Thus, it plays a key role in the expected functionality of the system. Publishers register with the Broker using identifiers, which are assumed to be unique, and subscribers subscribe to a publisher through the Broker using these identifiers.

    Figure 3: Publish Subscribe System
  • Publishers. They are dedicated to producing data without knowing which processes consume it. For example, publishers can read data from sensors, execute an algorithm to filter data, etc. When a publisher is created, it registers as a publisher using a unique identifier. When data is ready, it notifies the Broker.

  • Subscribers. They are dedicated to consuming data. When a subscriber is created, it subscribes to the topics it is interested in using their unique identifiers; it can subscribe to more than one topic. When a publisher publishes data, the Broker notifies all the subscribers that have declared interest in it. Subscribers can also play the role of publishers.

In this paper we report the implementation in FreeRTOS; however, the design can also be implemented on other real-time operating systems. FreeRTOS is based on co-routines to emulate a multiprocessor system: essentially, a co-routine can have multiple entry points and maintains its state between activations. In FreeRTOS, each process executes a task that does not return during the execution of the program. To switch to other tasks, an operating system method such as vTaskDelayUntil or vTaskDelay must be invoked. Since we are interested in providing a system for time-critical applications, each task must complete a step of computation in bounded time.
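As a minimal illustration of this execution model (the task name, period, and body are assumptions for this sketch), a periodic FreeRTOS task that performs one bounded step of computation per period and then yields can be written as follows:

#include "FreeRTOS.h"
#include "task.h"

#define STEP_PERIOD_MS 10   /* hypothetical period of one computation step */

/* Performs one bounded step of computation per period, then yields. */
static void periodicTask(void *args)
{
    TickType_t lastWakeTime = xTaskGetTickCount();
    for (;;) {
        /* ... one bounded step of computation goes here ... */
        vTaskDelayUntil(&lastWakeTime, pdMS_TO_TICKS(STEP_PERIOD_MS));
    }
}

/* Created, e.g., with:
 * xTaskCreate(periodicTask, "periodic", configMINIMAL_STACK_SIZE, NULL,
 *             tskIDLE_PRIORITY + 1, NULL);
 */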

5.1 Broker Interface

The real-time publish-subscribe system interface is presented in Interface 1 and consists of three methods:

  • registerPublisher. Registers a publisher in the real-time publish-subscribe system using a unique id. First, the publisher task instantiates a FreeRTOS queue (using xQueueCreate) and passes the queue handler to the system, as well as the size of the tuple. The Broker inserts these values into a list of publishers. The function returns the handler (index) of the publisher, which is later used to publish new data.

  • subscribe. Registers a subscriber to a given publisher. Before calling subscribe, the subscriber creates a task and a queue that stores the tuples produced by the publisher; in FreeRTOS, xTaskCreate and xQueueCreate are used to create the task and the queue, respectively. These handlers are passed as parameters to the function, and the Broker inserts the two values into the publisher's list of subscribers.

  • publish. Publishers use this function to publish new data. To avoid blocking, it notifies the main Broker task. When the Broker receives the notification, it reads the data from the publisher using the queue handler and then copies it to all the subscribers using their queue handlers.

1:  uint8_t  registerPublisher(uint8_t  id, uint8_t  size, QueueHandle_t queueHandler)
2:  bool subscribe(uint8_t  id, uint32_t taskHandler, QueueHandle_t queueHandler)
3:  uint8_t  publish(uint8_t  index)
Interface 1 Publish-Subscribe Interface
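As a usage sketch of Interface 1 (the identifier IMU_ID, the type ImuSample, and readImu() are hypothetical names used only for illustration), a sensor task can register as a publisher and publish each new sample, while a consumer subscribes to the same id:

#define IMU_ID 1   /* hypothetical topic identifier */

typedef struct { float ax, ay, az, gx, gy, gz; } ImuSample;  /* hypothetical payload */

static QueueHandle_t imuQueue;
static uint8_t imuPublisher;

static void imuTask(void *args)
{
    ImuSample sample;
    imuQueue = xQueueCreate(1, sizeof(ImuSample));           /* one-slot queue  */
    imuPublisher = registerPublisher(IMU_ID, sizeof(ImuSample), imuQueue);
    for (;;) {
        readImu(&sample);                    /* hypothetical sensor read         */
        xQueueOverwrite(imuQueue, &sample);  /* keep only the latest sample      */
        publish(imuPublisher);               /* Broker copies it to subscribers  */
        vTaskDelay(pdMS_TO_TICKS(10));
    }
}

/* A subscriber creates its own task and one-slot queue and then calls
 * subscribe(IMU_ID, taskHandle, queueHandle) to receive every published sample. */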

6 Kalman Filter

In this section we present the implementation of the Kalman filter [7]. The Kalman filter estimates the state of the system in real time from noisy sensory information, assuming that the noise follows a Gaussian distribution. The Kalman filter is critical for the correct functionality of our testbed. Sensors suffer from errors occurring for many reasons; for instance, GPS sensors might suffer deviations while the device does not have a direct line of sight to the satellites. Furthermore, odometric values must be accurate to determine the actual position of the vehicle.

6.1 Kalman Filter Design

Let $\Delta t$ be the time step at which the Kalman filter runs and let $x_t = (x, y, v, \theta)$ be the robot's state at time $t$, where $x$ and $y$ are the absolute position and $v$ and $\theta$ are the speed and the orientation of the robot. Since the current state only depends on the previous state, we omit earlier states. Let $P_t$ be the covariance matrix of $x_t$ at time $t$; it represents the uncertainty in the current state. Let $u_t$ be the motor thrust and $\phi_t$ be the steering angle. Based on the dynamic model, we can compute the new state, which can be written compactly as $\hat{x}_t = f(x_{t-1}, u_t, \phi_t)$. Let $F_t$ be the Jacobian matrix of $f$ and $Q$ be the covariance matrix representing the process noise. The estimate of the covariance matrix at time $t$ is defined as $\hat{P}_t = F_t P_{t-1} F_t^\top + Q$.

Let $z_t$ be the sensor readings at time $t$ and let $R$ be the covariance matrix measuring the sensor noise; the measurements $z_t$ are deduced from the raw sensor outputs as described in the implementation below. Let $H$ be the system state (observation) matrix. We can summarize the Kalman filter as:

Model forecast

$\hat{x}_t = f(x_{t-1}, u_t, \phi_t), \qquad \hat{P}_t = F_t P_{t-1} F_t^\top + Q$  (1)

Step correction

$K_t = \hat{P}_t H^\top (H \hat{P}_t H^\top + R)^{-1}, \qquad x_t = \hat{x}_t + K_t (z_t - H \hat{x}_t), \qquad P_t = (I - K_t H) \hat{P}_t$  (2)
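As an illustration of the forecast step, the sketch below uses a concrete dynamic model (a unicycle-like model where the thrust u acts on the speed and the steering angle phi acts on the heading); this model and the type names are assumptions made only for this sketch.

#include <math.h>

typedef struct { float x, y, v, theta; } State;   /* position, speed, orientation */

/* One forecast step x_t = f(x_{t-1}, u_t, phi_t) over a time step dt. */
static State predict(State s, float u, float phi, float dt)
{
    State n;
    n.x     = s.x + s.v * cosf(s.theta) * dt;   /* advance position along heading */
    n.y     = s.y + s.v * sinf(s.theta) * dt;
    n.v     = s.v + u * dt;                     /* thrust changes the speed       */
    n.theta = s.theta + phi * dt;               /* steering changes the heading   */
    return n;
}

/* The covariance forecast of equation (1) and the correction step of equation (2)
 * are then applied, with F the Jacobian of predict with respect to the state. */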
[Protocol listing abridged] Signature: external actions for broadcasting and receiving (Id, mbrInP, OM), (Id, mbrInW, OM), (Id, mbrInV, OM), and (Id, information, OM, state). State: the lists of the members in the Progress, Wait, and Vote states, the operation mode OM, the state of the nodes, and a list that keeps track of the message losses of each process. Actions: external actions that update the membership lists and the operation mode on reception, and internal actions that perform the transitions between the Progress, Wait, and Vote states.
TIOA 2 Synchronization protocol

6.2 Implementation

We implement the Kalman filter described above using the real-time publish-subscribe system. We create a dedicated task for each sensor: the IMU task reads the gyroscope/accelerometer data, the optical flow task reads the offset between two consecutive images, and the ultra-wideband task reads the ranging between the sensor and the fixed stations. We register these tasks as publishers, and they publish the data at their respective refresh rates. The core of the Kalman filter is implemented using two tasks:

  1. The state task. The state task subscribes to the IMU, optical flow, and ultra-wideband publishers. When a publisher publishes new data, the state task updates Z and sets the uncertainties of the corresponding sensor in R accordingly.

    1. MPU-9250: The MPU returns the linear acceleration normalized to the gravitational acceleration; to convert to m/s², we multiply by g = 9.81 m/s². The MPU also returns the angular speed in degrees per second, so we simply convert it to radians per second.

    2. Flow Breakout (optical flow): It returns the difference, in pixels, along the x and y axes between two consecutive images.

      Figure 4: Flow Breakout data with orientation.

      As depicted in Figure 4, the Flow Breakout sensor measures the lateral velocity of the sensor in both directions, that is, vx and vy. The effective viewing angle of the sensor is assumed to be 25 degrees. Since there may be a slight orientation offset of the sensor's axes, we add an angle to model this orientation. Let h be the height of the sensor above the ground. Observe that vx and vy can be calculated from the pixel rates since we already know the height of the sensor: the ground patch seen by the sensor has a width determined by h and the viewing angle. The Flow Breakout sensor measures the lateral velocity in pixels per second, and the image frame has the same number of pixels along the x and y axes; therefore, the normalized speed is obtained by scaling the pixel rate by the patch width divided by the number of pixels (see the sketch after this list).

    3. DWM1000 (Ultra-wide band): The sensor returns the distance to a fixed number of anchors. An anchor is a station at a well known position. We compute the position using simple trigonometry.

      We consider four anchors with the layout depicted in Figure 5, where the anchors are placed at known coordinates. Let p be the robot position, and consider the plane determined by p and any two anchors.

      Consider any two different anchors a_i and a_j, the line segment between them, and the perpendicular segment from p to that line. Let d_i and d_j be the distances to a_i and a_j returned by the DWM1000, let L be the distance between the two anchors, and let b be the distance from a_i to the foot of the perpendicular. From the Pythagorean theorem, the squared distance from p to the segment satisfies d_i² − b² = d_j² − (L − b)², and therefore b = (d_i² − d_j² + L²)/(2L).

      Figure 5: Upper: anchor layout; Lower: the plane determined by the robot and two anchors.

      Knowing b for a pair of anchors fixes the perpendicular line, in the plane of the layout, on which the robot lies; see Figure 5. Repeating the computation for the other anchor pairs gives the remaining perpendicular lines, and the robot position is computed as the intersection of these perpendicular lines.

  2. The Kalman task executes at a fixed rate. It first updates the state using equations (1) and (2) and then sets high uncertainties in R for the next calculations. Recall that in the Kalman filter the weight of each sensor's information depends on its uncertainty: once a new sensory reading is completed, the uncertainty matrix is adjusted. Observe that if one step of the Kalman filter executes without new sensory data, the state is determined by the model alone. This approach allows the Kalman filter to handle sensors that produce readings at different rates and reduces the maintenance time.
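As a rough sketch of the flow-sensor conversion described above (the viewing angle, pixel count, constant names, and the function itself are assumptions used only for illustration):

#include <math.h>

#define FLOW_PI       3.14159265f
#define FLOW_FOV_RAD  (25.0f * FLOW_PI / 180.0f)  /* assumed effective viewing angle */
#define FLOW_NPIXELS  30.0f                       /* assumed pixels per image axis   */

/* Converts an optical-flow rate (pixels per second) into a lateral speed in m/s,
 * given the height h of the sensor above the ground. */
static float flowToSpeed(float pixelsPerSecond, float h)
{
    float groundWidth = 2.0f * h * tanf(FLOW_FOV_RAD / 2.0f);  /* width of the ground
                                                                  patch seen at height h */
    return pixelsPerSecond * groundWidth / FLOW_NPIXELS;       /* meters per second      */
}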

We show a sketch of the code that receives the data used to populate the matrices Z and R.

void stateTask(void* args)
{
  while (true) {
    if (xTaskNotifyWait(0xffffffff, 0xffffffff, &publisherId,
                        portMAX_DELAY) == pdTRUE) {
      switch (publisherId) {
      case MPU_DATA:
        xQueueReceive(mpuQueue, &gyroAcc, (TickType_t)0);    // gyroAcc contains the IMU data
        break;
      case FLOW_DATA:
        xQueueReceive(flowQueue, &flowData, (TickType_t)0);  // flowData contains the flow sensor data
        break;
      case DWM_DATA:
        xQueueReceive(dwmQueue, &dwmData, (TickType_t)0);    // dwmData contains the DWM1000 data
        break;
      }
    }
  }
}

The Kalman filter task is executed at a fixed frequency (FREQUENCY_KF in the code below). It executes one step of the Kalman filter, updates the variables for the next step, and publishes the new state. After publishing the data, the matrices Z and R are reset to high uncertainties.

void kalmanTask(void* args)
{
  while (true) {
    vTaskDelayUntil(&lastWakeTime,
                    FREQUENCY_KF / portTICK_RATE_MS);
    if (ekf_step(&ekf)) {                    // executes one step of the Kalman filter
      xQueueOverwrite(stateQueue, &state);   // insert the new state in the queue
      publish(stateId);                      // publish the new state
    }
    // reset the matrices Z and R to high uncertainties
  }
}

7 Membership and Synchronizer Service

In this section, we describe the protocol that allows the synchronization of Timed Input/Output Automata. We present the algorithm as a Timed Input/Output Automaton in Protocol 2 (TIOA 2). Consider a set of participating robots. The protocol relies on three states: Progress, where robots are performing a maneuver; Wait, where robots have completed the maneuver; and Vote, where robots are voting to start the next instruction. Each robot maintains a set of boolean flags, initially false, that indicate which robots have completed their maneuvers, and a counter of the number of rounds it has spent voting.

For simplicity of presentation, we assume that every robot is aware of the rest of the robots; later we explain how to remove this assumption. Every robot broadcasts its id, the state of its automaton, and the number of rounds it has been in the Vote state. While robots are in the Progress state, they are performing a maneuver. When a robot completes its maneuver, it changes its state to Wait and waits for the other robots to complete their maneuvers. When robot i receives the Wait state from robot j, it sets the corresponding flag. Robot i changes its state from Wait to Vote when it has received Wait from every robot, or when it receives the Vote state from some robot, in which case it adopts that robot's voting counter. Robot i increases its voting counter every round and, when the counter reaches the voting threshold, it changes to Progress, resets its flags and counter, and starts a new maneuver.
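A minimal sketch of the per-round transitions just described; the helper functions, the threshold, and the enum names are assumptions used only for illustration (the protocol itself is specified in TIOA 2):

#include <stdbool.h>

typedef enum { PROGRESS, WAIT, VOTE } SyncState;

/* Placeholder hooks; their names and semantics are assumptions for this sketch. */
extern bool maneuverCompleted(void);        /* current maneuver finished            */
extern bool allInWait(void);                /* Wait received from every participant */
extern bool receivedVote(void);             /* Vote received from some participant  */
extern int  receivedVoteRounds(void);       /* voting counter carried by that Vote  */
extern void startNextManeuver(void);
extern void broadcastState(SyncState s, int voteRounds);
extern const int voteThreshold;             /* number of voting rounds              */

static SyncState state = PROGRESS;
static int voteRounds = 0;

static void syncRound(void)
{
    switch (state) {
    case PROGRESS:
        if (maneuverCompleted())
            state = WAIT;
        break;
    case WAIT:
        if (receivedVote()) {                 /* adopt the counter of the voter */
            state = VOTE;
            voteRounds = receivedVoteRounds();
        } else if (allInWait()) {
            state = VOTE;
            voteRounds = 0;
        }
        break;
    case VOTE:
        if (++voteRounds >= voteThreshold) {  /* voting period elapsed          */
            state = PROGRESS;
            voteRounds = 0;
            startNextManeuver();
        }
        break;
    }
    broadcastState(state, voteRounds);        /* broadcast id, state, counter   */
}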

In the following theorem we show that if the number of consecutive rounds with message loss in the system is less than the voting threshold, all robots start a new maneuver at the same time. We assume that every maneuver takes a bounded number of rounds, and at least as many rounds as the voting threshold.

Theorem 7.1

If the number of consecutive rounds with message loss in the system is less than the voting threshold, Algorithm 2 satisfies the following properties:
1) Safety: If a robot is in the Wait state, then eventually all robots reach the Wait state. Further, if a robot changes its state to Progress at round r, all other robots change their state to Progress at round r as well.
2) Liveness: Each robot executes in the Progress state infinitely often.

Proof

We first prove that property (1) holds under the assumptions. Consider any robot that reaches the Wait state. Observe that it remains in Wait until it receives that all other robots are in the Wait state, or until it receives Vote from some robot. Therefore, the earliest time that a robot can change its state to Vote is when all robots have been in Wait for at least one round. Let r be the round where the last robot changes its state to Wait. Observe that there is a round after r where all robots receive the messages with the Wait state, since the number of consecutive rounds with message loss in the system is less than the voting threshold. Consider the first robot, say i, that hears Wait from all other robots, at some round r'. Then i changes its state to Vote and broadcasts Vote in each subsequent round; it changes its state to Progress when its voting counter reaches the threshold. When any other robot, say j, receives the Vote message from i at any round during the voting interval, it changes its state to Vote and adopts i's counter. Therefore, every other robot changes its state to Progress in the same round as i.

It is not difficult to see that property (2) holds under the assumption that the number of consecutive rounds with message loss is less than the voting threshold.

Observe that, by Theorem 7.1, robots reach a consensus to start a new maneuver at the same time; therefore, the protocol provides a synchronous robot system. Although one can expect the number of consecutive rounds with message loss to be less than the voting threshold, in practice we can have communication faults, i.e., the number of consecutive rounds with message loss can reach the voting threshold, and then properties (1) and (2) may not hold. Indeed, it is known that the consensus problem cannot be solved deterministically if messages can be lost. To alleviate the problem, we consider two operation modes: cooperative mode, where every robot receives the message from every other robot, and autonomous mode, where at least one robot does not receive the message from some other robot. Observe that if there is a partition among the robots such that some robots are in cooperative mode and others are in autonomous mode, a cooperative robotic application may not be completed successfully, or robots may collide. Therefore, the goal is to let robots agree on the operation mode. This is again a consensus problem, and consequently there is no deterministic algorithm that can solve it; however, we can minimize the number of rounds in which robots disagree using the protocol proposed in [8]. In this protocol, when a robot does not receive a message from another robot in a round, or when it receives the autonomous operation mode from any other robot, it changes its operation mode to autonomous. The robots return to cooperative operation mode when they receive the cooperative operation mode from every other robot.
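A sketch of the mode-switching rule just described (the function and flag names are assumptions; the full protocol is given in [8]):

#include <stdbool.h>

typedef enum { COOPERATIVE, AUTONOMOUS } OperationMode;

/* One round of the rule: receivedFromAll is true when a message was received from
 * every other robot in this round, and anyAutonomous is true when at least one of
 * the received messages carried the AUTONOMOUS mode. */
static OperationMode updateMode(bool receivedFromAll, bool anyAutonomous)
{
    if (!receivedFromAll || anyAutonomous)
        return AUTONOMOUS;   /* a loss, or a peer in AUTONOMOUS, forces AUTONOMOUS */
    return COOPERATIVE;      /* every peer was heard and reported COOPERATIVE      */
}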

Lemma 1

(Stream Consensus [8]) Robots may disagree in at most two consecutive rounds.

Consider the case where robots are in the Progress or Wait state. Observe that if the robots change to autonomous mode due to message loss, then it is safe to start the next maneuver when they change back to cooperative mode. From Lemma 1, robots may start within two rounds of difference. However, when robots are in the Vote state, they always send cooperative as the operation mode and do not execute the Stream Consensus protocol. The following theorem summarizes the previous discussion and is presented without proof.

Theorem 7.2

Protocol 2 allows the robots to start a new maneuver within at most two rounds of difference. Further, all robots have common knowledge before starting a new maneuver.

Now we show how to remove the assumption that robots need to know the number of active robots. While executing the protocol, robots implement a membership service: each robot maintains the set of participating robots from which it receives messages. Further, each robot maintains the set of robots that are in the Wait state and broadcasts this set while it is in the Wait state. A robot decides to change its state to Vote when it receives the same Wait set from every other robot.
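A sketch of the resulting membership check (the bitmask representation and names are assumptions for illustration):

#include <stdint.h>
#include <stdbool.h>

/* members: bitmask of the participants this robot has received messages from.
 * waitSet: bitmask of the participants reported to be in the Wait state.
 * The robot moves from Wait to Vote once every participant it knows of is
 * reported to be in the Wait state. */
static bool readyToVote(uint32_t members, uint32_t waitSet)
{
    return members != 0 && (members & ~waitSet) == 0;
}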

8 Virtual Machine

In this section we present the implementation of the programming interface used to build robotic applications in the framework. It consists of three callback functions and one method to send messages; see Interface 3.

1:  void *init()
2:  uint8_t  (*newRound)(CarCommand *command, LDMap *ldmap, uint8_t  globalState, uint32_t clock)
3:  uint8_t  (*endOfRound)(LDMap *ldmap, uint8_t  globalState, uint32_t clock)
4:  void sendMessageData(uint8_t  *payload)
Interface 3 Programming Interface
  • init. It is called when the system starts. It is used to initialize the local state of the robot.

  • newRound. It is called at the beginning of each application round, that is, when the robots change their state to Progress. This function allows setting the robot's commands. Application rounds are suggested to be 1 or 2 seconds long to let the robot make progress. The function receives the local dynamic map, which contains the state of all the participant robots at the end of the previous round; if the operation mode in the local dynamic map is cooperative, then all robots agree on the content of the local dynamic map. It also receives the globalState and the local clock. A sketch of an application using these callbacks is given at the end of this section.

  • endOfRound. The function receives the local dynamic map, which contains the state of all the participant robots at the end of the round, i.e., when the robot completes the maneuver. It also receives the globalState and the local clock. The TIOA is responsible for performing the transitions of globalState accordingly.

  • sendMessageData. It allows broadcasting up to 9 bytes in each round.

We enhance SyROF  by implementing a set of local and distributed queries to simplify the design of cooperative robotic applications:

  • NumberOfMembers. Returns the number of active members performing the current primitive.

  • Leader. Returns the leader defined as the robot with minimum id.

  • ShortestEnclosingCircle. Returns the center and the radius of the shortest circle that contains all the robots.

  • ConvexHull. Returns the list of active robots in the convex hull.

  • MinWeightedMatching. Given a set of points, it returns the matching point for each robot that minimizes the maximum sum of distances from each point to its destination.
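As an end-to-end sketch, the following application moves every robot toward the center of the smallest enclosing circle, which is identical on all robots whenever the operation mode is cooperative. Only Interface 3 and the ShortestEnclosingCircle query are taken from the framework; the Circle type and the field names (centerX, centerY, self.x, self.y, vx, vy) are assumptions for this sketch.

/* Hypothetical TIOA application; all field names are assumptions for this sketch. */
static float goalX, goalY;

void *init(void)
{
    goalX = goalY = 0.0f;                        /* local state initialized at start */
    return NULL;
}

uint8_t newRound(CarCommand *command, LDMap *ldmap, uint8_t globalState, uint32_t clock)
{
    Circle c = ShortestEnclosingCircle(ldmap);   /* identical on all robots in
                                                    cooperative mode                 */
    goalX = c.centerX;
    goalY = c.centerY;
    command->vx = goalX - ldmap->self.x;         /* head toward the common center    */
    command->vy = goalY - ldmap->self.y;
    return globalState;
}

uint8_t endOfRound(LDMap *ldmap, uint8_t globalState, uint32_t clock)
{
    return globalState;                          /* no state transition needed       */
}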

9 Conclusion

In this paper, we present the current work on SyROF, a testbed that allows undergraduate students to implement and demonstrate collaborative robotic applications. SyROF provides a synchronous robotic system for implementing cooperative applications. We expect the testbed to lower the barriers for computer science and computer engineering students at CSULB. We plan to perform extensive experiments with different robots; the implementation so far has demonstrated that SyROF is feasible.

References

  • [1] Haris Balta, Janusz Bedkowski, Shashank Govindaraj, Karol Majek, Pawel Musialik, Daniel Serrano, Kostas Alexis, Roland Siegwart, and Geert De Cubber. Integrated data management for a fleet of search-and-rescue robots. Journal of Field Robotics, 34(3):539–582, 2017.
  • [2] Shih-Yi Chien, Michael Lewis, Siddharth Mehrotra, Nathan Brooks, and Katia Sycara. Scheduling operator attention for multi-robot control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pages 473–479. IEEE, 2012.
  • [3] Reuven Cohen and David Peleg. Local spreading algorithms for autonomous robot systems. Theoretical Computer Science, 399(1-2):71–82, 2008.
  • [4] Duane T Davis, Timothy H Chung, Michael R Clement, and Michael A Day. Consensus-based data sharing for large-scale aerial swarm coordination in lossy communications environments. In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 3801–3808. IEEE, 2016.
  • [5] Barzin Doroodgar, Yugang Liu, and Goldie Nejat. A learning-based semi-autonomous controller for robotic exploration of unknown disaster scenes while searching for victims. IEEE Trans. Cybernetics, 44(12):2719–2732, 2014.
  • [6] Paola Flocchini, Giuseppe Prencipe, Nicola Santoro, and Peter Widmayer. Arbitrary pattern formation by asynchronous, anonymous, oblivious robots. Theoretical Computer Science, 407(1-3):412–447, 2008.
  • [7] Simon J Julier and Jeffrey K Uhlmann. New extension of the kalman filter to nonlinear systems. In Signal processing, sensor fusion, and target recognition VI, volume 3068, pages 182–194. International Society for Optics and Photonics, 1997.
  • [8] Oscar Morales-Ponce, Elad M. Schiller, and Paolo Falcone. How to stop disagreeing and start cooperating in the presence of asymmetric packet loss. Sensors (Basel, Switzerland), 18(4), 2018.
  • [9] Daniel Pickem, Myron Lee, and Magnus Egerstedt. The gritsbot in its natural habitat-a multi-robot testbed. In 2015 IEEE International Conference on Robotics and Automation (ICRA), pages 4062–4067. IEEE, 2015.
  • [10] Carlo Pinciroli, Adam Lee-Brown, and Giovanni Beltrame. A tuple space for data sharing in robot swarms. In Proceedings of the 9th EAI International Conference on Bio-inspired Information and Communications Technologies (formerly BIONETICS), pages 287–294. ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering), 2016.
  • [11] Morgan Quigley, Brian Gerkey, Ken Conley, Josh Faust, Tully Foote, Jeremy Leibs, Eric Berger, Rob Wheeler, and Andrew Ng. ROS: an open-source robot operating system. In ICRA Workshop on Open Source Software, 2009.
  • [12] Ariel Rosenfeld, Noa Agmon, Oleg Maksimov, and Sarit Kraus. Intelligent agent supporting human–multi-robot team collaboration. Artificial Intelligence, 252:211–231, 2017.
  • [13] Michael Rubenstein, Christian Ahler, and Radhika Nagpal. Kilobot: A low cost scalable robot system for collective behaviors. In 2012 IEEE International Conference on Robotics and Automation, pages 3293–3298. IEEE, 2012.
  • [14] Ichiro Suzuki and Masafumi Yamashita. Formation and agreement problems for anonymous mobile robots. In Proceedings of the 31st Annual Allerton Conference on Communication, Control, and Computing, University of Illinois, Urbana, Illinois, October 1993.
  • [15] Vivek Shankar Varadharajan, David St Onge, Christian Guß, and Giovanni Beltrame. Over-the-air updates for robotic swarms. IEEE software, 35(2):44–50, 2018.
  • [16] Xu Wei, Duan Fengyang, Zhang Qingjie, Zhu Bing, and Sun Hongchang. A new fast consensus algorithm applied in rendezvous of multi-uav. In The 27th Chinese Control and Decision Conference (2015 CCDC), pages 55–60. IEEE, 2015.
  • [17] Yukiko Yamauchi, Taichi Uehara, Shuji Kijima, and Masafumi Yamashita. Plane formation by synchronous mobile robots in the three-dimensional euclidean space. Journal of the ACM (JACM), 64(3):16, 2017.
  • [18] Yukiko Yamauchi and Masafumi Yamashita. Pattern formation by mobile robots with limited visibility. In International Colloquium on Structural Information and Communication Complexity, pages 201–212. Springer, 2013.