Adaptive Leader-Follower Formation Control and Obstacle Avoidance via Deep Reinforcement Learning

11/15/2019 ∙ by Yanlin Zhou, et al. ∙ University of Florida

We propose a deep reinforcement learning (DRL) methodology for the tracking, obstacle avoidance, and formation control of nonholonomic robots. By separating vision-based control into a perception module and a controller module, we can train a DRL agent without sophisticated physics or 3D modeling. In addition, the modular framework averts daunting retrains of an image-to-action end-to-end neural network, and provides flexibility in transferring the controller to different robots. First, we train a convolutional neural network (CNN) to accurately localize in an indoor setting with dynamic foreground/background. Then, we design a new DRL algorithm named Momentum Policy Gradient (MPG) for continuous control tasks and prove its convergence. We also show that MPG is robust at tracking varying leader movements and can naturally be extended to problems of formation control. Leveraging reward shaping, features such as collision and obstacle avoidance can be easily integrated into a DRL controller.




I Introduction

The traditional control problem of dynamical systems with nonholonomic constraints is a heavily researched area because of its challenging theoretical nature and its practical use. A wheeled mobile robot (WMR) is a typical example of a nonholonomic system. Researchers in the control community have targeted problems in WMR including setpoint regulation, tracking [7], and formation control [31]. Due to the nature of these problems, the control law design involves sophisticated mathematical derivations and assumptions [13].

One of these problems is image-based localization, in which an autonomous WMR tries to locate its camera pose with respect to the world frame [30]. It is an important problem in robotics, since many other tasks, including navigation, SLAM, and obstacle avoidance, require accurate knowledge of a WMR's pose [30]. PoseNet adopts a convolutional neural network (CNN) for indoor and outdoor localization [19]. Using end-to-end camera pose estimation, the authors sidestep the need for feature engineering. This method was later extended by introducing uncertainty modeling for factors such as noisy environments, motion blur, and silhouette lighting [18]. Recently, there is an emerging trend of localization in dynamic indoor environments [22, 24].

Given accurate localization methods, various vision-based control tasks such as leader following can be accomplished. The leader-following problem is defined as an autonomous vehicle trying to follow the movement of a leader object [27]. The form of a leader is not limited to an actual robot, but can also include virtual markers [29], which can serve as a new method for controlling real robots.

However, a virtual leader is able to pass through territory that a real follower cannot, which requires the follower to perform obstacle avoidance [5]. The major challenge of applying classical control methods to the obstacle avoidance problem is having controllers correctly respond to different obstacles or unreachable areas [6, 9]. A common approach is to design an adaptive or hybrid controller by considering all the cases, which is time-consuming [5].

Formation control is a leader-follower problem with multiple leaders/followers. It includes the additional challenge of collision avoidance among group members [16]. Two major approaches include graph-based solutions with interactive topology and optimization-based solutions [12]. However, the nonlinearity of nonholonomic robots adds to the challenges of modeling robots and reaching multi-agent consensus. Deep RL solves these problems, but training an agent typically requires complex environmental simulation and physics modeling of robots [2, 25].

Fig. 1: A screenshot from our custom dataset.

In this paper, modular localization circumvents the need for an accurate physics model of the WMR or a time-consuming environmental simulation. While end-to-end training from images to control has become popular [20, 4], partitioning the problem into two steps, localization and control, enables retraining the controller while reusing the localization module. The flowchart of our modular design is shown in Figure 2. The first phase of this work applies a CNN model, residual networks (ResNets) [14], to vision-based localization. We focus on indoor localization, which includes challenges such as dynamic foreground, dynamic background, motion blur, and varied lighting. The dynamic foreground and background are realized by giving intermittent views of landmarks. A picture of our training and testing environment is shown in Figure 1. We show that our model accurately predicts the position of robots, which enables DRL without the need for a detailed 3D simulation.

To replace traditional controllers that usually involve complex mathematical modeling and control law design, we leverage DRL for several control problems including leader tracking, formation control, and obstacle avoidance. We propose a new actor-critic algorithm called Momentum Policy Gradient (MPG), an improvement on the TD3 framework that reduces under- and overestimation [11]. We theoretically and experimentally demonstrate MPG's convergence and stability. Furthermore, the proposed algorithm is efficient at solving leader-following problems with regular and irregular leader trajectories. MPG can be extended to solve collision avoidance and formation control problems by simply modifying the reward function.





Fig. 2: Information flow between modules. Blocks are neural networks. (Diagram labels: robot pose; supervised learning; reinforcement learning.)

In summary, our contribution is four-fold.

  • Momentum Policy Gradient, a new algorithm for continuous control tasks that combats under/overestimation.

  • A modular approach that circumvents the need for complex modelling work or control law design.

  • Robust image-based localization achieved using CNNs.

  • Natural extensions from the single leader-follower tracking problem to collision/obstacle avoidance and formation control with reward shaping.

II Mobility of Wheeled Robots

We consider the problem of forward kinematics for WMRs, which has been extensively studied over the decades. A WMR takes two inputs: angular and linear velocities. The velocity inputs are then fed into on-board encoders to generate torque for each wheel. Due to the different sizes of robots, numbers of wheels, and moving mechanisms, robots can be classified into holonomic agents (i.e., omni-directional Mecanum-wheel robots) [17] and nonholonomic agents (i.e., real vehicles with constrained moving direction and speed) [3]. In this paper, we consider nonholonomic agents.

Ideally, angular and linear velocities are the action components of a DRL control agent. However, these two action spaces have very different scales and usually cause size-asymmetric competition [34]. We found that training a DRL agent with angular and linear velocities as actions converges more slowly than the method presented below. Since there is no loss in degree of freedom, it is sufficient to use scalar velocities on the x and y axes, similar to the work done in [31].

Let (x_i, y_i) be the Cartesian position, θ_i the orientation, and v_i and ω_i the linear and angular velocities of WMR agent i. The dynamics of each agent are then

ẋ_i = v_i cos θ_i,  ẏ_i = v_i sin θ_i,  θ̇_i = ω_i.  (1)

The nonholonomic simplified kinematic Equation (2) can then be derived by linearizing (1) with respect to a fixed reference point at distance d off the center of the wheel axis of the robot, where d > 0.

Following the original setting of d in [31], it is trivial to transfer velocity signals as actions used in nonholonomic system control:

v_i = u_xi cos θ_i + u_yi sin θ_i,  ω_i = (u_yi cos θ_i − u_xi sin θ_i) / d,  (2)

where u_xi and u_yi are input control signals to each robot i.

In addition, other differential-drive methods such as the Instantaneous Center of Curvature (ICC) can be used for nonholonomic robots. However, compared to (2), ICC requires more physical details, such as the distance between the centers of the two wheels, and only works for two-wheeled robots [8]. Meanwhile, decomposing velocity dynamics into velocities on the x and y axes can reasonably be applied to any WMR.
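The decomposition above can be sketched in code. The transform below is the standard feedback linearization of the unicycle model about a reference point offset d ahead of the wheel axis; treat it as our reconstruction of Equation (2) rather than the paper's exact implementation:

```python
import math

def decomposed_to_unicycle(ux, uy, theta, d):
    """Map planar velocity commands (ux, uy) for a reference point at
    offset d ahead of the wheel axis into unicycle inputs (v, w)."""
    v = ux * math.cos(theta) + uy * math.sin(theta)
    w = (uy * math.cos(theta) - ux * math.sin(theta)) / d
    return v, w

def unicycle_step(x, y, theta, v, w, dt=0.05):
    """One Euler step of the nonholonomic kinematics (1)."""
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + w * dt)
```

With theta = 0, a command along the heading maps to pure linear velocity, while a lateral command maps to pure rotation, which is why no degree of freedom is lost.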

III Localization

The localization module focuses on the problem of estimating pose directly from images in a noisy environment with both dynamic background and foreground. Several landmarks (i.e., books, other robots) were placed so as to be visible in the foreground. Using landmarks as references for indoor localization tasks has proven successful for learning-based methods [21]. As the WMR moves around the environment, its camera captures only intermittent views of the landmarks, whose positions in the frame change.

Overall, data was gathered from three types of robot trajectories (i.e., regular, random, whirlpool), with multiple trials taking place at different times of day, when the background changes greatly and lighting conditions vary across morning, afternoon, and evening. The image dataset and the ground truth pose of the agent were collected using an HD camera and a motion capture system, respectively. As the WMR moved along a closed path, images and the ground truth poses of the camera and agent were each sampled at a fixed rate. Data collection was performed at the Nonlinear Controls and Robotics Lab. The camera used for recording images is mounted on a TurtleBot at a fixed viewing angle. An example from our dataset is displayed in Figure 1.

Fig. 3: Example snail trajectory with predicted pose.

A ResNet-50 model [14] with a resized output layer is trained from scratch on the dataset. Residual networks contain connections between layers of different depths to improve backpropagation; this eliminates the vanishing-gradient problem encountered when training very deep CNNs. The ResNet predicts the robot's current 2D position, orientation, and distance to nearby landmarks. The distance to landmarks is not used in later modules, but it helps focus attention onto the landmarks, which are a more reliable indicator of current pose than background changes between images.

We claim that our approach is robust enough to accurately predict a robot's pose for even very complex trajectories. These paths can have multiple points of self-intersection. Furthermore, ResNet-based localization works well even with a dynamic foreground (multiple landmarks) and a dynamic background. Different lighting conditions and changes in background objects between trials do not affect the accuracy. The error rates for the robot's 2D coordinates and 4D quaternion orientation are given in Table I, and all are small. Figure 3 shows example predicted and motion-captured poses as well as landmarks; there is almost no difference between the two.

x (%) y (%) q1 (%) q2 (%) q3 (%) q4 (%)
TABLE I: ResNet-50 Pose Prediction Error

IV Momentum Policy Gradient

1: Initialize critic networks Q1, Q2 and actor network π with random parameters
2: Initialize target networks Q'1, Q'2, π' with the same parameters
3: Create empty experience replay buffer B
4: for t = 1 to T do
5:     Take action a = π(s) + exploration noise
6:     Get next state s', reward r from environment
7:     Push (s, a, r, s') into B
8:     Sample mini-batch of transitions from B
9:     Set the target y using the momentum estimate
10:    for i = 1 to 2 do
11:        Minimize the critic loss (Qi(s, a) − y)²
12:        if this is a delayed policy-update step then
13:            Set Q̄ to the average of Q1 and Q2
14:            Minimize the actor loss −Q̄(s, π(s)); softly update target networks
15:        end if
16:    end for
17: end for
Algorithm 1 Momentum Policy Gradient
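Two standard building blocks of the loop above, the experience replay buffer and the soft target-network update, can be sketched as follows (the capacity and tau values are illustrative choices, not values from the paper):

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity experience replay: push transitions, sample mini-batches."""
    def __init__(self, capacity=100_000):
        self.buf = deque(maxlen=capacity)  # oldest transitions evicted first

    def push(self, state, action, reward, next_state):
        self.buf.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # uniform sampling without replacement
        return random.sample(self.buf, batch_size)

def soft_update(target_params, source_params, tau=0.005):
    """Polyak averaging of target-network parameters (plain floats here
    stand in for network weights)."""
    return [tau * s + (1.0 - tau) * t
            for t, s in zip(target_params, source_params)]
```

The slow-moving target networks produced by `soft_update` are what make the bootstrapped targets in Algorithm 1 stable enough to regress against.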

Since nonholonomic controllers have a continuous action space, we design our algorithm based on the framework established by DDPG [23]. There are two neural networks: a policy network that predicts the action a given the state s, and a Q-network that estimates the expected cumulative reward Q(s, a) for each state-action pair.


The Q-network is part of the loss function for the policy network. For this reason, the policy network is called the actor and the Q-network is called the critic.


The critic itself is trained using a loss function derived from the Bellman equation,

L(Q) = E[(Q(s, a) − (r + γ Q'(s', π'(s'))))²].

However, this type of loss leads to overestimation of the true total return [10]. TD3 fixes this by using two Q-value estimators and taking the lesser of the two [11]:

y = r + γ min(Q'1(s', a'), Q'2(s', a')).


Note that this is equivalent to taking the maximum and then subtracting the absolute difference:

min(Q'1, Q'2) = max(Q'1, Q'2) − |Q'1 − Q'2|.

However, always choosing the lower value brings underestimation and higher variance.

To lower the variance in the estimate, inspired by the momentum used for optimization in [36], we propose Momentum Policy Gradient, illustrated in Algorithm 1, which averages the current critic difference Δt = |Q'1 − Q'2| with the previous difference Δt−1:

y = r + γ ( max(Q'1, Q'2) − (Δt + Δt−1)/2 ).
This combats overestimation bias more aggressively than just taking the minimum of Q'1 and Q'2. Moreover, it counters any over-tendency TD3 might have towards underestimation. Because neural networks are randomly initialized, by pure chance the current difference Δt = |Q'1 − Q'2| could be large. However, it is unlikely that Δt and Δt−1 are both large, as they are computed using different batches of data. Thus the averaged difference has a lower variance than Δt alone.
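The two target computations can be written concretely as follows; the momentum form is our reading of the description above, so treat its exact shape as an assumption:

```python
import numpy as np

def td3_target(r, gamma, q1_next, q2_next):
    """Clipped double-Q target: take the lesser of the two target critics."""
    return r + gamma * np.minimum(q1_next, q2_next)

def mpg_target(r, gamma, q1_next, q2_next, prev_abs_diff):
    """Momentum target: subtract from the max the average of the current
    and previous absolute critic differences. Returns the target and the
    new absolute difference to carry into the next update."""
    abs_diff = np.abs(q1_next - q2_next)
    momentum = 0.5 * (abs_diff + prev_abs_diff)
    y = r + gamma * (np.maximum(q1_next, q2_next) - momentum)
    return y, abs_diff
```

Since min(a, b) = max(a, b) − |a − b|, the two targets coincide whenever the previous difference equals the current one; averaging merely smooths the subtracted term across batches.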

In the case of negative rewards, the minimum takes the value larger in magnitude. This will actually encourage overestimation in magnitude (here the estimates trend toward −∞).

Before proving the convergence of our algorithm, we first require a lemma proved in [33].

Lemma 1

Consider a stochastic process (ζt, Δt, Ft), t ≥ 0, where ζt, Δt, Ft : X → ℝ, such that

Δt+1(xt) = (1 − ζt(xt)) Δt(xt) + ζt(xt) Ft(xt)

for all x ∈ X and t ≥ 0. Let Pt be a sequence of increasing σ-algebras such that ζ0 and Δ0 are P0-measurable and ζt, Δt, and Ft−1 are Pt-measurable, t ≥ 1. If

  1. the set X is finite,

  2. ζt(xt) ∈ [0, 1], Σt ζt(xt) = ∞, but Σt (ζt(xt))² < ∞

    with probability 1, and ζt(x) = 0 for x ≠ xt,

  3. ‖E[Ft | Pt]‖ ≤ κ‖Δt‖ + ct, where κ ∈ [0, 1) and ct converges to 0 with probability 1,

  4. Var[Ft(xt) | Pt] ≤ K(1 + κ‖Δt‖)² for some constant K, where ‖·‖ denotes the maximum norm,

Then Δt converges to 0 with probability 1.

The theorem and proof of MPG's convergence borrow heavily from those for Clipped Double Q-learning [11].

Theorem 1 (Convergence of MPG update rule)

Consider a finite MDP with discount factor 0 ≤ γ < 1 and suppose

  1. each state-action pair is sampled an infinite number of times,

  2. Q-values are stored in a lookup table,

  3. both critics receive an infinite number of updates,

  4. the learning rates satisfy αt(s, a) ∈ [0, 1], Σt αt(s, a) = ∞, but Σt (αt(s, a))² < ∞ with probability 1, and

  5. Var[r(s, a)] < ∞ for all state-action pairs.

Then Momentum Policy Gradient converges to the optimal value function Q*.

Let , , and . Conditions 1 and 2 of Lemma 1 are satisfied. By the definition of Momentum Policy Gradient,

where . Then


We have split into two parts: a term from standard Q-learning, and times another expression.

As it is well known that , condition (3) of Lemma 1 holds if we can show converges to 0 with probability 1. Let . If with probability 1, then

so . Therefore showing proves that converges to 0.

This clearly converges to 0. Hence Q1 converges to the optimal value function Q* with probability 1 by Lemma 1. The convergence of Q2 follows from a similar argument, with the roles of Q1 and Q2 reversed.

V Continuous Control

(a) Test on random
(b) Test on square
(c) Test on circle
Fig. 4: Performance of a follower trained at 100 episodes with random leader on (a) random trajectory, (b) square leader, (c) circle leader.

We demonstrate MPG on a variety of leader-follower continuous control tasks. In all simulations, neural networks have two hidden layers. Output and input sizes vary depending on the environment and purpose of the model. Constraints on the motion of robots (Table II) are enforced by ending episodes once they are breached; a penalty is also added to the reward for that time step. The hyperparameters of MPG are given in Table III.
TABLE II: Kinematic Constraints of Agents
Hyper-parameter Symbol Value
Actor Learning Rate
Critic Learning Rate
Batch Size
Discount factor
Number of steps in each episode
Training noise variance
Initial exploration noise variance
Minimum exploration noise variance
Exploration noise variance decay rate
TABLE III: MPG Hyperparameters

Suppose there are N agents whose kinematics are governed by Equations (1) and (2). Let p_i denote the state of agent i; the desired formation control for the N agents follows the constraint [28]:

Definition 1

From (9), we consider displacement-based formation control with the updated constraint given as:


Each agent measures the position of other agents with respect to a global coordinate system. However, absolute state measurements with respect to the global coordinate system are not needed. A general assumption made for formation control communication is that every agent's position, trajectory, or dynamics should be partially or fully observable [35, 13].

The leader-follower tracking problem can be viewed as formation control with only two agents.

V-a Tracking

We first test MPG by training a follower agent to track a leader whose position is known. The leader constantly moves without waiting for the follower. The follower agent is punished based on its distance to the leader and rewarded for being very close. Let p_l be the 2D position of the leader and p_f the corresponding position of the follower. The reward function for discrete leader movement is defined in terms of ‖p_l − p_f‖, where ‖·‖ is the L2 norm.
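Since the exact reward expression did not survive extraction, the following is a hedged sketch of the shaping described above (negative distance plus a close-range bonus; `close` and `bonus` are hypothetical constants):

```python
import numpy as np

def tracking_reward(p_leader, p_follower, close=0.1, bonus=1.0):
    """Punish the follower by its L2 distance to the leader; add a bonus
    when it is very close (within `close`)."""
    d = np.linalg.norm(np.asarray(p_leader, dtype=float)
                       - np.asarray(p_follower, dtype=float))
    return -d + (bonus if d < close else 0.0)
```

The dense distance term gives a learning signal at every step, while the bonus sharpens the optimum at the leader's position.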

Then, we train a follower to track a leader moving in a circular pattern. Remarkably, this follower can generalize to a scaled-up or scaled-down trajectory, is robust to perturbations in the initial starting location, and even tracks a leader with an elliptical motion pattern that it has never encountered before. However, the follower fails to track a square leader, which is caused by under-exploration of the environment. The under-exploration issue is further highlighted when training a follower to track a leader with square motion: the model learns to go straight along an edge within a few episodes, but takes far longer to learn to turn the square's corners.

Agents trained against a randomly moving leader generalize well to regular patterns. This is very similar to the discrete courier task in [26]. The random leader provides the follower with a richer set of possible movements compared to the previous settings. As shown in Figure 4, this allows the follower to track even more regular trajectories. The follower is robust to changes in the initial position, as seen in Figure 4(c), where the follower starts outside the circle. The average distance between the follower and the leader and the average reward are given in Table IV.

Average Distance Average Reward
TABLE IV: Random leader, Single Follower results.
Fig. 5: MPG vs. TD3 rewards during training for the circle leader-follower task.
(a) TD3
(b) MPG
Fig. 6: Performance of TD3 and MPG followers trained with identical hyperparameters at 200 episodes.

For comparison, we also trained several followers using TD3. An example reward is displayed in Figure 5. The values are smoothed using a 1D box filter of size 200. For the circle leader task, TD3 struggles to close the loop, slowly drifting away from the leader as time progresses. The MPG-trained agent does not suffer from this problem.

We also noticed that some TD3-trained followers do not move smoothly. This is demonstrated in Figure 6. When trying to track a circle, these agents first dip downward despite the leader moving counter-clockwise.

V-B Formation Control

(a) Rigid Unison with regular leader
(b) Rigid Unison with random leader
(c) Multi-agent Consensus
Fig. 7: Formation control: (a) and (b) use the unison definition while (c) allows the followers to swap positions.

We naturally extend the tracking problem to displacement-based formation control by adding additional trained followers to track the sole leader. However, additional work is needed to achieve collision avoidance among the followers. Based on displacement-based formation control, we design and conduct two simulations similar to [15]:

  • Unison formation control: there is a pre-defined formation with respect to the global frame. All agents maintain a rigid formation throughout the entire movement process, and the orientation of the formation does not change.

  • Consensus formation control: Agents are given freedom to adapt rapidly while keeping the formation. The orientation of the formation can be changed.

In all simulations, a single neural network takes as input the positions of all the agents with respect to a global coordinate system and issues movement commands to all agents in the same coordinate system.

For unison formation control, groups of agents have a rigid formation that should not rotate. Given the position of the leader, each follower should try to minimize its distance to an intended relative location with respect to the leader while avoiding collisions. As the leader moves, the expected positions move in unison. Let d_s be the minimum safety distance, above which collision is avoided. The reward penalizes each follower's distance to its intended location together with a collision term c·n_c, where c is the collision coefficient and n_c is the number of collisions between agents. Upon any collision, we reset the environment and start a new episode.
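A hedged sketch of this unison reward (the exact equation is not recoverable here; `d_safe` and `c_collision` are hypothetical constants):

```python
import numpy as np
from itertools import combinations

def unison_reward(leader_pos, follower_pos, offsets, d_safe=0.2, c_collision=10.0):
    """Penalize each follower's distance to its assigned slot relative to
    the leader, plus a penalty per agent pair closer than d_safe."""
    leader = np.asarray(leader_pos, dtype=float)
    followers = [np.asarray(p, dtype=float) for p in follower_pos]
    # distance of each follower to its fixed (non-rotating) slot
    slot_error = sum(np.linalg.norm(f - (leader + np.asarray(o, dtype=float)))
                     for f, o in zip(followers, offsets))
    # count pairs of agents violating the safety distance
    agents = [leader] + followers
    n_collisions = sum(1 for a, b in combinations(agents, 2)
                       if np.linalg.norm(a - b) < d_safe)
    return -slot_error - c_collision * n_collisions
```

Because the slots are fixed offsets in the global frame, the formation translates with the leader but never rotates, matching the unison definition.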

We then explore unison formation control for a square formation with curved and random leader movements. As seen in Figure 7, three followers move in unison equally well when tracking (a) a leader with a smooth trajectory starting from the lower-left corner and (b) a random leader. During training, we observed that adding more agents results in longer training times: each added agent increases the state and action spaces, and thus our input and output dimensions, by 2. Training time grows approximately linearly in the number of agents, which makes it intractable to train and deploy large formations of hundreds or thousands of robots. The average reward and distances are reported in Table V.

Pattern Reward Dist. to Dist. to Dist. to
TABLE V: Unison average reward and distances
TABLE VI: Consensus average reward and distances

Unlike unison formation control, which dictates the individual motion of all agents, multi-agent consensus keeps a formation with respect to the local frame. The formation can rotate with respect to the global frame. The problem definition allows for switching and expansion within a given topology, as long as there are no collisions. This can be beneficial: for example, the agents may need to move further apart to avoid an obstacle or tighten their formation to fit through a passageway. In general, agents should maintain a constant distance to each other; the reward function penalizes deviations of each pairwise distance from the desired spacing.
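A hedged sketch of such a pairwise-distance reward (exact form and constants are assumptions):

```python
import numpy as np
from itertools import combinations

def consensus_reward(positions, d_star=1.0, d_safe=0.2, c_collision=10.0):
    """Penalize each pairwise distance's deviation from the desired
    spacing d_star; only relative distances matter, so the formation is
    free to rotate or swap members."""
    pts = [np.asarray(p, dtype=float) for p in positions]
    err, collisions = 0.0, 0
    for a, b in combinations(pts, 2):
        d = np.linalg.norm(a - b)
        err += abs(d - d_star)
        collisions += d < d_safe  # safety-distance violation
    return -err - c_collision * collisions
```

Since the reward depends only on pairwise distances, the position-swapping behavior observed in Figure 7(c) incurs no penalty, which is exactly why the agents exploit it.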


In our simulations, we trained follower agents to maintain a triangle formation with a leader undergoing random motion. As shown in Figure 7(c), we test the performance of multi-agent consensus while having the leader traverse a counter-clockwise circular trajectory. One follower starts at a random lower-left position and the other at a random upper-right position. Initially, the three agents are very far from each other but quickly form a triangle as the leader moves along its trajectory. We observe that the followers later swap their local positions within the formation: because the reward function is only interested in their relative distances, the agents can maintain the formation with less total movement by switching their relative positions in the group. Results are reported in Table VI.

Fig. 8: MPG vs. TD3 rewards during training for multi-agent consensus.

For the purpose of comparison, an agent was also trained using TD3. The rewards per time step are shown in Figure 8. These were collected over 5 training runs of 200 episodes each. The MPG curves are longer than the TD3 curves because the MPG networks avoid episode-ending collisions for longer. Hence, MPG-trained agents achieve the desired behavior sooner than the TD3 agents.

V-C Obstacle avoidance

Based on the collision penalty embedded in Equation (14) of formation control, fixed or moving obstacle avoidance can naturally be integrated into the leader following or formation control problems. Instead of control law redesign, obstacle avoidance can easily be achieved by adding an additional term in the reward function.

Fig. 9: Follower avoiding the moving and fixed obstacles.

The simulation setup is illustrated in Figure 9. The leader travels along a straight line. Fixed obstacles on the path of the leader stop only the follower, and a moving obstacle also travels linearly across the scene. The follower, starting from a random location, must track the leader without hitting any obstacles. The reward function augments the tracking reward with a penalty term for proximity to each obstacle, where p_o is the position of an obstacle and a weight controls the relative importance of avoiding the obstacles.
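A hedged sketch of this shape (the exact form of Equation (15) is not recoverable; the inverse-distance penalty and weight `w` are assumptions):

```python
import numpy as np

def obstacle_reward(p_leader, p_follower, obstacles, w=0.5):
    """Tracking term plus a dense penalty that grows as the follower
    approaches any obstacle position in `obstacles`."""
    pf = np.asarray(p_follower, dtype=float)
    track = -np.linalg.norm(np.asarray(p_leader, dtype=float) - pf)
    # dense avoidance term: large when near an obstacle, small when far
    avoid = sum(1.0 / (1e-6 + np.linalg.norm(np.asarray(o, dtype=float) - pf))
                for o in obstacles)
    return track - w * avoid
```

The penalty is nonzero at every step, so the agent receives constant feedback about obstacle proximity rather than a single sparse collision signal.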

We use this reward, instead of a single penalty for colliding with an obstacle, because Equation (15) gives constant feedback. Training with sparse rewards is a big challenge in DRL [32]. In particular, because the obstacle is encountered later on, a sparsely rewarded follower learns to strictly copy the leader's movements without regard for the obstacles; this is a local optimum that the agent fails to escape. For our purposes, learning from a heavily shaped reward is acceptable: the realism of the training setting is irrelevant, as the agent is already in an artificial and simplified environment.

VI Conclusion and Future Work

In this paper, we propose a new DRL algorithm, Momentum Policy Gradient, to solve leader-follower tracking, formation control, and obstacle/collision avoidance problems that are difficult to solve using traditional control methods. These controllers can be trained in a simple toy environment and then plugged into a larger modular framework. The results show that MPG performs well in training agents to tackle a variety of continuous control tasks. Image-based localization is achieved with a ResNet trained on a custom dataset. Analysis demonstrates that the model can reliably predict a WMR's pose even with dynamic foreground/background, changing lighting, and varying views.

These methods are computationally inexpensive. On an M40 Nvidia GPU and an Intel Xeon E5 processor, MPG takes at most hours to fully converge. The localization CNN took longer, training on a multi-gigabyte image dataset; this is because the ResNet-50 model contains many residual blocks, compared to the two hidden layers in our DRL agents.

In the future, we would like to apply MPG to a wider variety of robotics problems. In particular, we believe our current framework can generalize to larger indoor and outdoor settings. Some work has already been done in this field [2, 1], but there are still major challenges that have not been solved. One of these is data collection. Currently, we require many samples to train our localization module. However, using the techniques of few/one-shot learning [1] and data augmentation [2], it will no longer be necessary to collect so many samples at different times of day. This opens up the possibility of online learning as a robot explores a new area of the environment.


  • [1] J. Bruce, N. Sünderhauf, P. Mirowski, R. Hadsell, and M. Milford (2017) One-shot reinforcement learning for robot navigation with interactive replay. arXiv preprint arXiv:1711.10137. Cited by: §VI.
  • [2] J. Bruce, N. Sünderhauf, P. Mirowski, R. Hadsell, and M. Milford (2018) Learning deployable navigation policies at kilometer scale from a single traversal. arXiv preprint arXiv:1807.05211. Cited by: §I, §VI.
  • [3] R. L. Bryant (2006) Geometry of manifolds with special holonomy. 150 Years of Mathematics at Washington University in St. Louis: Sesquicentennial of Mathematics at Washington University, October 3-5, 2003, Washington University, St. Louis, Missouri 395, pp. 29. Cited by: §II.
  • [4] H. L. Chiang, A. Faust, M. Fiser, and A. Francis (2018) Learning navigation behaviors end to end. CoRR abs/1809.10124. External Links: Link, 1809.10124 Cited by: §I.
  • [5] J. Chin and H. Tsai (1993) A path algorithm for robotic machining. Robotics and computer-integrated manufacturing 10 (3), pp. 185–198. Cited by: §I.
  • [6] W. E. Dixon, M. S. de Queiroz, D. M. Dawson, and T. J. Flynn (2004) Adaptive tracking and regulation of a wheeled mobile robot with controller/update law modularity. IEEE Transactions on control systems technology 12 (1), pp. 138–147. Cited by: §I.
  • [7] W. Dixon, D. Dawson, E. Zergeroglu, and F. Zhang (2000) Robust tracking and regulation control for mobile robots. International Journal of Robust and Nonlinear Control: IFAC-Affiliated Journal 10 (4), pp. 199–216. Cited by: §I.
  • [8] G. Dudek and M. Jenkin (2010) Computational principles of mobile robotics. Cambridge university press. Cited by: §II.
  • [9] M. S. Fibla, U. Bernardet, and P. F. Verschure (2010) Allostatic control for robot behaviour regulation: an extension to path planning. In 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1935–1942. Cited by: §I.
  • [10] V. François-Lavet, P. Henderson, R. Islam, M. G. Bellemare, J. Pineau, et al. (2018) An introduction to deep reinforcement learning. Foundations and Trends in Machine Learning 11 (3-4), pp. 219–354. Cited by: §IV.
  • [11] S. Fujimoto, H. van Hoof, and D. Meger (2018) Addressing function approximation error in actor-critic methods. arXiv preprint arXiv:1802.09477. Cited by: §I, §IV, §IV.
  • [12] X. Ge, Q. Han, D. Ding, X. Zhang, and B. Ning (2018) A survey on recent advances in distributed sampled-data cooperative control of multi-agent systems. Neurocomputing 275, pp. 1684–1701. Cited by: §I.
  • [13] Z. Han, K. Guo, L. Xie, and Z. Lin (2019) Integrated relative localization and leader–follower formation control. IEEE Transactions on Automatic Control 64 (1), pp. 20–34. Cited by: §I, §V.
  • [14] K. He, X. Zhang, S. Ren, and J. Sun (2015) Deep residual learning for image recognition. CoRR abs/1512.03385. External Links: Link, 1512.03385 Cited by: §I, §III.
  • [15] S. He, M. Wang, S. Dai, and F. Luo (2019) Leader–follower formation control of usvs with prescribed performance and collision avoidance. IEEE Transactions on Industrial Informatics 15 (1), pp. 572–581. Cited by: §V-B.
  • [16] Y. Hong, G. Chen, and L. Bushnell (2008) Distributed observers design for leader-following control of multi-agent networks. Automatica 44 (3), pp. 846–850. Cited by: §I.
  • [17] B. E. Ilon (1975-April 8) Wheels for a course stable selfpropelling vehicle movable in any desired direction on the ground or some other base. Google Patents. Note: US Patent 3,876,255 Cited by: §II.
  • [18] A. Kendall and R. Cipolla (2016) Modelling uncertainty in deep learning for camera relocalization. In 2016 IEEE International Conference on Robotics and Automation (ICRA), pp. 4762–4769. Cited by: §I.
  • [19] A. Kendall, M. Grimes, and R. Cipolla (2015) PoseNet: a convolutional network for real-time 6-DOF camera relocalization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2938–2946. Cited by: §I.
  • [20] T. Krajník, F. Majer, L. Halodová, and T. Vintr (2018) Navigation without localisation: reliable teach and repeat based on the convergence theorem. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1657–1664. Cited by: §I.
  • [21] N. Lee, S. Ahn, and D. Han (2018) AMID: accurate magnetic indoor localization using deep learning. Sensors 18 (5), pp. 1598. Cited by: §III.
  • [22] R. Li, Q. Liu, J. Gui, D. Gu, and H. Hu (2018) Indoor relocalization in challenging environments with dual-stream convolutional neural networks. IEEE Transactions on Automation Science and Engineering 15 (2), pp. 651–662. Cited by: §I.
  • [23] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra (2015) Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971. Cited by: §IV.
  • [24] Y. Lin, Z. Liu, J. Huang, C. Wang, G. Du, J. Bai, S. Lian, and B. Huang (2018) Deep global-relative networks for end-to-end 6-dof visual localization and odometry. arXiv preprint arXiv:1812.07869. Cited by: §I.
  • [25] Q. Liu and Q. Hui (2018) The formation control of mobile autonomous multi-agent systems using deep reinforcement learning. 13th Annual IEEE International Systems Conference. Cited by: §I.
  • [26] P. Mirowski, M. Grimes, M. Malinowski, K. M. Hermann, K. Anderson, D. Teplyashin, K. Simonyan, A. Zisserman, R. Hadsell, et al. (2018) Learning to navigate in cities without a map. In Advances in Neural Information Processing Systems, pp. 2424–2435. Cited by: §V-A.
  • [27] F. Mutz, V. Cardoso, T. Teixeira, L. F. Jesus, M. A. Golçalves, R. Guidolini, J. Oliveira, C. Badue, and A. F. De Souza (2017) Following the leader using a tracking system based on pre-trained deep neural networks. In Neural Networks (IJCNN), 2017 International Joint Conference on, pp. 4332–4339. Cited by: §I.
  • [28] K. Oh, M. Park, and H. Ahn (2015) A survey of multi-agent formation control. Automatica 53, pp. 424–440. Cited by: §V.
  • [29] R. Olfati-Saber (2004) Flocking for multi-agent dynamic systems: algorithms and theory. Technical report, California Institute of Technology, Pasadena, Control and Dynamical Systems. Cited by: §I.
  • [30] N. Piasco, D. Sidibé, C. Demonceaux, and V. Gouet-Brunet (2018) A survey on visual-based localization: on the benefit of heterogeneous data. Pattern Recognition 74, pp. 90–109. Cited by: §I.
  • [31] W. Ren (2008) Consensus tracking under directed interaction topologies: algorithms and experiments. In 2008 American Control Conference, pp. 742–747. Cited by: §I, §II, §II.
  • [32] T. Salimans and R. Chen (2018) Learning montezuma’s revenge from a single demonstration. arXiv preprint arXiv:1812.03381. Cited by: §V-C.
  • [33] S. Singh, T. Jaakkola, M. L. Littman, and C. Szepesvári (2000) Convergence results for single-step on-policy reinforcement-learning algorithms. Machine Learning 38, pp. 287–308. Cited by: §IV.
  • [34] J. Weiner (1990) Asymmetric competition in plant populations. Trends in ecology & evolution 5 (11), pp. 360–364. Cited by: §II.
  • [35] F. Zegers, H.-Y. Chen, P. Deptula, and W. E. Dixon (2019) A switched systems approach to consensus of a distributed multi-agent system with intermittent communication. In Proc. Am. Control Conf., Cited by: §V.
  • [36] S. Zhang, A. E. Choromanska, and Y. LeCun (2015) Deep learning with elastic averaging sgd. In Advances in Neural Information Processing Systems, pp. 685–693. Cited by: §IV.