Sniffy Bug: A Fully Autonomous Swarm of Gas-Seeking Nano Quadcopters in Cluttered Environments

07/12/2021 ∙ by Bardienus P. Duisterhof, et al. ∙ Delft University of Technology

Nano quadcopters are ideal for gas source localization (GSL) as they are safe, agile and inexpensive. However, their extremely restricted sensors and computational resources make GSL a daunting challenge. In this work, we propose a novel bug algorithm named ‘Sniffy Bug’, which allows a fully autonomous swarm of gas-seeking nano quadcopters to localize a gas source in unknown, cluttered and GPS-denied environments. The computationally efficient, mapless algorithm provides for the avoidance of obstacles and other swarm members while pursuing desired waypoints. The waypoints are first set for exploration and, once a single swarm member has sensed the gas, by a particle swarm optimization-based procedure. We evolve all the parameters of the bug (and PSO) algorithm using our novel simulation pipeline, ‘AutoGDM’. It builds on and expands open-source tools to enable fully automated end-to-end environment generation and gas dispersion modeling, allowing for learning in simulation. Flight tests show that Sniffy Bug with evolved parameters outperforms manually selected parameters in cluttered, real-world environments.

I Introduction

Gas source localization (GSL) by autonomous robots is important for search and rescue and inspection, as it is a very dangerous and time-consuming task for humans. A swarm of nano quadcopters is an ideal candidate for GSL in large, cluttered, indoor environments. The quadcopters’ tiny size allows them to fly in narrow spaces, while operating as a swarm enables them to spread out and find the gas source much quicker than a single robot would.

To enable a fully autonomous gas-seeking swarm, the nano quadcopters need to navigate in unknown, cluttered, GPS-denied environments by avoiding obstacles and each other. Currently, indoor swarm navigation is still challenging and an active research topic even for larger quadcopters (≥500 grams) [9, 36]. State-of-the-art methods use heavy sensors like LiDAR and high-resolution cameras to construct a detailed metric map of the environment, while also estimating the robot's position for navigation with Simultaneous Localization And Mapping (SLAM, e.g., ORB-SLAM2 [28]). These methods do not fit within the extreme resource restrictions of nano quadcopters. The payload of nano quadcopters is in the order of grams, ruling out heavy, power-hungry sensors like LiDAR. Furthermore, SLAM algorithms typically require GBs of memory [5] and considerable processing power. One of the most efficient implementations of SLAM runs in real time (18.21 fps) on an ODROID-XU4 [32], which has a CPU with a 4-core cluster @2GHz plus a 4-core cluster @1.3GHz. These properties rule out the use of SLAM on nano quadcopters such as the Bitcraze Crazyflie, which has an STM32F405 processor with 1MB of flash memory and a single core @168MHz. As a result of the severe resource constraints, previous work has explored alternative navigation strategies. A promising solution was introduced in [24], in which a bug algorithm enabled a swarm of nano quadcopters to explore unknown, cluttered environments and come back to the starting location.

Fig. 1: A fully autonomous and collaborative swarm of gas-seeking nano quadcopters, finding and locating an isopropyl alcohol source. The source is visible in the background: a spinning fan above a can of isopropyl alcohol.

Besides navigating, the swarm also needs a robust strategy to locate the gas source, which by itself is a highly challenging task. This is mostly due to the complex spreading of gas in cluttered environments. Moreover, current sensors have poor quality compared to animals’ smelling capabilities [2], which is further complicated by the propellers’ own down-wash [12].

Various solutions to odor source localization have been studied. Probabilistic GSL strategies [29, 33] usually internally maintain a map with probabilities for the odor source location and often simulate the gas distribution. This is computationally challenging for nano quadcopters, a situation that deteriorates when they have to operate in environments with complex shapes, obstacles and a complex airflow field. In contrast, bio-inspired finite-state machines [20, 1] have very low computational requirements, though until now they have focused on deploying a single agent [23]. Moreover, reinforcement and evolutionary learning approaches have been investigated, mostly in simulation [4, 15, 16, 10, 35], but a few works also transferred the learned policy to obstacle-free environments in the real world [21, 14]. A limiting factor for approaches that learn in simulation is that gas dispersion modeling has been a time-intensive process, requiring domain knowledge for accurate modeling. Only a few environments have been made available to the public [26, 7], whereas learning algorithms require many different environments.

Due to the difficulty of the problem in the real world, most experiments involve a single robot seeking an odor source in an obstacle-free environment of modest size, e.g., in the order of 4 × 4 m [8, 3, 27, 34]. Few experiments have been performed in larger areas involving multiple robots. Very promising in this area is the use of particle swarm optimization (PSO) [19], as it is able to deal with the local maxima in gas concentration that arise in more complex environments. Closest to our work is an implementation of PSO on a group of large, outdoor-flying quadcopters [31], using LiDAR and GPS for navigation.

In this article, we introduce a novel PSO-powered bug algorithm, Sniffy Bug, to tackle the problem of GSL in challenging, cluttered, and GPS-denied environments. The nano quadcopters execute PSO by estimating their relative positions and by communicating observed gas concentrations to each other using onboard Ultra-Wideband (UWB) [22]. In order to optimize the parameters of Sniffy Bug with artificial evolution, we also develop and present the first fully Automated end-to-end environment generation and Gas Dispersion Modeling pipeline, which we term AutoGDM. We validate our approach with robot experiments in cluttered environments, showing that evolved parameters outperform manually tuned parameters. This leads to the following three contributions:

  1. The first robotic demonstration of a swarm of autonomous nano quadcopters locating a gas source in unknown, GPS-denied environments with obstacles.

  2. A novel, computationally highly efficient bug algorithm, Sniffy Bug, of which the parameters are evolved for PSO-based gas source localization in unknown, cluttered and GPS-denied environments.

  3. The first fully automated environment generation and gas dispersion modeling pipeline, AutoGDM.

In the remainder of this article, we explain our methodology (Section II), present simulation and flight results (Section III), and draw conclusions (Section IV).

II Method

II-A System Design

Our Bitcraze Crazyflie [13] nano quadcopter (Figure 2) is equipped with sensors for waypoint tracking, obstacle avoidance, relative localization, communication and gas sensing. We use the optical flow deck and IMU sensors for estimating the drone's state and tracking waypoints. Additionally, we use the Bitcraze multi-ranger deck, with four laser range sensors along the drone's positive and negative x and y axes (Figure 2), to sense and avoid obstacles. Finally, we have designed a custom, open-source PCB capable of gas sensing and relative localization. It features a Figaro TGS8100 MEMS MOX gas sensor, which is lightweight, inexpensive, low-power, and was previously deployed onboard a nano quadcopter [6]. We use it to seek isopropyl alcohol, but it is sensitive to many other substances, such as carbon monoxide. The TGS8100 changes resistance based on exposure to gas; this resistance can be computed according to Equation 1.

$R_s = \left(\frac{V_c}{V_{R_L}} - 1\right) \cdot R_L$ (1)

Here $R_s$ is the sensor resistance, $V_c$ the circuit voltage, $V_{R_L}$ the voltage drop over the load resistor in a voltage divider, and $R_L$ the load resistor's resistance. Since different sensors can have different offsets in the sensor reading, we have designed our algorithm not to need absolute measurements like a concentration in ppm. From now on, we report a corrected version of $V_{R_L}$, where higher means more gas. $V_{R_L}$ is corrected by subtracting its initial low-passed reading, taken without any gas present, in order to remove sensor-specific bias.
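As a minimal sketch of this conversion and bias correction, assuming nominal values for the circuit voltage and load resistance (the exact component values are not stated here), the reading could be processed as follows:

```python
# Sketch: convert the divider voltage to sensor resistance (Equation 1)
# and report a bias-corrected gas reading. V_C, R_LOAD, the filter
# constant and the warm-up length are assumed values for illustration.
V_C = 3.0      # assumed circuit voltage [V]
R_LOAD = 10e3  # assumed load resistance [ohm]

def sensor_resistance(v_rl: float) -> float:
    """Equation 1: R_s = (V_c / V_RL - 1) * R_L."""
    return (V_C / v_rl - 1.0) * R_LOAD

class BiasCorrectedGas:
    """Report V_RL minus its initial low-passed reading (no gas present)."""

    def __init__(self, alpha: float = 0.05, warmup_steps: int = 100):
        self.alpha = alpha          # low-pass filter constant
        self.warmup = warmup_steps  # samples used to estimate the bias
        self.bias = None
        self.steps = 0

    def update(self, v_rl: float) -> float:
        if self.steps < self.warmup:  # estimate sensor-specific bias first
            self.bias = v_rl if self.bias is None else \
                (1.0 - self.alpha) * self.bias + self.alpha * v_rl
            self.steps += 1
            return 0.0
        return v_rl - self.bias       # higher corrected reading = more gas
```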

For relative ranging, communication, and localization, we use a Decawave DWM1000 UWB module. Following [22], an extended Kalman filter (EKF) fuses onboard sensing from all agents (velocity, yaw rate, and height) with the UWB range measurements. It does not rely on external systems or magnetometers. Additionally, all agents are programmed to maintain constant yaw, as this further improves the stability and accuracy of the estimates.

Fig. 2: A nano quadcopter capable of autonomous waypoint tracking, obstacle avoidance, relative localization, communication and gas sensing.

II-B Algorithm Design

We design a highly efficient algorithm, both from a computational and sensor perspective. We generate waypoints in the reference frame of each agent using PSO, based on the relative positions and gas readings of all agents. The reference frame of the agent is initialized just before takeoff, and is maintained using dead reckoning by fusing data from the IMU and optic flow sensors. While the reference frame does drift over time, only the drift since the last seen ‘best waypoint’ in PSO will be relevant, as it will be saved in the drifted frame.

The waypoints are tracked using a bug algorithm that follows walls and other objects, moving safely around them. Each agent computes a new waypoint if it reaches within a distance $d_{wp}$ of its previous goal, if the last update was more than $t_{wp}$ seconds ago, or when one of the agents smells a concentration higher than any seen by the swarm during the run and above a pre-defined detection threshold. The detection threshold is in place to avoid reacting to sensor drift and noise. The timeout $t_{wp}$ is necessary, as the predicted waypoint may be unreachable (e.g., inside a closed room). We term each new waypoint, generated when one of the above criteria is met, a ‘waypoint iteration’ (e.g., waypoint iteration five is the fifth waypoint generated during that run).
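A minimal sketch of these criteria, with placeholder values for $d_{wp}$, $t_{wp}$ and the detection threshold (in the actual system these are tuned or evolved parameters):

```python
import time
import numpy as np

def needs_new_waypoint(pos, goal, last_update_t, best_seen_in_run,
                       current_reading, d_wp=0.5, t_wp=10.0,
                       detection_threshold=0.1):
    """True if any of Sniffy Bug's waypoint-iteration criteria is met.
    d_wp, t_wp and detection_threshold are placeholder values."""
    reached = np.linalg.norm(np.asarray(goal) - np.asarray(pos)) < d_wp
    timed_out = (time.time() - last_update_t) > t_wp
    new_best = (current_reading > best_seen_in_run and
                current_reading > detection_threshold)
    return reached or timed_out or new_best
```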

II-B1 Waypoint generation

Each agent computes its next waypoint according to Equation 2.

$\mathbf{g}_i^k = \mathbf{x}_i^k + \mathbf{v}_i^k$ (2)

Here $\mathbf{g}_i^k$ is the goal waypoint of agent $i$ in iteration $k$, $\mathbf{x}_i^k$ its position when generating the waypoint, and $\mathbf{v}_i^k$ its ‘velocity vector’. The velocity vector is determined depending on the drone's mode, which can be either ‘exploring’ or ‘seeking’. ‘Exploring’ is activated when none of the agents has smelled gas, while ‘seeking’ is activated when an agent has detected gas. During exploration, $\mathbf{v}_i^k$ is computed with Equation 3:

$\mathbf{v}_i^k = \omega \left(\mathbf{g}_i^{k-1} - \mathbf{x}_i^k\right) + c_r \left(\mathbf{r}_i^k - \mathbf{x}_i^k\right)$ (3)

Here $\mathbf{g}_i^{k-1}$ is the goal computed in the previous iteration and $\mathbf{r}_i^k$ a random point within a square of size $d_r$ around the agent's current position. A new random point is generated each iteration. Finally, $\omega$ and $c_r$ are scalars that impact the behavior of the agent. The previous goal and best-seen waypoints are initialized randomly in a square of size $d_r$ around the agent. Intuitively, Equation 3 shows the new velocity vector is a weighted sum of: 1) a vector toward the previously computed goal (also referred to as inertia), and 2) a vector towards a random point.

Fig. 3: Sniffy Bug’s three states: line following, wall following, and attraction-repulsion swarming.

After smelling gas, i.e., one of the agents detects a concentration above a pre-defined threshold, we update the waypoints according to Equation 4.

$\mathbf{v}_i^k = \omega \left(\mathbf{g}_i^{k-1} - \mathbf{x}_i^k\right) + r_p c_p \left(\mathbf{p}_i^k - \mathbf{x}_i^k\right) + r_g c_g \left(\mathbf{s}^k - \mathbf{x}_i^k\right)$ (4)

Here $\mathbf{p}_i^k$ is the waypoint at which agent $i$ has seen its highest concentration so far, up to iteration $k$, and $\mathbf{s}^k$ is the swarm's best seen waypoint, up to iteration $k$. $r_p$ and $r_g$ are random values between 0 and 1, generated for each waypoint iteration for each agent. Finally, $c_p$ and $c_g$ are scalars that impact the behavior of the agent. So, again more intuitively, the vector towards the next waypoint is a weighted sum of the vectors towards the agent's previously computed waypoint, the best position seen by the agent, and the best position seen by the swarm. As we will see later, this allows the swarm to converge to high concentrations of gas, whilst still exploring.
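A sketch of the full waypoint update (Equations 2 to 4); the gain values are illustrative and the symbol names are our labels for the quantities defined above:

```python
import numpy as np

rng = np.random.default_rng()

def next_waypoint(x, g_prev, p_best, s_best, smelled_gas,
                  omega=0.5, c_r=0.3, c_p=0.8, c_g=2.0, d_r=5.0):
    """One waypoint iteration for a single agent (illustrative gains)."""
    x, g_prev, p_best, s_best = map(np.asarray, (x, g_prev, p_best, s_best))
    if not smelled_gas:
        # 'exploring' (Equation 3): inertia plus a random point near the agent
        r = x + rng.uniform(-d_r / 2.0, d_r / 2.0, size=2)
        v = omega * (g_prev - x) + c_r * (r - x)
    else:
        # 'seeking' (Equation 4): add personal-best and swarm-best attraction
        r_p, r_g = rng.uniform(0.0, 1.0, size=2)  # fresh randoms each iteration
        v = (omega * (g_prev - x) + r_p * c_p * (p_best - x)
             + r_g * c_g * (s_best - x))
    return x + v  # Equation 2: new goal = position + velocity vector
```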

II-B2 Waypoint tracking

Tracking waypoints in cluttered environments is hard due to the limited sensing and computational resources. Sniffy Bug is designed to operate at constant yaw, and consists of three states (Figure 3): 1) Line Following, 2) Wall Following, and 3) Attraction-Repulsion Swarming.

Line Following – When no obstacles are present, the agent follows a virtual line towards the goal waypoint, making sure to only move in directions in which it can ‘see’ using a laser ranger, improving safety. The agent stays within a set distance of the line, moving as shown in Figure 3.

Wall Following – When the agent detects an obstacle and no other agents are close, it follows the object similar to other bug algorithms [25]. Sniffy Bug's wall-following stage is visible in Figure 3. It is terminated if one of the following criteria is met: 1) a new waypoint has been generated, 2) the agent has avoided the obstacle, or 3) another agent is close. In cases 1 and 2 line following is selected, whereas in case 3 attraction-repulsion swarming is activated. Figure 3 illustrates wall following in Sniffy Bug. The agent starts by determining the laser direction that points most directly to the goal waypoint, laser 3 in this case. It then sets the initial search direction towards the waypoint, anti-clockwise in this case. The agent then looks for laser rangers detecting a safe value (above a safety threshold), starting from laser 3 and moving in the anti-clockwise direction. As a result, we follow the wall in a chainsaw pattern, alternating between lasers 3 and 0. Next, the agent uses odometry to detect that it has avoided the obstacle, by exiting and re-entering the green zone while getting closer to the goal waypoint.
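The laser-selection step of wall following could be sketched as below; the laser numbering, search-direction handling and safety threshold are assumptions based on Figure 3:

```python
import numpy as np

# Assumed numbering: lasers 0-3 along +x, +y, -x, -y in the body frame
LASER_DIRS = [np.array([1.0, 0.0]), np.array([0.0, 1.0]),
              np.array([-1.0, 0.0]), np.array([0.0, -1.0])]

def wall_following_direction(to_goal, lasers, d_safe=0.5, anticlockwise=True):
    """Start from the laser pointing most directly at the goal waypoint and
    search (anti)clockwise for the first laser reading a safe distance."""
    to_goal = np.asarray(to_goal) / np.linalg.norm(to_goal)
    start = int(np.argmax([d @ to_goal for d in LASER_DIRS]))
    step = 1 if anticlockwise else -1
    for k in range(4):
        idx = (start + step * k) % 4
        if lasers[idx] > d_safe:    # first laser that 'sees' a safe distance
            return LASER_DIRS[idx]  # move along that laser's direction
    return None                     # fully enclosed: no safe direction
```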

Attraction-Repulsion Swarming – When the agent detects at least one other agent within a distance $d_s$, it uses attraction-repulsion swarming to avoid other agents and objects while tracking its goal waypoint. This state is terminated when no agent is within $d_s$, selecting ‘line following’ instead. As can be seen in Figure 3 and Equations 5 and 6, the final commanded velocity vector is a sum of repulsive forces away from low laser readings and close agents, plus an attractive force towards the goal waypoint.

$\mathbf{a}_i^t = \sum_{j \neq i} k_s \, \mathrm{ReLU}\!\left(d_s - \lVert \mathbf{x}_{ij}^t \rVert\right) \hat{\mathbf{x}}_{ji}^t + \sum_{l=0}^{3} k_l \, \mathrm{ReLU}\!\left(d_l - y_l^t\right) R \, \hat{\mathbf{u}}_l + v_d \, \frac{\mathbf{w}_i^t}{\lVert \mathbf{w}_i^t \rVert}$ (5)

In Equation 5, $\mathbf{a}_i^t$ is the attraction vector of agent $i$ at time step $t$ (so not iteration $k$), specifying the motion direction. Each time step the agent receives new estimates and re-computes $\mathbf{a}_i^t$. The first term results in repulsion away from other agents that are closer than $d_s$, the second term adds repulsion from laser rangers seeing a value lower than $d_l$, and the third term adds attraction towards the goal waypoint.

In the first term, $k_s$ is the swarm repulsion gain and $d_s$ is the threshold to start avoiding agents. $\mathbf{x}_{ij}^t$ is the vector between agent $i$ and agent $j$ at time step $t$, and $\hat{\mathbf{x}}_{ji}^t$ the unit vector pointing away from agent $j$. The rectified linear unit ($\mathrm{ReLU}$) makes sure only close agents are repulsed. In the second term, $k_l$ is the laser repulsion gain and $d_l$ is the threshold to start repulsing a laser. $y_l^t$ is the reading of laser $l$ at time step $t$, numbered according to Figure 3, and the $\mathrm{ReLU}$ makes sure only lasers recording values lower than $d_l$ are repulsed. $R$ is a rotation matrix that rotates the laser's unit vector $\hat{\mathbf{u}}_l$ away from laser $l$, such that the second term adds repulsion away from low lasers. The third term adds attraction from the agent's current position to the goal: $\mathbf{w}_i^t$ is the vector from agent $i$ to the goal waypoint at time step $t$, scaled to be of size $v_d$, the desired velocity, a user-defined scalar. As a last step, we normalize $\mathbf{a}_i^t$ to have size $v_d$ too, using Equation 6.

$\mathbf{v}_i^t = v_d \, \frac{\mathbf{a}_i^t}{\lVert \mathbf{a}_i^t \rVert}$ (6)

Here $\mathbf{v}_i^t$ is the velocity vector sent to the low-level control loops. Commanding a constant speed prevents both deadlocks, in which the drones hover in place, and peaks in velocity that induce collisions.
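Putting Equations 5 and 6 together, attraction-repulsion swarming can be sketched as follows (illustrative gains, our reconstructed symbol names, and the same assumed laser numbering as above):

```python
import numpy as np

def attraction_repulsion(x_i, goal, peers, lasers,
                         k_s=1.5, d_s=1.5, k_l=1.5, d_l=0.2, v_d=0.5):
    """Commanded velocity from Equations 5 and 6 (sketch)."""
    laser_dirs = [np.array([1.0, 0.0]), np.array([0.0, 1.0]),
                  np.array([-1.0, 0.0]), np.array([0.0, -1.0])]
    relu = lambda z: max(z, 0.0)
    a = np.zeros(2)
    for x_j in peers:  # repulsion away from close agents
        d_ij = np.asarray(x_j, float) - np.asarray(x_i, float)
        dist = max(np.linalg.norm(d_ij), 1e-6)
        a += k_s * relu(d_s - dist) * (-d_ij / dist)
    for l, y in enumerate(lasers):  # repulsion away from low laser readings
        a += k_l * relu(d_l - y) * (-laser_dirs[l])  # rotated 180 degrees
    w = np.asarray(goal, float) - np.asarray(x_i, float)
    a += v_d * w / np.linalg.norm(w)    # attraction towards the goal waypoint
    return v_d * a / np.linalg.norm(a)  # Equation 6: constant commanded speed
```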

II-C AutoGDM

Fully automated gas dispersion modeling based on Computational Fluid Dynamics (CFD) requires three main steps: 1) Environment generation, 2) CFD, and 3) Filament simulation (Figure 4).

II-C1 Environment Generation

We use the procedural environment generator described in [25], which can generate environments of any desired size with a number of requested rooms. Additionally, AutoGDM allows users to insert their own 2D binary occupancy images, making it possible to model any desired 2.5D environment by extruding the 2D image.

II-C2 CFD

CFD consists of two main stages, i.e., meshing and solving (flow field generation). We use the open-source package OpenFOAM [18] for both stages. To feed the generated environments into OpenFOAM, the binary occupancy maps are converted into 3D triangulated surface geometries of the flow volume. We detect the largest volume in the image and declare it our flow volume and test area. We create a mesh using the OpenFOAM blockMesh and snappyHexMesh utilities, and assign inlet and outlet boundary conditions randomly to vertical surfaces. Finally, we use the OpenFOAM pimpleFoam solver to solve for the kinematic pressure $p$ and the velocity vector $\mathbf{U}$.
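Assuming a case directory with the triangulated geometry, mesh dictionaries, and boundary conditions has already been written out, the meshing and solving stages reduce to chaining the standard OpenFOAM utilities; a sketch:

```python
import subprocess
from pathlib import Path

def run_cfd(case_dir: str) -> None:
    """Run AutoGDM's CFD step on a prepared OpenFOAM case (sketch)."""
    case = Path(case_dir)
    for cmd in (["blockMesh"],                    # background hexahedral mesh
                ["snappyHexMesh", "-overwrite"],  # snap mesh to the geometry
                ["pimpleFoam"]):                  # solve for pressure/velocity
        subprocess.run(cmd, cwd=case, check=True)
```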

II-C3 Filament simulation

In the final stage of AutoGDM, we use the GADEN ROS package [26] to model a gas source based on filament dispersion theory. It releases filaments that expand over time and disappear when they find their way to the outlet. The expansion of the filaments and the dispersion rate of the source (i.e., filaments dispersed per second) are random within a user-defined range.

Fig. 4: AutoGDM, a fully automated environment generation and gas dispersion modeling pipeline.

II-D Evolutionary Optimization

We feed the generated gas data into Swarmulator (https://github.com/coppolam/swarmulator), an open-source, lightweight C++ simulator for simulating swarms. The agent is modelled as a point mass, which is velocity-controlled using a P controller. We describe both the environment and the laser rays as sets of lines, making it extremely fast to model laser rangers. An agent ‘crashes’ when one of its laser rangers reads below a minimum safe distance or when another agent comes closer than a minimum separation. The agents are fed with gas data directly from AutoGDM, which is updated at a fixed interval of simulation time.
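The agent model amounts to a few lines; a sketch with an assumed gain and time step:

```python
import numpy as np

def point_mass_step(x, v, v_cmd, k_p=2.0, dt=0.01):
    """Velocity-controlled point mass: a P controller on the velocity error
    drives the acceleration. k_p and dt are assumed values."""
    x, v, v_cmd = (np.asarray(a, dtype=float) for a in (x, v, v_cmd))
    acc = k_p * (v_cmd - v)  # P control on velocity error
    v_new = v + acc * dt
    return x + v_new * dt, v_new
```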

Using this simulation environment, we evolve the parameters of Sniffy Bug with the ‘simple genetic algorithm’ from the open-source PyGMO/PAGMO package [17]. The population consists of 50 individuals and is evolved for 400 generations. The algorithm uses tournament selection, mating through exponential crossover, and mutation using a polynomial mutation operator. The mutation probability is 0.1, while the crossover probability is 0.9. The genome consists of 13 different variables, as shown in Table I, including their ranges set during evolution. Parameters that have a physical meaning when negative are allowed to be negative, while variables such as times and distances need to be positive.

Each agent’s cost is defined as its average distance to source with an added penalty (+ 1.0) for a crash. Even without a penalty the agents will learn to avoid obstacles to some capacity, but a penalty stimulates prudence. Other metrics like ‘time to source’ were also considered, but we found average distance to work best and to be most objective. It leads to finding the most direct paths and staying close to the source.

In each generation, we select 10 environments out of the total of 100 environments generated using AutoGDM. As considerable heterogeneity exists between environments, we may end up with a controller that is only able to solve easy environments. This is known as the problem of hard instances. To tackle this problem, we study the effect of ‘doping’ [30] on performance in simulation. When using doping, the probability of selecting environment number $i$ is:

$P_i = \frac{D_i}{\sum_{j=1}^{100} D_j}$ (7)

Here $D_i$ is the ‘difficulty’ of environment $i$, computed based on previous experience. If environment $i$ is selected to be evaluated, we save the median of all 50 costs in the population. We use the median instead of the mean to avoid a small number of poor-performing agents having a large impact. $D_i$ is the mean of the last three recorded medians. If no history is present, $D_i$ is the average of the difficulties of all environments. Equation 7 implies that we start with an even distribution, but over time give environments with greater difficulty a higher chance to be selected in each draw. When not using doping, we set all $D_i$ equal, resulting in a uniform distribution.
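A sketch of the doping-aware environment draw (Equation 7) and the difficulty update:

```python
import numpy as np

def select_environments(difficulties, n_draws=10, doping=True, rng=None):
    """Draw environment indices for one generation. With doping, environment
    i is selected with probability D_i / sum_j D_j (Equation 7); without
    doping the draw is uniform."""
    rng = rng or np.random.default_rng()
    d = np.asarray(difficulties, dtype=float)
    p = d / d.sum() if doping else np.full(d.size, 1.0 / d.size)
    return rng.choice(d.size, size=n_draws, p=p)

def update_difficulty(median_history, all_difficulties):
    """D_i is the mean of the last three recorded population-cost medians;
    with no history, fall back to the mean difficulty of all environments."""
    if len(median_history) == 0:
        return float(np.mean(all_difficulties))
    return float(np.mean(median_history[-3:]))
```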

Fig. 5: Environment selection probability for all 100 environments at the end of evolution. The environments below the x-axis show that harder environments, with higher $D_i$, contain more obstacles and local optima.

III Results

Variable Manually Selected Evolved Evolution range
0.5 0.271 [-5,5]
0.8 -0.333 [-5,5]
2.0 1.856 [-5,5]
0.3 1.571 [-5,5]
0.7 2.034 [0,5]
10.0 51.979 [0,100]
0.5 2.690 [0,5]
1.5 1.407 [0,5]
1.5 0.782 [0,5]
0.2 0.469 [0,1]
5.0 16.167 [0,20]
15.0 10.032 [0,20]
1.5 0.594 [0,5]
TABLE I: Parameters evolved using doping; consult Section II-B for the meaning of the variables.

III-A Training in Simulation

For evolution, we used AutoGDM to randomly generate 100 environments of 10 × 10 m in size, the size of our experimental arena. This size is sufficiently large for creating environments with separate rooms, in which local maxima of gas concentration may exist. At the same time, it is sufficiently small for exploration by a limited number of robots. We use 3 agents. Figure 5 shows that the generated environments differ in obstacle configuration and gas distribution. During each generation, every genome is evaluated on a random set of 10 out of the total 100 environments, with a fixed maximum duration per run. All headings and starting positions are randomized after each generation. Agents are spawned in a part of the area from which a path exists towards the gas source.

We assess training results for training with doping. Table I shows the evolved parameters in comparison with the manually designed parameters. The evolved random-point range $d_r$ creates random waypoints in a box around the agent during ‘exploring’; this box is further scaled by the evolved gain $c_r$ (Equation 3). When generating new waypoints, the agent has learned to move strongly towards the swarm's best-seen position $\mathbf{s}^k$, away from its personal best-seen position $\mathbf{p}_i^k$, and towards its previously computed goal $\mathbf{g}_i^{k-1}$. We expect this to be best in the environments encountered in evolution, as in most cases only one optimal position (with the highest concentration) exists. Hence, the agent does not need its personal best position to avoid converging to local optima. The inertia term adds ‘momentum’ to the generated waypoints, increasing stability.

The evolved $d_s$ shows that attraction-repulsion swarming is engaged only when another agent is much closer than under the manually chosen threshold. This can be explained by a lower cost when agents stay close to each other after finding the source. It also makes it easier to pass through narrow passages, but could increase the collision risk when agents are close to each other for a long time. Furthermore, the waypoint-reached distance $d_{wp}$ is set much higher than our manual choice, and the timer has been practically disabled ($t_{wp}$ = 51.979). Instead of using the timeout, the evolved version uses PSO to determine the desired direction to follow, until it has travelled far enough and generates a new waypoint. For obstacle avoidance, we see that the manual parameters are more conservative than their evolved counterparts. Being less conservative allows the agents to get closer to the source quicker, at the cost of an increased collision risk. This is an interesting trade-off, balancing individual collision risks against more efficient gas source localization.

After training, the probability for each environment in each draw can be evaluated as a measure of difficulty. Figure 5 shows a histogram of all 100 probabilities along with some environments on the spectrum. Generally, more cluttered environments with more local minima are more difficult.

III-B Baseline Comparison in Simulation

It is difficult to compare Sniffy Bug with a baseline algorithm, since such a baseline would need to seek a gas source in cluttered environments, avoiding obstacles with only four laser rangers and navigating without external positioning. To the best of our knowledge, such an algorithm does not yet exist. Hence, we replace the gas-seeking part of Sniffy Bug (PSO) with two other well-known strategies, leading to (i) Sniffy Bug with anemotaxis, and (ii) Sniffy Bug with chemotaxis. In the case of anemotaxis, inspired by [2], waypoints are placed randomly when no gas is present, and upwind when gas is detected. When the agent loses track of the plume, it generates random waypoints around the last point it has seen inside the plume, until it finds the plume again. Similarly, the chemotaxis baseline uses Sniffy Bug for avoidance of obstacles and other agents, but places waypoints in the direction of the gas concentration gradient if a gradient is present. The chemotaxis baseline determines the gradient by moving a small distance in x and y before computing a new waypoint, while the anemotaxis baseline receives ground-truth airflow data straight from AutoGDM. We expect that the baseline algorithms' sensor measurements, and hence performance, would transfer less well to real experiments. For chemotaxis, estimating the gradient in the presence of noise and drift of the sensor data will yield inaccurate measurements. Anemotaxis based on measuring the weak airflow in indoor environments will be even more challenging.

We evaluate the models in simulation on all 100 generated test environments, randomizing the start positions 10 times in each environment, for a total of 1,000 runs per model. We record three performance metrics: 1) success rate, 2) average distance to source, and 3) average time to source. Success rate is defined as the fraction of runs in which at least one of the agents reaches within a set radius of the source, whereas average time to source is the average time it takes an agent to reach within that radius. For agents not reaching the source, the maximum run duration is taken as the time to source.
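A minimal sketch of these metrics, with an assumed maximum run duration standing in for the capped time to source:

```python
import numpy as np

def run_metrics(times_to_source, max_t=100.0):
    """Success rate and average time to source over a batch of runs.
    `times_to_source` holds one entry per run, None when no agent reached
    the source; `max_t` (an assumed maximum run duration) is then charged
    as the time to source."""
    success_rate = float(np.mean([t is not None for t in times_to_source]))
    avg_time = float(np.mean([t if t is not None else max_t
                              for t in times_to_source]))
    return success_rate, avg_time
```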

Method | Success Rate | Avg Distance to Source [m] | Avg Time to Source [s]
Manual params PSO | 90 % | 3.06 | 51.58
Manual params Anemotaxis | 91 % | 4.12 | 60.56
Manual params Chemotaxis | 80 % | 4.31 | 68.61
Evolved PSO w/o Doping | 89 % | 2.95 | 44.33
Evolved PSO with Doping | 92 % | 2.75 | 41.94

TABLE II: Sniffy Bug evaluated on 100 randomly generated environments.

Table II shows that Sniffy Bug with PSO outperforms anemotaxis and chemotaxis in average time and distance to source, and has a success rate similar to the anemotaxis baseline. Chemotaxis suffers from the gas gradient not always pointing in the direction of the source. Due to local optima, and a lack of observable gradient further away from the source, chemotaxis is not as effective as PSO or anemotaxis. Since we expect the sim2real gap to be much more severe for anemotaxis than for PSO, which only needs gas readings, we decide to proceed with PSO.

Fig. 6: Sniffy Bug with manual parameters, successfully locating the source.
Fig. 7: Sniffy Bug with parameters evolved using doping, finding the source quicker.

III-C Doping in Simulation

We now compare Sniffy Bug using PSO with manual parameters, parameters evolved without doping, and parameters evolved with doping. Table II shows that the parameters evolved without doping find the source quicker, and with a smaller average distance to source, than the manual parameters. However, their success rate is slightly lower than that of the manual parameters. This is caused by the hard-instance problem [30]: the parameters perform well on average but fail in the more difficult environments.

Figures 6 and 7 show simulation runs of the manual version and the version evolved with doping, respectively, with the same initial conditions. The parameters evolved with doping outperform the other controllers in all metrics. Doping helps to tackle harder environments, improving the success rate and thereby the average distance and time to source. This effect is further exemplified by Figure 8. The doping controller not only results in a lower average distance to source, it also shows a smaller spread. Doping reduced the long tail that is present for the parameters evolved without doping.

As we test on a finite set of environments, and the assumptions of normality may not hold, we use the empirical bootstrap [11] to compare the means of the average distance to source. Using 1,000,000 bootstrap iterations, we find that the null hypothesis can be rejected only when comparing the manual parameters with the parameters evolved with doping, showing that parameters evolved with doping perform significantly better than manual parameters.
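A sketch of this empirical bootstrap test on two sets of per-run average distances; `iters` can be lowered for quick checks:

```python
import numpy as np

def means_differ(costs_a, costs_b, iters=1_000_000, alpha=0.05):
    """Empirical bootstrap [11] on the difference of means; returns True if
    the null hypothesis of equal means is rejected at level alpha (sketch)."""
    rng = np.random.default_rng(0)
    a, b = np.asarray(costs_a, float), np.asarray(costs_b, float)
    observed = a.mean() - b.mean()
    diffs = np.empty(iters)
    for i in range(iters):  # resample with replacement, center on observed
        diffs[i] = (rng.choice(a, a.size).mean()
                    - rng.choice(b, b.size).mean()) - observed
    lo, hi = np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
    ci = (observed - hi, observed - lo)  # empirical-bootstrap interval
    return not (ci[0] <= 0.0 <= ci[1])
```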

Fig. 8: Three sets of parameters evaluated in simulation: 1) manually set parameters, 2) parameters evolved without doping, and 3) parameters evolved using doping [30] to address the hard instance problem.

III-D Flight Tests

Finally, we transfer the evolved solution to the real world, validating our methodology. We deploy our swarm in four different environments of 10 × 10 m in size, as shown in Figure 9. We place a small can of isopropyl alcohol with a 5V computer fan above it in the room as the gas source. We compare the manual parameters with the parameters evolved using doping by comparing their recorded gas readings. Each set of parameters is evaluated three times in each environment, resulting in a total of 24 runs. A run is terminated when: 1) the swarm is stationary and close to the source, or 2) at least two members of the swarm crash. Each run is given a minimum duration. Figure 10 corresponds to the run depicted in Figure 9(a).

(a) Environment 1: the agents need to explore the ‘room’ in the corner to smell the gas.
(b) Environment 2: the agents need to go around a long wall to locate the source.
(c) Environment 3: an easier environment with some obstacles.
(d) Environment 4: an empty environment.
Fig. 9: Time-lapse images of real-world experiments in four distinct environment setups, 10 × 10 m in size, seeking a real isopropyl alcohol source. The nano quadcopters' trajectories are visible due to their blue lights.
Fig. 10: Evolved Sniffy Bug seeking an isopropyl alcohol gas source in environment 1. After agent 3 finds the source, all three agents quickly find their way to higher concentrations.

Figure 11 shows the maximum recorded gas reading by the swarm, at each time step for each run. Especially for environment 1, it clearly shows the more efficient source-seeking behavior of our evolved controller. Table III shows the average and maximum concentrations observed by the swarm, averaged over all three runs per environment. It shows that in environments with obstacles, our evolved controller outperforms the manual controller in average observed gas readings and average maximum observed gas readings.

The evolved controller reached the source in 11 out of 12 runs, with one failed run due to converging towards a local optimum in environment 4 (possibly due to sensor drift). The manual parameters failed once in environment 2 and once in environment 3. The manual parameters were also less safe around obstacles, recording a total of 3 crashes in environments 1 and 2. The evolved parameters recorded only one obstacle crash across all runs (in environment 2). We hypothesize that the more efficient, evolved GSL strategy reduces the time spent around dangerous obstacles.

On the other hand, the evolved parameters recorded 2 drone crashes, both in environment 4, when the agents were very close to each other and to the source for extended periods of time. The manual parameters result in more distance between agents, making the swarm more robust against downwash and momentarily poor relative position estimates. This can be avoided in future work by a higher penalty for collisions during evolution, or by classifying a run as a crash already at a larger inter-agent distance.

(a) Environment 1.
(b) Environment 2.
(c) Environment 3.
(d) Environment 4.
Fig. 11: Maximum recorded gas reading by the swarm, for each time step for each run.
Env | Manual Avg ± std | Manual Max ± std | Evolved Avg ± std | Evolved Max ± std
Env 1 | 0.250 ± 0.036 | 0.406 ± 0.049 | 0.330 ± 0.046 | 0.566 ± 0.069
Env 2 | 0.162 ± 0.055 | 0.214 ± 0.070 | 0.165 ± 0.046 | 0.237 ± 0.063
Env 3 | 0.200 ± 0.074 | 0.300 ± 0.103 | 0.258 ± 0.045 | 0.412 ± 0.029
Env 4 | 0.240 ± 0.123 | 0.398 ± 0.143 | 0.176 ± 0.062 | 0.349 ± 0.151

TABLE III: Average and maximum smelled concentration by the swarm, for manual and evolved parameters, averaged over 3 runs.

The results show that AutoGDM can be used to evolve a controller that not only works in the real world in challenging conditions, but even outperforms manually chosen parameters. While GADEN [26] and OpenFOAM [18] are by themselves already validated packages, the results corroborate the validity of our simulation pipeline, from the randomization of the source position and boundary conditions to the simulated drones’ particle motion model.

IV Conclusion

We have introduced a novel bug algorithm, Sniffy Bug, leading to the first fully autonomous swarm of gas-seeking nano quadcopters. The parameters of the algorithm are evolved, outperforming a human-designed controller in all metrics in simulation and robot experiments. We evolve the parameters of the bug algorithm in simulation and successfully transfer the solution to a challenging real-world environment. We also contribute the first fully automated environment generation and gas dispersion modeling pipeline, AutoGDM, that allows for learning GSL in simulation in complex environments.

In future work, our methodology may be extended to larger swarms of nano quadcopters, exploring buildings and seeking a gas source fully autonomously. PSO was designed for large optimization problems with many local optima, and is likely to extend to more complex configurations and to GSL in 3D. Finally, we hope that our approach can serve as inspiration for tackling other complex tasks with swarms of resource-constrained nano quadcopters as well.

References

  • [1] J. Adler (1976) The sensing of chemicals by bacteria. Scientific American 234 (4), pp. 40–47. External Links: ISSN 00368733, 19467087, Link Cited by: §I.
  • [2] M. J. Anderson, J. G. Sullivan, T. Horiuchi, S. B. Fuller, and T. L. Daniel (2020) A bio-hybrid odor-guided autonomous palm-sized air vehicle. Bioinspiration & Biomimetics. External Links: Link Cited by: §I, §III-B.
  • [3] M. Asenov, M. Rutkauskas, D. Reid, K. Subr, and S. Ramamoorthy (2019) Active localization of gas leaks using fluid simulation. IEEE Robotics and Automation Letters 4 (2), pp. 1776–1783. External Links: Document, 1901.09608, ISSN 23773766 Cited by: §I.
  • [4] R. D. Beer and J. C. Gallagher (1992) Evolving dynamical neural networks for adaptive behavior. Vol. 1. External Links: Document, ISSN 17412633. Cited by: §I.
  • [5] B. Bodin, H. Wagstaff, S. Saecdi, L. Nardi, E. Vespa, J. Mawer, A. Nisbet, M. Lujan, S. Furber, A. J. Davison, P. H. J. Kelly, and M. F. P. O’Boyle (2018) SLAMBench2: multi-objective head-to-head benchmarking for visual slam. In 2018 IEEE International Conference on Robotics and Automation (ICRA), Vol. , pp. 3637–3644. External Links: Document Cited by: §I.
  • [6] J. Burgués, V. Hernández, A. J. Lilienthal, and S. Marco (2019) Smelling nano aerial vehicle for gas source localization and mapping. Sensors (Switzerland) 19 (3). External Links: Document, ISSN 14248220 Cited by: §II-A.
  • [7] G. Cabrita, P. Sousa, and L. Marques (2010-11) Player/stage simulation of olfactory experiments. pp. 1120 – 1125. External Links: Document Cited by: §I.
  • [8] X. Chen and J. Huang (2020) Combining particle filter algorithm with bio-inspired anemotaxis behavior: a smoke plume tracking method and its robotic experiment validation. Measurement 154, pp. 107482. External Links: ISSN 0263-2241, Document, Link Cited by: §I.
  • [9] M. Coppola, K. N. McGuire, C. De Wagter, and G. C. H. E. de Croon (2020) A survey on swarming with micro air vehicles: fundamental challenges and constraints. Frontiers in Robotics and AI 7, pp. 18. External Links: Link, Document, ISSN 2296-9144 Cited by: §I.
  • [10] G.C.H.E. de Croon, L.M. O’Connor, C. Nicol, and D. Izzo (2013) Evolutionary robotics approach to odor source localization. Neurocomputing 121 (December), pp. 481–497. External Links: Document, ISSN 09252312 Cited by: §I.
  • [11] B. Efron (1979-01) Bootstrap methods: another look at the jackknife. Ann. Statist. 7 (1), pp. 1–26. External Links: Document, Link Cited by: §III-C.
  • [12] C. Ercolani and A. Martinoli (2020) 3D odor source localization using a micro aerial vehicle: system design and performance evaluation. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 6194–6200. Cited by: §I.
  • [13] W. Giernacki, M. Skwierczyński, W. Witwicki, P. Wroński, and P. Kozierski (2017) Crazyflie 2.0 quadrotor as a platform for research and education in robotics and control engineering. In 2017 22nd International Conference on Methods and Models in Automation and Robotics (MMAR), Vol. , pp. 37–42. Cited by: §II-A.
  • [14] A. T. Hayes, A. Martinoli, and R. M. Goodman (2003) Swarm robotic odor localization: off-line optimization and validation with real robots. Robotica 21 (4), pp. 427–441. External Links: Document Cited by: §I.
  • [15] E. J. Izquierdo and T. Buhrmann (2008) Analysis of a dynamical recurrent neural network evolved for two qualitatively different tasks: walking and chemotaxis. Artificial Life XI: Proceedings of the 11th International Conference on the Simulation and Synthesis of Living Systems, ALIFE 2008, pp. 257–264. External Links: ISBN 9780262750172. Cited by: §I.
  • [16] E. J. Izquierdo and S. R. Lockery (2010) Evolution and analysis of minimal neural circuits for klinotaxis in Caenorhabditis elegans. Journal of Neuroscience 30 (39), pp. 12908–12917. External Links: Document, ISSN 15292401 Cited by: §I.
  • [17] D. Izzo, M. Ruciński, and F. Biscani (2012-01) The generalized island model. Vol. 415, pp. 151–169. External Links: ISBN 978-3-642-28788-6, Document Cited by: §II-D.
  • [18] H. Jasak (2009) OpenFOAM: open source CFD in research and industry. International Journal of Naval Architecture and Ocean Engineering 1 (2), pp. 89 – 94. External Links: ISSN 2092-6782, Document, Link Cited by: §II-C2, §III-D.
  • [19] W. Jatmiko, K. Sekiyama, and T. Fukuda (2007) A PSO-based mobile robot for odor source localization in dynamic advection-diffusion with obstacles environment: theory, simulation and measurement. IEEE Computational Intelligence Magazine 2 (2), pp. 37–51. Cited by: §I.
  • [20] Y. Kuwana, S. Nagasawa, I. Shimoyama, and R. Kanzaki (1999) Synthesis of the pheromone-oriented behaviour of silkworm moths by a mobile robot with moth antennae as pheromone sensors.. Biosensors and Bioelectronics 14 (2), pp. 195 – 202. External Links: ISSN 0956-5663, Document, Link Cited by: §I.
  • [21] Y. Kuwana, I. Shimoyama, Y. Sayama, and H. Miura (1996) Synthesis of pheromone-oriented emergent behavior of a silkworm moth. IEEE International Conference on Intelligent Robots and Systems 3, pp. 1722–1729. External Links: Document, ISBN 078033213X Cited by: §I.
  • [22] S. Li, M. Coppola, C. D. Wagter, and G. C. H. E. de Croon (2020) An autonomous swarm of micro flying robots with range-based relative localization. Note: https://arxiv.org/abs/2003.05853 External Links: 2003.05853 Cited by: §I, §II-A.
  • [23] T. Lochmatter and A. Martinoli (2009) Understanding the potential impact of multiple robots in odor source localization. In Distributed Autonomous Robotic Systems 8, H. Asama, H. Kurokawa, J. Ota, and K. Sekiyama (Eds.), pp. 239–250. External Links: ISBN 978-3-642-00644-9, Document, Link Cited by: §I.
  • [24] K. N. McGuire, C. De Wagter, K. Tuyls, H. J. Kappen, and G. C. H. E. de Croon (2019) Minimal navigation solution for a swarm of tiny flying robots to explore an unknown environment. Science Robotics 4 (35). External Links: Document, Link, https://robotics.sciencemag.org/content/4/35/eaaw9710.full.pdf Cited by: §I.
  • [25] K.N. McGuire, G.C.H.E. de Croon, and K. Tuyls (2019) A comparative study of bug algorithms for robot navigation. Robotics and Autonomous Systems 121, pp. 103261. External Links: ISSN 0921-8890, Document, Link Cited by: §II-B2, §II-C1.
  • [26] J. Monroy, V. Hernandez-Bennetts, H. Fan, A. Lilienthal, and J. Gonzalez-Jimenez (2017) GADEN: a 3D gas dispersion simulator for mobile robot olfaction in realistic environments. MDPI Sensors 17 (7: 1479), pp. 1–16. External Links: ISSN 1424-8220, Link, Document Cited by: §I, §II-C3, §III-D.
  • [27] E. M. Moraud and D. Martinez (2010) Effectiveness and robustness of robot infotaxis for searching in dilute conditions. Frontiers in Neurorobotics 4 (MAR), pp. 1–8. External Links: Document, ISSN 16625218 Cited by: §I.
  • [28] R. Mur-Artal and J. D. Tardós (2017) ORB-SLAM2: an open-source SLAM system for monocular, stereo, and RGB-D cameras. IEEE Transactions on Robotics 33 (5), pp. 1255–1262. External Links: Document Cited by: §I.
  • [29] C. Song, Y. He, B. Ristic, and X. Lei (2020) Collaborative infotaxis: Searching for a signal-emitting source based on particle filter and Gaussian fitting. Robotics and Autonomous Systems 125, pp. 103414. External Links: Document, ISSN 09218890, Link Cited by: §I.
  • [30] P. Spronck, I. Sprinkhuizen-Kuyper, and E. Postma (2008-03) DECA: the doping-driven evolutionary control algorithm. Applied Artificial Intelligence 22, pp. 169–197. External Links: Document. Cited by: §II-D, Fig. 8, §III-C.
  • [31] J. A. Steiner, J. R. Bourne, X. He, D. M. Cropek, and K. K. Leang (2019) Chemical-source localization using a swarm of decentralized unmanned aerial vehicles for urban/suburban environments. ASME Dynamic Systems and Control Conference, DSCC 2019 3. External Links: Document, ISBN 9780791859162 Cited by: §I.
  • [32] E. Tang, S. Niknam, and T. Stefanov (2019) Enabling cognitive autonomy on small drones by efficient on-board embedded computing: an ORB-SLAM2 case study. In 2019 22nd Euromicro Conference on Digital System Design (DSD), Vol. , pp. 108–115. External Links: Document Cited by: §I.
  • [33] M. Vergassola, E. Villermaux, and B. I. Shraiman (2007) ’Infotaxis’ as a strategy for searching without gradients. Nature 445 (7126), pp. 406–409. External Links: Document, ISSN 14764687 Cited by: §I.
  • [34] N. Voges, A. Chaffiol, P. Lucas, and D. Martinez (2014-10) Reactive searching and infotaxis in odor source localization. PLOS Computational Biology 10 (10), pp. 1–13. External Links: Link, Document Cited by: §I.
  • [35] J. L. Wei, Q. H. Meng, C. Yan, M. Zeng, and W. Li (2012) Multi-robot gas-source localization based on reinforcement learning. 2012 IEEE International Conference on Robotics and Biomimetics, ROBIO 2012 - Conference Digest, pp. 1440–1445. External Links: Document, ISBN 9781467321273. Cited by: §I.
  • [36] H. Xu, L. Wang, Y. Zhang, K. Qiu, and S. Shen (2020) Decentralized visual-inertial-uwb fusion for relative state estimation of aerial swarm. IEEE International Conference on Robotics and Automation (ICRA). External Links: ISBN 9781728173955, Link, Document Cited by: §I.