A Swarm of Simple Robots Constructing Planar Shapes

We present a new version of our previously proposed algorithm enabling a swarm of robots to construct a desired shape from objects in the plane. We also describe a hardware realization for this system that makes use of simple and readily sourced components. We refer to the task as planar construction: the gathering of ambient objects into some desired shape. As an example application, a swarm of robots could use this algorithm not only to gather waste material into a pile, but to shape that pile into a line for easy collection. The shape is specified by an image known as the scalar field. The scalar field serves a role analogous to the template pheromones that guide the construction of complex natural structures such as termite mounds. In addition to describing the algorithm and hardware platform, we develop performance insights using a custom simulation environment and present experimental results on physical robots.


I Introduction

A swarm of simple robots able to manipulate objects in their environment and configure them into a desired shape has potential applications in cleaning, waste collection, recycling and construction. So far, collective robot construction has most often considered the use of specially designed materials, paired with custom robot hardware [petersen2019review]. But we believe there is great potential in using readily-sourced components and available robot platforms to build swarms that can manipulate objects in their environment. Our approach requires no exotic hardware and incurs very little computational cost. To specify the shape to be constructed we take inspiration from the social insects, which make use of pheromones to specify the shape of their nest structure, relying on self-organizing processes during its formation [camazine2003self]. Ladley and Bullock proposed a model whereby the stationary queen termite exudes a template pheromone causing workers to deposit material at a characteristic distance from the queen, eventually leading to the construction of a “royal chamber” [ladley05logistic]. We are interested in the intersection of the capabilities of simple readily-sourced robots and the possibilities inherent in a signal playing the role of a template pheromone to construct particular shapes from objects in the environment.

We also draw inspiration from the study of object clustering, pioneered as a concept by Deneubourg et al. [deneubourg90sorting] and demonstrated on real-world robots by a number of researchers (see [vardy14cache] for a review). We are particularly inspired by the work of Gauci et al., who demonstrated an extremely simple object clustering algorithm discovered via evolutionary search [gauci14clustering]. This algorithm drives each robot toward the periphery of the objects within the environment. The robots then begin to encircle the objects while bumping against them, thus shifting them inwards incrementally. We previously combined this approach with the template pheromone concept and proposed an algorithm called orbital construction (OC) [vardy2018orbital]. In this paper we propose a new version of this algorithm that is more flexible and accounts for the presence of other robots, which may block movement. We also present a hardware platform that will allow us to demonstrate the algorithm’s performance.

The OC algorithm, like Gauci et al.’s, is purely reactive and is therefore easy to describe and implement. It also involves very few parameters which need to be tuned. We previously proposed a somewhat more complex algorithm for planar construction which relies upon a set of distinct landmarks to specify the shape of the desired structure [vardy2019landmark]. In addition, we have investigated the use of reinforcement learning for the planar construction problem [strickland19rl]. Using the scalar field as part of the state space proved to be highly successful and allowed for very efficient learning of an effective policy. We have also studied the allocation of different roles to different agents and found that a local communication strategy outperformed global communication [ibrahim19adaptive].

The planar construction problem can be considered a sub-area of collective robotic construction, which was recently reviewed in [petersen2019review]. Planar construction is the formation of a desired two-dimensional structure from ambient objects in the environment. This problem entails a combination of the discovery and transport of objects, as well as manipulating these objects into a desired shape. As such, the problem is related to work on foraging [ostergaard01bucket, shell06foraging, lein08adaptive], clustering objects of a single type [deneubourg90sorting, beckers94stigmergy, maris96heap, martinoli99probabilistic, kazadi02convergence, vardy14cache], sorting objects of different types [melhuish98collective, melhuish01patch, wang03sorting, verret04sorting, melhuish06clustering] and the construction of desired shapes [crabbe1999second, kazadi04swarm, stewart06distributed, soleymani2014autonomous].

The remainder of this paper is organized as follows. Section II will discuss our hardware platform. Section III will cover the algorithm and Section IV will focus on experimental results, both in simulation and on our physical robots. Brief conclusions will follow in Section V.

II Hardware

Our robot, shown in Figure 1, is built upon the Zumo 32U4 robot, a 10 × 10 cm tracked differential-drive platform (https://www.pololu.com/docs/0J63). The Zumo 32U4 is equipped with a linear array of 5 infrared reflectance sensors intended to detect black lines on white surfaces. Each reflectance sensor consists of an infrared emitter/detector pair. We modified 4 of these sensors (outlined in red in the bottom image of Figure 1) by replacing their infrared-spectrum detectors with visible-spectrum (600 nm) phototransistors manufactured by Rohm Semiconductor (RPM-075PTT86). Rather than emitting light and sensing its reflection, our robots operate on a 75-inch diagonal LCD television manufactured by LG (75UK6190), which provides the light source for these phototransistors. The image corresponding to the scalar field is displayed on the television and then sensed. Our algorithm requires sampling the scalar field at three points arranged in a triangle. Since the physical sensors are collinear, we place the values obtained from the left and right sensors into a fixed-length queue to simulate sensors placed further back from the sensor array. The middle two phototransistor values are averaged to produce the centre value of the triangle. This approach is feasible because our robots generally move forwards, so older sensor values are similar to those that would be read by more posterior sensors.
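A minimal C++ sketch of this delayed-sampling scheme follows; the class name, queue depth and integer value range are illustrative assumptions, not taken from the robot’s firmware.

#include <cstddef>

// Sketch of a virtual rear sensor: raw readings pass through a
// fixed-length ring buffer, so the value returned now is the reading
// from DELAY steps ago, approximating a sensor mounted further back
// on the chassis.
template <std::size_t DELAY>
class DelayedSensor {
public:
    int update(int raw) {
        int delayed = buf_[head_];     // oldest stored reading
        buf_[head_] = raw;             // overwrite with the newest
        head_ = (head_ + 1) % DELAY;
        if (head_ == 0) warm_ = true;  // buffer has wrapped at least once
        return warm_ ? delayed : raw;  // fall back to raw until warmed up
    }
private:
    int buf_[DELAY] = {0};
    std::size_t head_ = 0;
    bool warm_ = false;
};

// Usage: delayed left/right samples plus the average of the two middle
// phototransistors form the triangle of scalar-field values.
DelayedSensor<5> leftDelay, rightDelay;  // depth 5 is an assumed value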

Connected to the Zumo is a Pixy (https://pixycam.com/pixy-cmucam5/) vision sensor which performs on-board color segmentation and connected-components labelling. The Pixy produces a list of blocks, classified according to color as ‘puck’ or ‘robot’. Figure 2 shows the Pixy’s view of surrounding robots and pucks (red Lego pieces). Note that a wide-angle lens has been installed on the Pixy. Our algorithm relies upon the detection of pucks or other robots which are known to be on the left or right. Rectangular blocks from the Pixy can easily be classified as lying in the left or right half of the image. Let x indicate the horizontal coordinate of a block’s centre, and let w be the block’s width. If W is the image width and x + w/2 < W/2, then the block is in the left half of the image. If x − w/2 > W/2, then the block is in the right half. A block may also straddle both sides of the image. Note that we assume that the mid-point of the image at x = W/2 is aligned with the robot’s centre, which is the case for our robots.
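This half-image test can be stated compactly in code. The sketch below assumes a simple block record holding the centre x-coordinate and width in pixels; it is not the Pixy library’s actual data structure.

// Classify a detected block as lying in the left or right half of the
// image, or straddling the mid-line. The image mid-point W/2 is assumed
// to be aligned with the robot's centre.
struct Block {
    int x;      // horizontal coordinate of the block's centre (pixels)
    int width;  // block width (pixels)
};

enum class Side { Left, Right, Straddle };

Side classify(const Block& b, int imageWidth) {
    const int mid = imageWidth / 2;
    if (b.x + b.width / 2 < mid) return Side::Left;   // wholly in left half
    if (b.x - b.width / 2 > mid) return Side::Right;  // wholly in right half
    return Side::Straddle;                            // spans the mid-line
}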

The list of blocks detected by the Pixy is transmitted to the Zumo’s ATmega32U4 microcontroller via I2C. The current drawn by the Pixy is quite low (140 mA @ 5V) in comparison to general-purpose vision systems of a similar size and price point (e.g. a Raspberry Pi) allowing the whole system to be powered by the Zumo’s on-board set of 4 standard AA batteries.

Control code executes directly on the Zumo’s microcontroller, the ATmega32U4. Also note that around the robot’s perimeter is a skirt made of foam board attached to an acrylic plate mounted to the Zumo. This skirt gives the robot a pointed wedge allowing it to extract objects that are adjacent to the sides and corners of the environment.

Fig. 1: Top: The robot. Bottom: Underside view of the Zumo32U4 with the 4 replaced reflectance sensors circled in red. The sensors used to produce the values ℓ, c and r are also highlighted in red.
Fig. 2: Image taken from the Pixy camera showing nearby pucks and another robot. The annotated objects (‘puck’ and ‘robot’) are those detected by the Pixy.

III Algorithm

Our algorithm takes as input a set of sensor variables and produces the robot’s forward speed v and angular speed ω. The sensor variables are described below; a minimal grouping of these inputs in code follows the list.


ℓ (Scalar): Value of the scalar field as measured by the left sensor (simulated using a queue, as mentioned earlier).

c (Scalar): Value of the scalar field as measured by the centre sensor. c is obtained by averaging the two middle sensors of the reflectance array (see Figure 1).

r (Scalar): Value of the scalar field as measured by the right sensor (simulated using a queue).

puck_left (Boolean): Indicates the presence of pucks in the left half of the image (i.e. on the robot’s left).

robot_left (Boolean): Indicates the presence of other robots in the left half of the image.

robot_right (Boolean): Indicates the presence of other robots in the right half of the image.
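As mentioned above, the controller’s inputs and output can be grouped as follows; the field names mirror the list but are illustrative, not taken from the original firmware.

// Illustrative containers for the controller's inputs and output.
struct SensorState {
    float l, c, r;           // scalar-field samples (left, centre, right)
    bool  puckLeft;          // puck in the left half of the image
    bool  robotLeft;         // robot in the left half of the image
    bool  robotRight;        // robot in the right half of the image
};

struct Action {
    float v;  // forward speed
    float w;  // angular speed (positive = veer right, by assumption here)
};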

The algorithm (see Algorithm 1) takes inspiration from Gauci et al. in reacting to pucks by moving forward while oscillating between veering left and right, so that the puck’s ‘outer’ edge becomes the fixation point and the robot bumps into that edge, nudging the puck inwards towards the growing structure. When not reacting to a puck or another robot, the algorithm guides the robot in a clockwise pattern around the structure.

A departure from the original algorithm is the way in which the desired shape is encoded in the scalar field. Here the scalar field is set to zero to indicate a goal region where objects are to be collected. In the original algorithm a particular threshold value of the field was sought, but the phototransistors used by our robots are sensitive to various noise sources and we found it more robust to sense the abrupt transition to zero as indicating the goal region.
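A small sketch of the goal-region test under this encoding follows: rather than seeking a particular field value, the controller treats a centre reading at or near zero as black. The threshold here is an assumed noise margin, not a measured constant.

// Goal-zone detection: the scalar field is zero inside the goal region,
// so an abrupt drop to (near) zero on the centre sensor indicates black.
constexpr float kBlackThreshold = 0.05f;  // assumed noise margin

bool indicatesBlack(float centre) {
    return centre < kBlackThreshold;
}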

The algorithm is applied on every time step and is reactive (i.e. stateless) with the exception of the recovery action (lines 1-4), which is used to get the robot unstuck from the boundary or from other robots. The algorithm’s output is an ordered pair (v, ω) giving the forward and angular speeds of the robot. Except when in the recovery state, these values are always chosen so that the robot moves forwards, veering to the left or right.

The original orbital construction algorithm mapped particular orderings of the scalar field sensors into actions [vardy2018orbital], but this mapping was not flexible. In this version the mapping from orderings to actions is given by the parameters PUCK_MASK and ALIGN_MASK. Lines 5-16 determine the ordering of the three scalar field sensors ℓ, c and r, which approximately captures the orientation of the robot with respect to the scalar field. Note that the conditions on lines 19 and 21 stipulate that the actions that follow on lines 20 and 22 (respectively) can only occur for certain values of the order variable, that is, for certain orientations of the robot. As an example, the condition on line 19 becomes active whenever the robot sees a puck on its left, and the robot then turns to collect it. Yet if the ordering corresponds to the sixth bit (order = 0b100000), the robot is turned away from the structure and oriented counter-clockwise. If it tries to collect the puck it will strongly deviate from the nominal clockwise flow around the structure and perhaps get in the way of its peers. The sixth bit of PUCK_MASK should therefore be 0 to prevent this. Similarly, the ALIGN_MASK parameter determines for which orientations the robot will veer left or right if there is no puck on the left.
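The bit-mask machinery can be illustrated in a few lines of C++. The mask test mirrors lines 19 and 21 of Algorithm 1; the permutation-to-bit assignment shown is one consistent choice for illustration, not a verified transcription of the original.

#include <cstdint>

// One-hot encode the ordering of the three scalar-field samples.
uint8_t encodeOrder(float l, float c, float r) {
    if (l <= c && c <= r) return 0b000001;
    if (l <= r && r <= c) return 0b000010;
    if (c <= l && l <= r) return 0b000100;
    if (c <= r && r <= l) return 0b001000;
    if (r <= l && l <= c) return 0b010000;
    return 0b100000;  // r <= c <= l
}

// A masked action may fire only in the orientations whose bits are set.
bool orientationAllowed(uint8_t order, uint8_t mask) {
    return (order & mask) != 0;
}

// Example: with PUCK_MASK = 13 (0b001101) the sixth bit is 0, so the
// puck-gathering turn is suppressed when order == 0b100000.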

The physical arrangement of sensors will dictate the best values to choose for the parameters PUCK_MASK and ALIGN_MASK. We used the simulator described in the next section to exhaustively test all 64 values of both PUCK_MASK and ALIGN_MASK. We found that PUCK_MASK = 13 and ALIGN_MASK = 18 worked best for both single- and multiple-robot simulations.
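The exhaustive sweep is cheap given the simulator’s speed. A sketch of the search loop follows, with runTrial standing in for a simulated episode that returns the final proportion of pucks touching the goal region; the function name and trial count are assumptions.

#include <cstdint>

// Assumed interface: run one simulated episode with the given masks and
// random seed, returning the final proportion of pucks in the goal region.
double runTrial(uint8_t puckMask, uint8_t alignMask, unsigned seed);

void sweepMasks() {
    double bestScore = -1.0;
    int bestPuck = 0, bestAlign = 0;
    for (int pm = 0; pm < 64; ++pm) {      // all 6-bit PUCK_MASK values
        for (int am = 0; am < 64; ++am) {  // all 6-bit ALIGN_MASK values
            const int kTrials = 10;        // assumed trials per setting
            double sum = 0.0;
            for (int t = 0; t < kTrials; ++t)
                sum += runTrial(static_cast<uint8_t>(pm),
                                static_cast<uint8_t>(am), t);
            if (sum / kTrials > bestScore) {
                bestScore = sum / kTrials;
                bestPuck  = pm;
                bestAlign = am;
            }
        }
    }
    // Reported winners of this kind of sweep: PUCK_MASK = 13, ALIGN_MASK = 18.
}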

Given suitable settings for PUCK_MASK and ALIGN_MASK the robots will be guided in a clockwise pattern around the structure. Figure 3 shows vectors corresponding to how robots would move around two different shapes: a line segment and a letter ‘L’.

input  : ℓ, c, r, puck_left, robot_left, robot_right
output : (v, ω) (forward speed, angular speed)
params : v_max, ω_max, v_slow, PUCK_MASK, ALIGN_MASK

// Handle getting stuck
1  if scalar field sensors not changing then
2      set recovery timer
3  if recovery timer not elapsed then
4      return randomized reverse (v, ω)

// Set order as a binary number
5  if ℓ ≤ c ≤ r then
6      order = 0b000001
7  else if ℓ ≤ r ≤ c then
8      order = 0b000010
9  else if c ≤ ℓ ≤ r then
10     order = 0b000100
11 else if c ≤ r ≤ ℓ then
12     order = 0b001000
13 else if r ≤ ℓ ≤ c then
14     order = 0b010000
15 else if r ≤ c ≤ ℓ then
16     order = 0b100000

// Choose action
17 if c indicates black and not robot_left then
18     return (v_max, −ω_max)   // Veer left away from goal zone
19 else if puck_left and not robot_left and (order & PUCK_MASK) ≠ 0 then
20     return (v_max, −ω_max)   // Veer left to gather puck
21 else if not robot_left and (order & ALIGN_MASK) ≠ 0 then
22     return (v_max, −ω_max)   // Veer left to align with scalar field
23 else if not puck_left and not robot_right then
24     return (v_max, ω_max)    // Veer right
25 return (v_slow, 0)           // Go slowly forwards

Algorithm 1: Orbital Construction 2.0
Fig. 3: Vectors showing flow of robots in the absence of pucks or other robots for the line and ‘L’ shapes.

IV Results

IV-A Simulation

We have developed a custom simulation environment (https://github.com/BOTSlab/cwaggle/tree/orbital_av) to predict how the algorithm will perform in different situations. This simulator is a fork of the cwaggle simulator (https://github.com/davechurchill/cwaggle) developed by David Churchill, which achieves a very high update rate through a design that optimizes cache coherency and simulates almost all bodies and sensors as circles. Figure 4 presents a screenshot. The blue robot is highlighted and the circular regions representing its sensors are shown. The larger circular region represents the field-of-view of its camera; for sensing other robots, this field-of-view is restricted to the smaller circle.
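The circles-only, cache-friendly design can be suggested with a structure-of-arrays sketch; this mirrors the cwaggle idea only loosely and the names are ours, not the library’s.

#include <cstddef>
#include <vector>

// Structure-of-arrays world: positions and radii for all bodies (robots
// and pucks) live in contiguous arrays, keeping the collision loop
// cache-friendly. Every interaction is a circle-circle overlap test.
struct CircleWorld {
    std::vector<float> x, y, radius;

    bool overlaps(std::size_t i, std::size_t j) const {
        const float dx = x[i] - x[j];
        const float dy = y[i] - y[j];
        const float rr = radius[i] + radius[j];
        return dx * dx + dy * dy < rr * rr;  // no sqrt on the hot path
    }
};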

At the moment shown in Figure 4 the blue robot senses the puck on the bottom left. The robot would normally turn left to collect this puck, but it also senses the other robot on its left, so it will go slowly forwards as dictated by line 25 of the algorithm.

Fig. 4: Screenshot from our simulator with 4 robots constructing a line segment.

This simulator allows us to make predictions about performance in advance of hardware experiments. It is naturally important to understand how the algorithm performs when the number of robots is varied. Figure 5 shows this variation over 30 trials for each number of robots applied to the line segment shape with 40 pucks present. We measure performance by the proportion of pucks touching the goal region. With 1 robot, a high value is eventually reached, but with 2 or 4 robots a similar proportion is reached much more quickly. However, with 8 or 16 robots the performance suffers as the degree of spatial interference increases.
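The performance measure can be computed directly from the scalar-field image. A sketch follows, where fieldAt is an assumed lookup into the field image, and “touching” is simplified to testing the puck’s centre and four edge points rather than the full disc.

#include <cstddef>
#include <vector>

float fieldAt(float x, float y);  // assumed: sample the scalar-field image

struct Puck { float x, y, radius; };

// Proportion of pucks touching the goal region (field value zero).
float proportionInGoal(const std::vector<Puck>& pucks) {
    if (pucks.empty()) return 0.0f;
    std::size_t count = 0;
    for (const Puck& p : pucks) {
        const float pts[5][2] = {
            {p.x, p.y},
            {p.x + p.radius, p.y}, {p.x - p.radius, p.y},
            {p.x, p.y + p.radius}, {p.x, p.y - p.radius}};
        for (const auto& pt : pts)
            if (fieldAt(pt[0], pt[1]) == 0.0f) { ++count; break; }
    }
    return static_cast<float>(count) / pucks.size();
}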

Fig. 5: Performance in simulator on the line shape when the number of robots is varied. Each trace is an average of 30 trials. Shaded regions with matched colors represent 95% confidence intervals.

To probe the variety of shapes that can be produced we tested an ‘L’ shape. The concavity of this shape presents something of a challenge. The top image in Figure 6 shows the converged shape when the radius of puck sensing is set to the default value of 420 (the simulator uses units of pixels). Clearly the interior of the shape is not well formed. This is because a robot reacts to any puck sensed on its left, so pucks on the bottom-right of the ‘L’ will always draw it away from the interior. We can address this by reducing the radius within which pucks are detected to 100, as shown in the middle image of Figure 6. Here the shape is much more accurate. However, if the puck-sensing radius is reduced further to 80, the robots lose the ability to gather all pucks into the structure, as shown in the bottom image.
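In the simulator the puck-sensing radius acts as a simple visibility filter; a minimal sketch follows, with distances in simulator pixels.

// A puck is visible to the controller only if it lies within the
// puck-sensing radius (e.g. 420 by default, 100 for the good 'L'
// result, 80 when gathering starts to fail). Squared distances
// avoid a sqrt.
bool puckVisible(float robotX, float robotY,
                 float puckX, float puckY, float sensingRadius) {
    const float dx = puckX - robotX;
    const float dy = puckY - robotY;
    return dx * dx + dy * dy <= sensingRadius * sensingRadius;
}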

Fig. 6: Simulation result on the ‘L’ shape with the puck-sensing radius set to 420 (top), 100 (middle) and 80 (bottom).
Fig. 7: Performance in simulator on the line shape when all pucks are randomly repositioned at time step 2500. Conditions as per Figure 5.

Another interesting aspect is the ability of our method to withstand perturbations. In trials involving 4 robots, all pucks were randomly repositioned mid-trial (time step 2500). Figure 7 shows that the system recovers from this perturbation and regains a similar level of performance.

IV-B Physical Robots

We completed trials on our physical robots using a scalar field image specifying a line segment. The robots and 40 square pieces of red Lego were randomly positioned within the arena at the start of each trial. These items were placed at least 5 cm away from the boundary of the environment. Each trial was executed for 4 minutes and judged based on the number of pucks touching the goal region at the end of the trial. For each number of robots the trial was repeated 3 times. No claims of statistical significance can be drawn, but these results do provide a useful qualitative assessment. A summary of these trials is found in Table I. Note that the second column of Table I gives the number of pucks within or touching the goal region at the end of the trial.

Robots  Pucks in goal  Notes
1       39             -
1       36             -
1       31             -
2       40             -
2       36             Robots become stuck and unstuck
2       40             -
3       36             2 robots stuck together throughout majority of run, trapping 4 pucks
3       40             -
3       13             3 robots stuck together during latter half of run, trapping 1 puck
4       6              Various combinations of robots stuck together throughout run
4       40             Various combinations stuck; all but one recover
4       39             Various combinations stuck; all but one recover
TABLE I: Summary of hardware trials.

Figure 8 shows the final result for the third trial with 1 robot and with 4 robots. Ideally all 40 pucks would be touching the goal region, but in the trial shown in Figure 8 (top), 9 pucks lie wholly outside this region, although they remain quite close. This pattern was consistent across the hardware trials but was not seen in the simulation results. We believe it indicates a blemish or irregularity in the displayed scalar field image. Figure 8 (bottom) shows a more successful final result in that 39 pucks lie within the goal region, with one captured by the robot stuck in the corner.

Fig. 8: Final result from hardware trials for 1 robot (top) and 4 robots (bottom). These are from the third trial listed for each number of robots as given in Table I. The large white vertical bands are reflections of the overhead lights.

V Conclusions

We have presented a hardware platform and a new algorithm for the planar construction problem. Our experimental results are promising but it is clear that interference between robots is impeding performance. We attempted several different approaches to avoid collisions with other robots, but this remains a challenge for reactive robot controllers such as ours. However, we have demonstrated a feasible platform for future research in this direction and have shown that readily available components such as the Zumo, Pixy, phototransistors and a television can be combined into a working swarm robotic system to investigate solutions to the planar construction problem.

References