Decentralized Collision-Free Control of Multiple Robots in 2D and 3D Spaces

09/18/2017, by Xiaotian Yang, et al.

Decentralized control of robots has attracted huge research interest. However, some of the research used unrealistic assumptions without collision avoidance. This report focuses on collision-free control of multiple robots in both complete coverage and search tasks in arbitrary unknown 2D and 3D areas. All algorithms are decentralized, as robots have limited abilities, and all are mathematically proved. The report starts with grid selection for the two tasks. Grid patterns simplify the representation of the area, and robots only need to move in straight lines between neighboring vertices. For 100% coverage, the equilateral triangular grid is proposed. For complete coverage ignoring the boundary effect, the grid with the fewest vertices is calculated for every situation in both 2D and 3D areas. The second part addresses complete coverage in 2D and 3D areas. A decentralized collision-free algorithm with the above selected grid is presented, driving robots to the sections furthest from the reference point. The area can be static or expanding, and the algorithm is simulated in MATLAB. Thirdly, three grid-based decentralized random algorithms with collision avoidance are provided to search for targets in 2D or 3D areas. The number of targets can be known or unknown. In the first algorithm, robots choose vacant neighbors randomly with priority on unvisited ones, while the second adds a repulsive force to disperse robots when they are close. In the third algorithm, a robot surrounded by visited vertices uses the breadth-first search algorithm to go to one of the nearest unvisited vertices via the grid. The second search algorithm is verified on Pioneer 3-DX robots. The general way to generate a formula estimating the search time is demonstrated. The algorithms are compared with five other algorithms in MATLAB to show their effectiveness.




1.1 Multiple Robots

When finishing some tasks using a single robot, the robot usually needs strong abilities, with advanced sensors and complex algorithms. One popular kind of ground robot in research is the quadruped robot, such as BigDog designed by Boston Dynamics [155] and the Mountainous Bionic Quadruped Robot from China North Industries Corporation [160]. These robots are designed for rough terrain with different obstacles, which may be too steep, muddy or snowy for common wheeled robots. They are also stable under heavy loads and external interference such as pushes from different directions. Correspondingly, those robots need various sensors for the environments, complicated computing and power systems, advanced actuators and sophisticated dynamic controllers. For UAVs, the Parrot AR.Drone has been widely used in research and education [91]. In [162], a trajectory tracking and 3D positioning controller was designed for this drone, using its onboard video camera to build the map of the task area. A single-robot path planning algorithm for complete coverage using this robot was developed in [52]. A famous UUV, used in the search for Malaysia Airlines flight MH370, is the Bluefin-21 robot [21]. Some other robots in research can be found in the survey paper [96].

However, there are many tasks which cannot be done by a single robot. For time-sensitive tasks, such as search and rescue after a disaster or the detection of dangerous leaks of biological, chemical, and radioactive materials, a single robot may be unable to finish the task on time. For a coverage or data collection task in a vast area, one robot alone would not have a sufficiently large sensing range, and a larger sensing range means more expensive equipment. Sometimes a task is designed for multiple members, such as a soccer game, or a target-tracking mission with multiple targets moving in different directions, where one robot cannot follow all of them. Thus, the cooperation of multiple robots is required. The recent developments in micro-electro-mechanical systems, integrated circuits, and wireless communication, as well as the swift progress in algorithm innovation, also attract researchers to the control problem of robot networks. As cheaper robots [5, 81, 152, 72] with simple functions are available [32] to finish a task by cooperation, an expensive robot with advanced capability may be unnecessary.

A system with multiple robots is sometimes called a Wireless Mobile Sensor Network (WMSN), as in [128, 138, 99, 41, 42, 39, 184, 170, 164, 40]. In this report, both terms refer to a group of autonomous mobile robots which carry sensors to detect targets or cover an area. The energy of each robot comes from its battery. Robots can also build a local network to communicate with neighbors and share information. Based on the detected or received information, each robot moves dynamically to finish the task according to the control algorithm. As a considerable number of robots can be used, each of them should have a reasonable price, a carry-on battery, limited sensing ranges for obstacles and targets, and a limited communication range.

The robots in a team do not always need to be homogeneous. They can be heterogeneous, as in the algorithms and experiments in [183, 76, 35, 110]. Due to the complexity of the tasks, especially search and rescue, and the limited abilities of each robot, members of a multi-robot system can have different functions. For example, the HELIOS team with five robots was proposed by Guarnieri et al. for the search and rescue task [66]. In that system, only three robots have laser range finders and cameras to create the virtual 3D map of the task area. The other two robots, with manipulators, execute special tasks including handling objects and opening doors. Sugiyama et al. [189, 190] developed the chain network in a rescue system with multiple robots. The system consists of a base station, some search robots and some relay robots. Different robots have their own behavior algorithms to form the chain network. The method of classification and the behavior algorithm are based on the forwarding table of each robot, constructed for ad-hoc networking. The robots recognized as relay robots then act as information bridges, and frontier robots search the area. Luo et al. [27] exploited a team with both ground and aerial vehicles for search and rescue tasks. The environment mapping was executed by the ground vehicle. The search and localization were performed simultaneously by the micro aerial vehicle with a vertical camera and a horizontal camera. Another two micro ground vehicles, equipped with color sensors, sonar, and compasses, were used as the backup team. [35] proposed a hierarchical system consisting of managers, with advantages in computation, and workers with limited abilities. This system allows processing resources to be shared and the global environment to be divided into local sections for the workers. Worker robots can be separated into planners, explorers, or robots with both jobs based on their abilities, and only deal with their assigned tasks.
Algorithms in this report only require the robots to satisfy certain assumptions about their sensing ranges, communication range, and maximum swing radius; otherwise the robots can be different. In search tasks, all robots do the same work without further actions on the found targets, so there is no need to separate robots into distinct roles.

Multiple robots have many advantages compared with a single robot [219, 161, 59, 152, 214, 11, 12]. Firstly, a system with multiple robots is more efficient, as discussed above, because the robots can share information with neighbors to learn the environment faster and better. Multiple cooperative robots can be separated spatially and temporally, so they can fulfill complex, decomposable tasks efficiently, such as cleaning floors and surveillance. Correspondingly, this may save total consumed energy. Secondly, such a system is reliable, as more robots provide more redundancy than a single robot. If one robot stops working, the other robots can still finish the task if they use an independent algorithm. Through communication, a robot failure may even be detected. Thirdly, the system is robust to errors, because the information of one robot can be checked or rectified using information from other robots. The system is also flexible to different tasks in different areas, for example by reassigning the task of each robot. Finally, a robot team is scalable, with various numbers of members to satisfy the task requirements, although the expense of the robots needs to be considered.

While using a group of robots, many algorithms for a single robot are no longer applicable, and designing algorithms is challenging in several respects. The later sections will discuss these aspects one by one.

1.2 Centralized vs. Decentralized Algorithms

In robot teamwork, communication and control methods are significant and complex topics. The algorithms for multiple robots can be divided into centralized and decentralized ones. In centralized control tasks such as [100, 15, 16], the communication center or the leader robot collects and distributes information to all the others, which makes control and algorithm design relatively easy. However, the leader robot then needs a larger memory and stronger communication ability, which is expensive. Moreover, global information such as Global Positioning System (GPS) data may be needed for synchronization and localization, and all robots must have good communication abilities. These demands may be unrealistic in some scenarios, such as in caves, underwater, or in uncontrolled areas such as drones in an enemy's airspace. In another type of centralized control method, the leader-follower algorithm [45, 75, 67, 68], robots need to keep a very restrictive type of communication topology [48, 53]; namely, they need to stay within each other's communication range. In the rescue network of [189], communication chains must exist continuously from the base station through the relay path to the distant rescue robots reconnoitering disaster areas, and those chains must be transformed to a suitable topology if the target area of exploration changes. Some other papers assumed that the communication graph is minimally rigid [92, 209] or time-invariant and connected [122]. Thus, such methods are used in formation control or for achieving consensus variables such as a common speed or direction, but they cannot work independently in a large area. A fatal disadvantage of the centralized scheme is that if the leader robot breaks down, the whole system stops working. Therefore, centralized control and communication algorithms have limited applications and lead researchers to decentralized methods.

Decentralized control of multiple robots is an attractive and challenging area [138, 6, 137, 82, 196, 125], and it provides the system with benefits in robustness, flexibility, reliability, and scalability according to Zhu [232]. Specifically, this problem describes a group of dynamically decoupled robots; namely, each robot is independent and is not affected by others directly, according to Savkin [163]. Research in this area is largely inspired by the animal aggregation problem, which belongs to the ecology and evolution branches of biology. It is believed by Shaw [180], Okubo [139], Flierl [58] et al. that simple local coordination and control laws are exploited to achieve the highly intelligent aggregation behavior of a group of bees, a school of fish or a swarm of bacteria. Those laws may be used in formation control. As described in [193], efficient animal foraging strategies can be used in coverage and search problems. The evolutionary process of natural selection led to highly efficient and optimal foraging strategies even under physical and biological constraints. This can be seen in the method lobsters use to localize and track odor plumes, or in the bacterial chemotaxis mechanism used by E. coli to respond to the nutrient concentration gradient. Similarly, robots only need a local communication range and small sensing ranges, which are achievable for many cheap robots. Also, less communication is necessary compared with the centralized method, in which gathering information from and sending tasks to each terminal takes much time. Moreover, in some steps of a decentralized algorithm, different members of the team can execute their missions in parallel to improve the efficiency of the work. Both centralized and decentralized communications are used in this report to improve the efficiency of the search or coverage task as well as to save total energy.
Centralized communication is only used to deliver significant information such as the positions of targets or of failed robots, much like a flare gun delivering a distress signal in a real rescue scenario. Decentralized control and communication form the main part of the algorithms in this report, saving time and money and thus making the algorithms more applicable. Initially, the environment is unknown. Each robot then gradually records its sensed map and exchanges maps with the neighbor robots it can communicate with. Thus, robots only use local information to make decisions and will have different neighbors in later steps [82, 125, 11, 12, 42, 170, 176, 77].

In the control engineering literature, ideal communication conditions are usually assumed. However, research in telecommunication has discussed non-ideal communication, including transmission delay due to limited bandwidth, propagation delay due to long transmission distances, and information loss caused by interference, especially in a battlefield where severe interference is generated by the enemy. [140] considered a multi-agent system with communication time-delays, using switching topology and undirected networks, and provided a direct connection between the robustness margin to time-delays and the maximum eigenvalue of the network topology. In [68], with the leader-follower structure, the state of the leader not only kept changing but also might not be measurable; thus, a state estimation method for obtaining the leader's velocity was proposed. In systems with communication constraints, such as limited capacity of communication channels, estimation of the necessary information is needed, using methods of Kalman state estimation [167, 102, 171, 172, 146, 143, 147, 115, 169]. Other hardware limitations may include insufficient memory for a large global map. [34] solved this problem by describing the global map by local maps, explored by a two-tiered A* algorithm; the algorithm can be executed entirely on robots with limited memory. As the energy of mobile robots is limited [98], [99] researched the area covered during the coverage procedure instead of only the final result. It considered the coverage rate at a specific time instant or during a time interval, and also showed how long a location stays covered and uncovered. Then a relation between the detection time for a randomly located target and the hardware parameters was given, to help a user decide the number of robots to use. Ideal communication is assumed in this work: propagation delay is negligible because local communication is fast, the problem of limited bandwidth is addressed by designing a mapping method that minimizes the information needed to describe the area, memory is considered sufficient, and the search time under different parameters is discussed.
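As an illustration of the Kalman state estimation mentioned above, the following is a minimal sketch of a 1D constant-velocity filter tracking a leader's position and velocity from noisy position readings. It is not the estimator of [68]; the model, function name, and noise values are illustrative assumptions.

```python
# Minimal 1D constant-velocity Kalman filter sketch (illustrative values):
# estimates a leader's position and velocity from noisy position readings.
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.25):
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition for (pos, vel)
    H = np.array([[1.0, 0.0]])              # only position is measured
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.zeros((2, 1))                    # initial state estimate
    P = np.eye(2)                           # initial estimate covariance
    estimates = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        y = np.array([[z]]) - H @ x         # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        estimates.append((x[0, 0], x[1, 0]))
    return estimates

# Leader moving at 1 m/s, observed with small alternating noise:
est = kalman_track([i * 1.0 + 0.1 * ((-1) ** i) for i in range(50)])
```

The velocity estimate converges toward the leader's true speed even though only positions are observed, which is the essence of estimating an unmeasured leader state.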

1.3 Complete Coverage and Search

Popular tasks for the cooperation of multiple robots include coverage control, search and rescue, formation control [176, 177, 201, 6, 154], flocking [196, 197, 198, 159], and consensus [57, 74, 140]. This report focuses on complete coverage tasks and search and rescue tasks. Coverage problems were defined differently by various researchers [17, 215, 26, 230] from their own research interests, so the definitions lack generality. The problem can be summarized as going through a target area utilizing the sensors on the robots while achieving requirements such as consuming the least time, traversing the shortest path, or leaving the least uncovered area. As defined by Gage [62], there are three types of coverage problems, namely barrier coverage [15, 16, 41, 40], sweep coverage [178, 40, 43] and blanket coverage [170, 42, 168, 224, 226]. Another two coverage types are dynamic coverage [99] and search. Barrier coverage means the robots are arranged as a barrier which detects invaders passing through it. Sweep coverage can be thought of as a moving barrier covering a particular area, maximizing the detections in a time slot while minimizing the missed detections per area. Blanket coverage is the most difficult: it aims at deploying robots in the whole area so that each point is covered by at least one robot at any time [170], where the selected deployment locations are designed to minimize the uncovered area and the deployment procedure should be as short as possible to reach blanket coverage early. In three-dimensional areas this term is not commonly used for the same problem, which is instead called complete coverage or complete sensing coverage in general for both types of areas.

For complete coverage, there are two types of self-deployment strategies based on the survey [228]: static deployment and dynamic deployment. In static deployment, robots move before the network starts to operate, and the deployment result, such as the coverage rate or the connectivity of the network, is assessed under the fixed network structure or independently of the network state. In dynamic deployment, the condition of the network may change, for example the load balance of the nodes, leading to repositioning of the mobile robotic sensors [179]. The complete coverage in this report first considers 100% coverage of a fixed area as static deployment. Then dynamic self-deployment in an expanding area, such as a demining area in a battlefield which could be enlarged, is handled by a simple modification of the static method. Some deployment methods place all robotic sensors manually [84]; however, the cost is high if the area is large, so the applications are usually indoors, as in [93, 148, 24]. Moreover, in some applications in dangerous or unknown environments, such as demining, human actions are impossible. Thus, in some papers, mobile sensors were randomly scattered from helicopters, cluster bombs or grenade launchers, with assumptions on the distribution pattern [228], such as the uniform distribution in [195], the simple diffusion, and the R random distribution in [80].

Dynamic coverage in [99] is similar to a search task: both describe the coverage problem with continuously moving sensors looking for targets. Movements of the sensors are required because, in a vast area, blanket coverage is not realistic, as it needs a significant number of robots and is thus expensive. Therefore, an algorithm should be designed which uses only a selected number of robots to cover the unknown area completely but dynamically. A section of the area will then be covered in some time slots and uncovered in others; however, dynamic coverage ensures that each point is visited at least once during the whole search procedure. Note that in dynamic coverage or search, mobile sensors do not need to move together in a formation as in sweep coverage. A search task may also be named an area exploration task [126, 190, 17, 25, 35, 76, 108] in mapping-related topics, or called a foraging strategy for animals [185, 149].

In both complete coverage and search problems, some papers used an assumption about the boundary effect [221, 224, 3, 4]. The assumption claims that the length, width (and height, for a 3D area) of the search area are much larger than the sensing range, so that the gaps between the sensed area and the boundary are small enough to be ignored and targets would not appear in those gaps [3]. However, some other papers omitted this assumption but still claimed to achieve complete coverage or a complete search [223, 132, 131, 128, 129, 130, 133, 12, 10, 14, 11]. Those papers can only be correct under additional assumptions about the shape of the boundary and the passage width; for example, the boundary must be simple and smooth enough to allow the equilateral triangle grid to stay inside the border. In this report, complete coverage without exceeding the boundary is achieved by either providing assumptions about the shape of the area [225, 222, 226] or employing the boundary-effect assumption [221, 224].

Current research on coverage and search has mainly addressed 2D areas, which have relatively simple geographical features and apply to flat ground and the water surface. However, there are a considerable number of applications in 3D terrains, such as detecting the location of a forest fire [30], underwater search and rescue [110], oceanographic data collection [211], ocean sampling, ocean resource exploration, tactical surveillance [2, 150, 157], and hurricane tracking and warning [44, 157]. Problems in 3D areas bring challenges in many aspects [55]. The area of interest in 3D spaces, such as an underwater region or airspace, is usually huge, and the telecommunication and sensing ranges need to be improved from circular to spherical, which may be achieved by attaching more sensors to robots or using better sensors. The mobility of robots needs more Degrees of Freedom (DOF), which requires an entirely different mechanical structure as well as advanced controllers and sensors. Thus, tasks in 3D spaces require more expensive hardware on autonomous robots with higher mobility [210]. For connectivity and coverage, some comprehensively researched results in 2D areas are still waiting to be proved in 3D, as discussed in detail in the next section. The complexity of systems and tasks in 3D search and complete coverage leads to extra computational complexity and larger memory requirements, and some 2D algorithms cannot be applied to 3D tasks via simple extensions or generalizations [79]. Nevertheless, several works have researched coverage [216, 145, 213, 3, 128, 224, 111, 129, 133, 55, 79, 150], search [132, 131] and trajectory tracking [162] problems in 3D areas [130]. Those algorithms can be used for flight over 3D terrain [162, 52, 188, 208] and for exploration in underwater environments for rescue tasks [110, 16, 182, 194, 213, 3].
Inspired by the search for MH370 in the underwater environment, this report develops algorithms in 2D areas first and then modifies them to be applicable to 3D areas.

In complete coverage and search, the ranges of the sensors are critical parameters, and there are various assumptions about the sensing abilities [207]. Most papers [223, 132, 131, 128, 129, 130, 133, 12, 10, 14, 11, 221, 224] assumed a fixed circular sensing range in 2D tasks and a spherical range in 3D tasks. However, some papers used a directional sensing range, especially for surveillance tasks using cameras [55, 101, 56, 199]. References [153, 36] considered sensors with a flexible sensing range, so that a combination of different ranges may reach the maximum coverage rate and save energy. [123] examined the adjustment of both the direction and the range. [9] discussed the situation in which sensors are positioned with only approximate accuracy because of practical difficulties, and provided a method to set the (near-)optimal radius or the errors allowed in the placement. Boukerche and Fei [22] used an irregular simple-polygon assumption for the sensing range and solved the problems of recognizing completely covered sensors and finding holes. In this report, all the sensors, for both targets and obstacles, are ideal and omnidirectional.

1.4 Mapping

Geometric maps [20] and topological maps [215, 106] are two kinds of methods for robots to describe the environment. Although the geometric map is more accurate, containing detailed information about every point of the area, it needs a significant amount of memory and considerable time for data processing, especially for data fusion, which deals with a huge amount of data collected from other robots. This kind of work can be seen in navigation using Simultaneous Localization And Mapping (SLAM), such as [20, 89, 95]. In contrast, the topological map only needs certain points of the area, which is more efficient and applicable. To reduce the memory load, simplify the control law and save energy in calculations, a topological map should be used.

A grid map is a kind of topological map, and there are three common ways in the literature to generate one. The first is generating a Voronoi tessellation [213, 184, 23, 124, 49]. According to Aurenhammer [7] and Wang [207], a Voronoi diagram formed by sensors in an area can be defined as a separation of the plane into subsections with one sensor in each subarea, such that the distance from the sensor to any point in its subarea is smaller than the distance from any other sensor to that point. If the subsections of two sensors share a common border, these two sensors are called neighbor sensors. Voronoi cells are usually used in sensor deployment to reach complete coverage. However, with an initial random deployment, the mobile sensors may not be able to communicate with others due to a limited communication range. Another cell division can be seen in [33], which used a square grid map: robots go from the center of one cell to another, and visited cells are remembered. This square grid representation is usually used in graph search and tree search algorithms; see, e.g., [126]. The third way to generate the coverage grid is based on a uniform tessellation in 2D areas and a uniform honeycomb in 3D spaces.

The third method is used in this report and is therefore discussed in detail. Using this approach, robots move between the vertices of the 2D grid, or the centers of the 3D cells, in each step to search the area or reach complete coverage. Therefore, only the coordinates and visitation states of the vertices need to be included in the map, which is simpler than the grid generated by a Voronoi partition. A uniform division is used because each robot has the same ability, so each cell occupied in complete coverage or visited in search should be identical to maximize the available sensing range. It also simplifies algorithm design, as each step has a similar pattern. To cover a 2D area with a grid, a topological map can be made using an equilateral triangular (T) grid, a square (S) grid or a regular hexagonal (H) grid. [87] proposed that the T grid pattern is asymptotically optimal, as it needs the fewest sensors to achieve complete coverage of an arbitrary unknown area. With collisions allowed and the assumption that robots have no volume, Baranzadeh [11, 10, 14, 12] used the T grid, which contains the fewest vertices when the communication range equals √3 times the sensing range. However, in the simulations of those papers, that relation was not always used. In fact, when considering collision avoidance, the volume of the robot and the obstacle sensing range, the T grid does not always contain the fewest vertices in different conditions, and it can only guarantee complete coverage under the boundary-effect assumption [222, 225].
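A minimal sketch of such a T-grid topological map is shown below: vertices spaced √3·r_s apart (the standard spacing for gap-free disc coverage), each carrying a visitation state. The rectangular region and function names are illustrative assumptions, not the report's implementation.

```python
# Sketch: build a topological map as an equilateral triangular (T) grid
# covering a rectangular region.  Vertex spacing is sqrt(3) * r_s, where
# r_s is the sensing radius; the region bounds are illustrative.
import math

def triangular_grid(width, height, r_s):
    side = math.sqrt(3.0) * r_s           # distance between neighbor vertices
    row_h = side * math.sqrt(3.0) / 2.0   # vertical distance between rows
    vertices = {}
    row = 0
    y = 0.0
    while y <= height:
        x = 0.0 if row % 2 == 0 else side / 2.0   # odd rows are offset
        while x <= width:
            vertices[(round(x, 6), round(y, 6))] = False  # False = unvisited
            x += side
        y += row_h
        row += 1
    return vertices

grid = triangular_grid(10.0, 10.0, 1.0)
# A robot arriving at a vertex marks it visited:
first = next(iter(grid))
grid[first] = True
```

Only this dictionary of coordinates and visitation flags needs to be stored and exchanged between neighbors, which keeps the map far smaller than a geometric map of the same area.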

Unlike the proven result in 2D coverage, there is no proven conclusion showing which grid pattern uses the smallest number of sensors to cover a 3D space completely. Coverage-related works in 3D include Kelvin's conjecture [200], on finding the space-filling polyhedron with the largest isoperimetric quotient, and Kepler's conjecture [69], on sphere packing in a cuboid area [213]. Based on the cells proposed in these two conjectures, and considering uniform honeycombs, truncated octahedral (TO) cells, rhombic dodecahedral (RD) cells, hexagonal prismatic (HP) cells, and cubic (C) cells can be used, and the honeycomb with the minimum number of cells under different relations between the communication range and the target sensing range is given in [3, 4]. Then [131, 132, 128, 129, 130, 133] chose the TO grid based on their common assumption of a large ratio of the communication range to the target sensing range. However, [129] did not provide the ratio, and all these papers ignore collisions. In this report, to guarantee a strict 100% 2D complete coverage of the task area, only the T grid can be used, with assumptions about the passage, the relations between different parameters and the curvature of the boundaries, based on [222, 225]. When complete coverage is not required and the boundary-effect assumption is used, the way to choose a suitable grid under different situations for both 2D and 3D areas is discussed. In simulations, the grid pattern is chosen based on the parameters of the potential test robots, so that the simulation results can be applied and verified directly in experiments.

When designing the grid map, some papers considered the connectivity of the network [64, 213, 3, 4, 8, 231]. A mobile sensor may need more than one communication neighbor in case some sensors fail; the extra communication neighbors provide redundancy and improve the reliability of the network. However, some papers only considered the general space-filling problem under the boundary-effect assumption and with no obstacles [3, 4, 8]. The area was restricted to a convex polygon in 2D or a convex polyhedron in 3D [64, 231], as assumptions on passage width and the volume of robots were not given. [213] limited the shape of the area further to a cuboid, based on the sphere packing problem. So, in a real task area with an irregular shape and obstacles inside, their methods for multi-connectivity are invalid. In this report, a realistic unknown area is considered, and each sensor only needs at least one communication neighbor.
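The one-neighbor connectivity requirement can be checked with a simple pairwise-distance test over the vertex positions; the function and point set below are illustrative.

```python
# Sketch: verify that every node has at least one communication neighbor
# within range r_c (the minimal connectivity requirement used here).
import math

def all_connected(vertices, r_c):
    for v in vertices:
        has_neighbor = any(u != v and math.dist(u, v) <= r_c
                           for u in vertices)
        if not has_neighbor:
            return False
    return True

pts = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
ok = all_connected(pts, 1.5)        # every point has a neighbor within 1.5
isolated = all_connected(pts, 0.5)  # at range 0.5, (0, 0) is isolated
```

This O(n²) check is only a sketch; a deployment algorithm would enforce the property by construction, e.g. by choosing a grid spacing no larger than the communication range.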

There are also some algorithms without a map, especially algorithms inspired by animals [71, 205, 227, 135, 19]. Animal foragers need to explore the area by detecting randomly located objects only in a limited vicinity, using a random walk. The best statistical strategy for optimizing the random search depends on the probability distribution of the flight length, as given by [29, 204, 149, 135, 19]. When the targets are sparsely and randomly distributed, the optimal strategy is the Lévy flight (LF) random walk, which can be seen in marine predators, fruit flies, and honey bees [136]. It uses the Lévy distribution to generate the step length and the normal distribution to generate the rotation angle, so that robots have fewer chances of returning to previously visited sites [193, 38, 186, 63, 85, 33, 28]. Thus, robots can move freely and detect all points in the task area. This means the Lévy flight algorithm does not need a map to guarantee a complete search of the area, so exchanging information on the visited areas is unnecessary and only the positions of found targets need to be communicated. On the one hand, this greatly decreases the demand for memory; on the other hand, it wastes communication ability. If the number of targets is unknown, the Lévy flight algorithm cannot be used, as the robots will not know when the area is completely searched since no map is recorded. References [132, 131] considered a combination of the Lévy flight algorithm and the TO grid for both static and moving targets. In those two papers, the step lengths were fixed distances from one vertex to another, and the directions were the corresponding directions from the current vertex to the destination vertex. The combined algorithm decreased the search time compared with either single algorithm. However, it only considered the situation where the number of targets is known and did not consider obstacle avoidance, so the search area was simple. The papers also did not state the sensing range for the boundary, so how to reject a long step that would exceed the boundary is unknown. This report compares the proposed algorithms with the animal-inspired algorithms in [193, 192] and always considers collision avoidance. The proposed algorithms work both when the number of targets is given and when it is unknown.
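A Lévy-flight step generator of the kind described above can be sketched as follows: heavy-tailed step lengths drawn from a truncated power law by inverse-transform sampling, and headings perturbed by a normally distributed turn. The exponent, step bounds, and turn variance are illustrative assumptions, not parameters from the cited papers.

```python
# Sketch of a Levy-flight random walk step: power-law (heavy-tailed) step
# lengths plus normally distributed rotation angles.  All parameter values
# (mu, l_min, l_max, sigma) are illustrative.
import math
import random

def levy_step(heading, mu=2.0, l_min=1.0, l_max=50.0, sigma=0.5):
    # Inverse-transform sample from p(l) ~ l^(-mu), truncated to [l_min, l_max].
    u = random.random()
    a = l_min ** (1.0 - mu)
    b = l_max ** (1.0 - mu)
    length = (a + u * (b - a)) ** (1.0 / (1.0 - mu))
    heading += random.gauss(0.0, sigma)     # normally distributed turn
    return length, heading

random.seed(1)
pos, heading = (0.0, 0.0), 0.0
for _ in range(100):
    step, heading = levy_step(heading)
    pos = (pos[0] + step * math.cos(heading),
           pos[1] + step * math.sin(heading))
```

Most steps are short, but occasional very long steps carry the walker into distant, unexplored regions, which is why the strategy suits sparsely distributed targets.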

Some current algorithms for coverage problems were heuristic [33, 123, 37], but several papers with grid maps still gave rigorous proofs [43, 39]. [170] achieved 100% coverage of an unknown 2D area with a mathematical proof, but its robots exceeded the boundary of the area, which limits its applications to cases such as UAVs covering the ground or the sea surface from open airspace. If there are solid walls or coastlines and ground robots are used, this algorithm cannot work. If the boundary of the area is a borderline in a military application, exceeding the boundary is also forbidden. Although [11] used a T grid inside the boundary, the routes of the robots still exceeded the boundary, as can be seen in its simulation figures. The reason for this problem is that the author treated the assumption that all accessible vertices are connected as equivalent to the assumption that accessible vertices can be reached by moving along the shortest path; however, there may be no vertices to go through on that path. Even if there were vertices in the shortest passage, they might be too close to the boundary considering the volume of the robots. Thus, those vertices should be treated as blocked, and robots should choose a longer path. If two vertices are reached from two unconnected edges, the section between them may still be undetectable because of the short sensing range, which is only required to be greater than zero in [11]. This report proves the convergence of all the proposed algorithms without asking robots to go beyond the boundary.

1.5 Collision-Free Navigation

For a multi-robot system, the increased number of robots leads to a higher chance that robots encounter each other. Moreover, some algorithms for single-robot navigation are no longer applicable. Therefore, collision avoidance is a significant and attractive research topic in multi-robot systems.

Navigation strategies can be categorized into global methods and local methods. The global one is off-line path planning. Given the global information about the environment and obstacles a priori, the robots build a model of the scenario [166] and find an optimal traceable path to solve the problem and avoid collisions [206]. If a path is known, an appropriate velocity profile is planned for collision avoidance [144]. This problem can be thought of as resource allocation, so corresponding algorithms can be utilized. Although this method is robust to deadlock situations, its disadvantages are clear: the environment must be known and structured so that the algorithm can segment it into separate pieces [77]. These disadvantages lead to a centralized formulation of the problem that cannot be conveniently modified into a decentralized approach. [142] proposed an algorithm for a known area. It used ring-shaped local potential fields for path-following control of a nonholonomic robot. In some situations, part of the information is given. For instance, the edge of the area in [220] was known, and in [128, 133] the equation describing the covered area was given. [46] provided semantic information to assist the exploration. In many path planning and navigation problems such as [51], the given knowledge was limited to the direction from the starting point to the target. This is possible in a workspace or on a factory floor by installing radio beacons at the target so that the robot can track the beacon to plan the path.

For the local or reactive method, algorithms are based on sensed and shared local information. The drawback of some algorithms in this category is that they are not based on a kinematic model with realistic constraints of the robot; instead, they use heuristic methods. Without justification and mathematical proof, such methods may fail in special circumstances. For example, the control algorithm in [61] works for two or more robots only under the condition that the robots have a constant speed. If the acceleration is limited for a holonomic robot, a potential field approach can support the control of at most three robots based on [73]. To control an unlimited number of robots without collisions, [187] used robots with unbounded velocity and [202] assigned the velocity of the robot as the control input. Among reactive methods, the artificial potential field algorithm is widely used [18, 182, 54, 109, 158, 142, 73] as it generates a safe path to the target with little computational effort. It builds a gradient field between robots and targets to direct the movement of robots [181]. In that field, goals such as targets generate attractive forces and act like valleys, while obstacles and boundaries generate repulsive forces and act like peaks. The superposition of the two fields drives robots downhill towards targets while moving away from obstacles. One disadvantage of the algorithm is that robots can be trapped in local minima, which results in the failure of the navigation task. Therefore, [193, 192] only used the repulsive force between robots to disperse them in different directions, which is also utilized in this report based on [222, 225, 221].
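The repulsive-only dispersion idea can be sketched as below. This is a hedged sample of the classic inverse-distance repulsion term, not the exact control law of [193, 192] or of this report; the influence radius `d0` and `gain` are assumed parameters.

```python
import numpy as np

def repulsive_velocity(p_i, neighbor_positions, d0=2.0, gain=1.0):
    """Velocity command from repulsive forces only (no attractive term),
    pushing robot i away from any neighbor closer than d0."""
    v = np.zeros(2)
    p_i = np.asarray(p_i, dtype=float)
    for p_j in neighbor_positions:
        diff = p_i - np.asarray(p_j, dtype=float)
        d = np.linalg.norm(diff)
        if 0.0 < d < d0:
            # Classic repulsive gradient: grows as robots get closer,
            # vanishes smoothly at the influence radius d0.
            v += gain * (1.0 / d - 1.0 / d0) * diff / d**2
    return v
```

Because there is no attractive term, the local-minimum trap of the full potential field method does not arise; the term only disperses nearby robots.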

The examples in the above two paragraphs all considered static obstacles. More methods in this category include the dynamic window [60], the lane curvature method [90] and tangent-graph-based navigation [166]. Dynamic obstacles, including moving and deforming ones, are much harder to handle. Thus, many methods put constraints on the velocity of the obstacles or on how quickly their shapes may change, and the obstacles are usually modeled as rigid bodies with a selected shape. In [166], the characteristic point of each robot with respect to its global geometry is calculated, and the velocity of the obstacles is obtained. A local method with range-only measurements in [121] allowed the shape of the obstacles to vary with time, but it placed a severe limitation on the robots' velocity. Inspired by a binary interaction model, a reactive approach with an integrated representation of the sensed environment was illustrated in [166]. This method requires neither an approximation of the obstacles' shapes nor their velocity information, and it can find a path between obstacles without separating them. In the deployment and complete coverage parts of this report, local navigation is needed. With a grid pattern, collision avoidance is easier. In the beginning, robots only know that their initial positions are within the search area, with no other knowledge about the area. However, while moving, the robots generate maps of the visited areas, so the global method could also be used.

A large percentage of research in coverage, search, formation, and consensus did not consider collision avoidance, such as [11, 170, 132, 131, 128, 129, 130, 133]. Most of these works used simulations to verify the algorithms, and a robot was always modeled as a point without volume. [170, 11] assumed that if more than one robot chose the same vertex, those robots would stay around that vertex, which was treated as a parking lot. However, they did not show how to choose a parking space. Even with a mechanism for choosing a parking space, as the number of robots increases, more robots will choose the same vertex, so the car park model breaks down. Therefore, in this report, robots do not choose simultaneously, and the communication range is designed to be large enough that a choice is known by all robots which may select the same vertex in that step. Although this incurs more communication cost, the robots can avoid collisions while choosing the next step instead of during moving. In [10, 14], robots could move different distances in one step, which resulted in collisions during moving. However, the velocity of the movement was not stated, and the strategy for collision avoidance was not stated clearly. Also, the algorithm proposed in [14] could not match the results of its experiment. Therefore, in this report, robots always move with the unit step length to avoid this problem.
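The sequential-choice idea above can be sketched as follows. This is a simplified sample, not the report's exact rule: robots claim vertices one at a time in a fixed priority order, so no two robots ever pick the same vertex in one step; `priority` and the candidate lists are assumed inputs.

```python
def assign_moves(robot_ids, candidates, priority):
    """Each robot picks its next vertex in priority order; a vertex
    already claimed in this step is skipped, so choices never clash."""
    claimed = set()
    moves = {}
    for r in sorted(robot_ids, key=priority):
        moves[r] = None  # stay at the current vertex if all candidates are taken
        for v in candidates[r]:
            if v not in claimed:
                claimed.add(v)
                moves[r] = v
                break
    return moves
```

In a decentralized setting, the same outcome is obtained when each robot broadcasts its claim to the robots that might contend for the same vertex, which is why the communication range must cover all such robots.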

1.6 Contribution Highlights

The main contributions of the report can be described as follows:

  1. One decentralized algorithm for the complete coverage is proposed for multiple mobile robotic sensors to deploy themselves in both 2D and 3D areas. The detailed descriptions of the coverage algorithms are presented.

  2. The report proposes three decentralized random algorithms for the search task in 2D or 3D areas. They are improved step by step.

  3. The task areas for the four algorithms are arbitrary, unknown, and contain obstacles. For complete coverage in a 2D area, assumptions about a large target sensing range and the curvature of boundaries are given. For the other problems, if the boundary effect is negligible, these two assumptions are not needed.

  4. All algorithms utilize grid patterns to cover the area, and robots move between vertices of a grid. Methods to choose the best grid, namely the grid with the fewest vertices in a 2D area and the fewest cells in a 3D area, are proposed considering the relations between different parameters. In some previous algorithms, the sensing range for obstacles was not considered.

  5. All algorithms consider collision avoidance between robots and between robots and boundaries. Also, realistic constraints of robots, including their volume, are set based on potential test robots. Robots choose the next step in sequence, and they are always synchronized.

  6. Simulations in MATLAB demonstrate the performance of the algorithms. Another complete coverage algorithm and five other algorithms for search tasks are simulated in the same area to show the effectiveness of the proposed algorithms.

  7. The convergence of all algorithms is mathematically proved, which distinguishes them from many heuristic algorithms.

  8. The algorithms are suitable for both 2D and 3D areas. Only slight modifications based on the grid structures are needed, so it is easy to adapt them to the other dimension.

  9. For the three search algorithms, the number of targets can be known or unknown.

  10. Using the simulation results of the second search algorithm, a way to derive the mathematical relation between the search time, the size of the area and the number of robots is given. The effects of other parameters are also discussed.

  11. Experiments on the second search algorithm are conducted on Pioneer 3-DX robots to verify its performance in real situations and to help identify errors of the algorithm.

1.7 Report Outline

The remainder of the report is organized as follows:

Chapter 2 describes the method to choose the best grid pattern to cover the area for the four algorithms, considering the related parameters of the robots. It introduces the basic definitions about the task area, with assumptions for complete coverage and for coverage ignoring the boundary effect. The grid patterns for the potential test robots, the Pioneer 3-DX in 2D and the Bluefin-21 in 3D, are chosen. Chapter 3 provides a decentralized complete coverage algorithm for the self-deployment of multiple robots in both 2D and 3D areas without collisions. The convergence of the algorithm is proved, with simulations showing its efficiency compared with another algorithm. Three distributed search algorithms for 2D and 3D areas are then presented in Chapters 4, 5 and 6, respectively. Rigorous mathematical proofs of the convergence of these algorithms are provided, and they are all simulated in MATLAB with comparisons to five other algorithms to show their effectiveness. In Chapter 4, a decentralized random search algorithm is provided, using the visitation state of the neighbor vertices. The second search algorithm, in Chapter 5, adds the potential field algorithm to the first one to disperse robots and decrease repeated routes. In the 2D area, some factors affecting the search time are analyzed, and a way to find the relation between the search time and other parameters is given. Also, an experiment on Pioneer 3-DX robots is used to check the performance of the algorithm. Chapter 6 adds the breadth-first search algorithm to the algorithm in Chapter 4; it helps robots jump out of the visited area by looking for the nearest unvisited vertex in their local maps. The three proposed search algorithms are also compared with each other to analyze their respective merits and demerits. Chapter 7 concludes the report and provides some possible future research topics to improve or extend the presented work.
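The escape mechanism of the third search algorithm, finding the nearest unvisited vertex via breadth-first search over the robot's local map, can be sketched as below. This is an illustrative sample under assumed data structures (a `neighbors` function returning accessible neighbor vertices and a `visited` set), not the exact procedure of Chapter 6.

```python
from collections import deque

def nearest_unvisited(start, neighbors, visited):
    """Breadth-first search over the local grid map; returns the
    shortest vertex path from `start` to the closest unvisited vertex,
    or None if every reachable vertex has been visited."""
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        v, path = queue.popleft()
        if v not in visited:
            return path  # first unvisited vertex reached is a nearest one
        for nb in neighbors(v):
            if nb not in seen:
                seen.add(nb)
                queue.append((nb, path + [nb]))
    return None
```

Because BFS expands vertices in order of hop distance, the first unvisited vertex dequeued is guaranteed to be one of the nearest, and the returned path stays on the grid so the robot keeps its unit-step motion.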

2.1 Grids for 2D Areas

This section shows that only the T grid can provide 100% complete coverage. It also proposes a general method to select the grid with the fewest vertices in a 2D area, when areas near the boundary can be ignored, by considering the communication range, the target sensing range and the obstacle sensing range of the robots. The rule is then applied to a Pioneer 3-DX robot to determine which grid suits it in a search task.

The tessellation of a plane surface is the tiling of it with one or more geometric shapes, each of which is called a tile. When the same tile is used throughout, three regular tilings are possible: the square, the equilateral triangle, and the hexagon, as seen in Figure 2.1. If only the circular sensing range for targets is considered, this problem is the same as allocating base stations to maximize the coverage area in telecommunications. Based on [156], the T grid should then have the fewest vertices. However, in this report, the communication range and the obstacle sensing range are also discussed, so the result in [156] may no longer apply. Nevertheless, [11, 170, 12, 10, 14] still used the T grid to design their algorithms and, based on this pattern, found the relation between the target sensing range and the communication range; the sensing range for obstacles was set equal to the side length of the tile. In real applications, however, the search or coverage algorithms may be applied to different robots. Thus, the problem needs to be approached in the opposite direction, namely, choosing the grid based on the parameters of the robots. There are two kinds of requirements for selecting the grid: a strict one, as in [170], and a loose one, as in [11, 3, 4, 131]. Next, the assumptions for the different ranges are given, followed by the discussion under these two requirements.

(a) An equilateral triangular grid
(b) A square grid
(c) A regular hexagonal grid
Figure 2.1: Three regular tessellation patterns

2.1.1 Definitions and Assumptions

This section takes advantage of the method of definitions in [225]. Examples to illustrate the relation between ranges are given using a T grid.

Assumption 2.1.1.

The search area is a bounded, connected and Lebesgue measurable set. Robots know nothing about the area initially.

There are robots labeled as to in . For the basic tiling shape, the side length of the shape is represented as and robots can only move the distance along the side in each step from one vertex of the grid to one of the nearest neighbor vertices.

Assumption 2.1.2.

In the 2D area, all the ranges for robots are circular.

Robots have a swing radius . Let the maximum possible error containing odometry errors and measurement errors be . Then a safety radius can be defined to avoid collision.

Definition 2.1.1.

To avoid collisions with other things before a move, a safe distance should include both and (see Figure 2.2). So .

Thus, vertices which are within of the boundaries or obstacles should not be visited. In Figure 2.2, vertex 2 is within of the obstacle, so it cannot be reached. This means the obstacles around vertex 2 need to be detected from , so a robot will not choose to move there. Thus, sensors on the robots should have a large enough sensing range for obstacles, so should satisfy the following assumption, as seen in Figure 2.2 and Figure 2.3.

Figure 2.2: The illustration for , and
Assumption 2.1.3.


The vertices which are within of are called sensing neighbors of .

Robots are equipped with wireless communication tools with the range . Those robots within of are called communication neighbors of . The communication between robots is temporary as in an ad-hoc network so each robot can be considered equal with a local communication range. To avoid having the same choice with other robots in the same step, robots should choose in a certain order and tell the choice to neighbor robots which are one or two vertices away from it. A possible order for search tasks given in Figure 2.3 is that robots on the right side and the top of the current robot should have higher priorities. This order will be explained in Chapter 4 in detail. Considering the error , should satisfy Assumption 2.1.4.

Assumption 2.1.4.


Figure 2.3: , and the selection order in search

Now, the length of a step can be shown as an inequality:


2.1.2 The Strict Requirement

This requires the robots to perform a complete search through the grid. Only a T grid can be used in this circumstance, under Assumptions 2.1.5, 2.1.6 and 2.1.7 below.

Assumption 2.1.5.


This assumption on the target sensing range guarantees that areas around inaccessible vertices can still be detected. For example, in Figure 2.4, the black circles have a radius of and the red line is the boundary. Vertex is less than away from the boundary, so it cannot be visited. But Assumption 2.1.5 guarantees that the area D near it can be detected by robots at vertices , and .

Figure 2.4: Curvature example for the equilateral triangular grid
Assumption 2.1.6.

The curvature of the concave part of the boundary should be smaller than or equal to .

Without this assumption, some parts of the searched area may not be detected, such as section A in Figure 2.4. In that figure, the circles enclose all the sensed area, and sections A, B and C have boundary segments with curvatures greater than . Section A cannot be detected or visited, although sections B and C happen to be detectable. The S grid and the H grid are not used because they admit no such limitation that guarantees a complete coverage. For the S grid, Figure 2.5 demonstrates that a curvature bound would be useless, as there is no vertex near the intersection point of the sensing ranges; thus, a curve of any curvature could pass beyond it and leave a section undetected, like section A. The H grid has the same problem.

Figure 2.5: Example of the curvature in the S grid
Assumption 2.1.7.

Let represent the minimum width of the passages between obstacles or between obstacles and boundaries. Then .

This assumption guarantees that there is always at least one accessible vertex in the path. Otherwise, if the passage is not wide enough, as in Figure 2.6, a robot at will not arrive at since it can only move in a step, but vertices , , and are inaccessible, leaving the section in between undetectable.

Figure 2.6: A narrow path

2.1.3 The Loose Requirement

This only requires that all vertices be sensed, because of an assumption about the boundary effect which can be seen in [221, 224, 3, 4].

Assumption 2.1.8.

The search area is usually very large. So, is much smaller than the length and the width of area . Therefore, the boundary effect is negligible.

This assumption means that there are no targets in the area near the boundary, such as the slashed area under point in Figure 2.5, so robots do not need to sense that area completely, and Assumption 2.1.6 is not needed. Thus, all three kinds of grids can be discussed. Assumptions 2.1.5 and 2.1.7 for and are then made differently based on the structure of each grid pattern. Here, is still used to set the minimum passage width so the passage can be traversed by a robot, but the role of is to ensure that all the sections between accessible vertices are completely sensed, which differs from the strict requirement that all sections be completely sensed. Table 2.1 shows the relation between , , and and the value of represented by and for each grid pattern. The subscript of denotes the name of the corresponding grid.

Table 2.1: and for each grid

To use the fewest vertices in the search area, for the same , one vertex needs to have the fewest neighbor vertices so that the polygon (tile) occupied by each vertex has the maximum area. Also, the maximum should be used. [8] found the grid pattern resulting in the minimum number of vertices using different / for a static coverage problem. This report considers three parameters when choosing the grid pattern, namely , and , which differ depending on the devices used. So, the discussion regarding these three parameters for general situations is based on the maximum from the primary results in Inequality 2.1 and in Table 2.1. In this subsection, the result from Inequality 2.1 is labeled and the result from Table 2.1 is labeled . To simplify the discussion and illustrate only the procedure of the calculation, and , which depend on the selected equipment, are ignored, while collision avoidance is still considered. So the primary results to maximize simplify to , , and . Then the maximum can be written as . If , the area is represented by , which is the same for the three grid patterns as it is a fixed parameter of the chosen robot. Otherwise, , and the area is represented by , which is also the same for the three grids. Table 2.2 shows the area occupied by each vertex and under the different conditions. and have subscripts indicating the type of grid. The data in the table are shown in three colors representing the ranking of the values: red for the large value, orange for the medium value and cyan for the small one. According to this table, if , the T grid should be used to obtain the fewest vertices; if , the S grid should be used; and the H grid should be used when . Although a larger leads to fewer vertices and less search time, it increases and thus shrinks the scope of applicability.
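The "fewest neighbors, largest tile" reasoning can be illustrated numerically. The sketch below assumes, for simplicity, a single common edge length `b` for all three grids (in the report the admissible edge length differs per grid according to Table 2.1); it uses the standard plane area associated with one vertex of each lattice.

```python
import math

def area_per_vertex(b):
    """Plane area associated with one grid vertex for each regular
    tessellation, as a function of a common edge length b (simplified
    assumption; the report's Table 2.2 uses per-grid edge lengths)."""
    return {
        "T": math.sqrt(3) / 2 * b**2,     # triangular grid: 6 neighbors per vertex
        "S": b**2,                        # square grid: 4 neighbors per vertex
        "H": 3 * math.sqrt(3) / 4 * b**2  # hexagonal grid: 3 neighbors per vertex
    }
```

For equal `b`, the H grid's vertices each account for the largest area (about 1.30 b² versus 1.00 b² and 0.87 b²), showing why fewer neighbors per vertex means fewer vertices overall; the actual winner in Table 2.2 depends on which range constraint caps `b` for each grid.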

Table 2.2: and the area occupied by each polygon

2.1.4 The Choice for the Pioneer 3-DX Robot

The Pioneer 3-DX robot in Figure 2.7 is widely used in research, and the second search algorithm in this report also uses it to verify the performance. Therefore, this robot is introduced, and a suitable grid pattern for it is selected. Based on the above discussion, for a complete coverage, a T grid must be used, so only the grid pattern under the loose requirement needs to be chosen based on the parameters of this robot. The robot has 16 sonars arranged in the ring shown in yellow in the figure. The layout of the 8 sonars in the front can be seen in Figure 2.8; another 8 sonars are placed symmetrically at the rear, so the robot can sense the environment through 360 degrees. The sonar is used to sense obstacles with . The target is sensed by the laser, which is the cyan part in Figure 2.7. The laser model is the SICK-200, which has over a range of 180 degrees, as illustrated in Figure 2.9. To detect 360 degrees, the robot needs to turn around after the detection of one side is finished. For communication, the robot uses a Cisco Aironet 2.4 GHz Articulated Dipole Antenna AIR-ANT4941 with at 1 Mbps, typically in an indoor environment. In the calculations, the equality conditions of the inequalities are used. Therefore, the maximum is . As , the H grid should be used. Based on Table 2.2, this belongs to the last (7th) situation. From Table 2.1, m, so the side length of the H grid is m. Then m.

Figure 2.7: The Pioneer 3-DX robot
Figure 2.8: Layout of 8 sonars in the front
Figure 2.9: Range of the laser SICK-200

2.2 Grids for 3D Areas

This section proposes a general method to select the grid pattern that covers a 3D area with the fewest vertices. Different relations between the communication range, the target sensing range, and the obstacle sensing range are discussed in a table. Then the suitable grid for the UUV Bluefin-21 is chosen using that table.

Tessellation of a 3D space is called a honeycomb, which fills the space with cells. As in the 2D case, identical cells are used to simplify the representation of the area and the design of the control algorithm. Honeycombs are more complicated than tessellations, as there is no proven result on which grid pattern uses the fewest cells to cover a space when the same sensing range is employed in all grid patterns. However, based on similar problems such as Kepler's conjecture and Kelvin's conjecture [3, 4], possible polyhedrons include the C grid, the HP grid, the RD grid and the TO grid, as seen in Figure 2.10. [3, 4] also discussed which grid should be chosen under different /. [129] then chose the TO grid without giving the communication range. However, is enough to connect all vertices, and when , an HP grid should be used. [131, 132, 128, 130, 133] then considered the situation where . However, as collision avoidance was not discussed in detail, these papers did not have . Moreover, they considered the question of designing the best grid pattern to cover the area and setting the ranges based on this grid. In reality, however, the search or coverage algorithms should be applicable to different robots carrying devices with various ranges. So this chapter starts from the range-related parameters of the robots to find the corresponding suitable grid pattern. Unlike the 2D problem, this section only discusses grid selection under the loose requirement.

(a) cubic
(b) hexagonal prismatic
(c) rhombic dodecahedral
(d) truncated octahedral
Figure 2.10: Four uniform honeycombs

2.2.1 Definitions and Assumptions

This section will take advantage of the method of definitions in [221]. Examples to illustrate the relations between ranges are given in a C grid.

Assumption 2.2.1.

A three-dimensional area is a bounded, connected and Lebesgue measurable set.

There are robots labeled as to in .

Definition 2.2.1.

If there are edges of different lengths in the grid and robots can visit all vertices through various sets of edge lengths, then the maximum length of each set is compared with the others, and the smallest one is chosen as the edge length . Robots then move the distance straight from one vertex to an accessible neighbor vertex in one step.
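This min-of-maxima rule can be written directly as code. A small sketch: each candidate set lists the edge lengths a robot would have to traverse to reach every vertex; the step length is the smallest of the sets' longest edges.

```python
def grid_step_length(edge_sets):
    """Definition 2.2.1 as code: for each set of edge lengths that lets
    a robot visit all vertices, take its longest edge; then choose the
    smallest of those maxima as the grid step length."""
    return min(max(s) for s in edge_sets)
```

For instance, a TO grid has two edge kinds, but the shorter kind alone already connects all vertices, so the set containing only the shorter edge wins and the step length equals the shorter edge, as stated in Section 2.2.2.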

3D grids based on a uniform honeycomb are used, and the center of the polyhedron is the vertex of the grid. Then the grid is built up by connecting the vertices to its nearest surrounding neighbor vertices.

Assumption 2.2.2.

All sensors and communication equipment on the robots have spherical ranges. Any range described by a radius refers to a spherical radius .

Let the swing radius of the robots be . Then the total error of the robots is represented as , and the safety radius is . We then have a definition similar to Definition 2.1.1 and two assumptions for the obstacle sensing range and the communication range similar to Assumptions 2.1.3 and 2.1.4. The only difference is that the ranges in these definitions are spherical instead of circular.

Definition 2.2.2.

so that collisions with other things can be avoided by path planning before a move (see Figure 2.11).

Assumption 2.2.3.


Assumption 2.2.4.


Figure 2.11: and the order of selection in search.

So, the length of a step is


Vertices which are within of are called sensing neighbors of . Robots within of are called communication neighbors of . The communication in the 3D space also uses the ad-hoc network with an omnidirectional antenna to get the spherical coverage.

2.2.2 The Loose Requirement

Under the loose requirement, the area between vertices must be fully covered, while the area between the outer sensing spheres and the boundaries can be ignored, as the following assumption is used.

Assumption 2.2.5.

The search task for a team of robots usually has a vast area. So, is much smaller than the length, the width and the height of area . Therefore, the boundary effect is negligible [3, 4].

When selecting the type of polyhedron, the number of polyhedrons needed to fill the 3D space should be minimized, so the maximum needs to be used. For the same , one vertex needs to have the fewest sensing neighbors to allow each polyhedron to have the maximum volume. For the HP cell, there are two kinds of edges connecting vertices, and robots must move along both to visit all vertices, so equals the longer one. For the TO cell, there are also two kinds of edges, but robots can visit all vertices through the shorter one only, so equals the shorter one to maximize the volume of the polyhedron. The relations between and and are listed in Table 2.3.

Table 2.3: for each 3D grid

In the 3D space, the three parameters of the robots, namely , and , are still considered in the discussion to find the grid with the fewest vertices in different situations. Thus, the results in [3, 4], which considered and only for a similar problem, cannot be used. The discussion in this section uses the maximum from the primary results in Section 2.2.1 and from Table 2.3, represented here as and . Similarly, and are ignored to simplify the discussion and to illustrate only the procedure of calculation; these two parameters are affected by the accuracy of the equipment and the disturbances in the environment. However, collision avoidance is still considered in the algorithm design. The primary results to maximize are , , , and . Thus, the maximum is . To represent the volume occupied by each polyhedral cell, the same method as in 2D is used. So when , the area is represented by , which is a fixed parameter of the selected robot. Otherwise, , and the area is represented by , which is calculated using the other fixed parameter . Therefore, the values in the table are comparable across the four grid patterns. Table 2.4 only shows the volume , with a subscript describing the shape of the polyhedron occupied by each vertex. Colored values are again used in the table to show the comparison results, with red for the large value, orange for the medium value and cyan for the small one. The table shows that the C grid should be chosen when , and the HP or RD grid should be employed when . The TO grid is used when and to obtain the fewest vertices.
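The volume comparison behind Table 2.4 can be illustrated with standard cell-volume formulas. The sketch below assumes, for simplicity, one common edge length `a` for all four cells (in the report the admissible edge length differs per grid according to the range constraints) and a regular hexagonal prism of height `a`.

```python
import math

def cell_volumes(a):
    """Volume of each candidate honeycomb cell for a common edge
    length a (simplified assumption, for illustration only)."""
    return {
        "C":  a**3,                         # cube
        "HP": 3 * math.sqrt(3) / 2 * a**3,  # regular hexagonal prism, height a
        "RD": 16 * math.sqrt(3) / 9 * a**3, # rhombic dodecahedron
        "TO": 8 * math.sqrt(2) * a**3,      # truncated octahedron
    }
```

For equal `a`, the TO cell is by far the largest (about 11.3 a³), consistent with its use when the range constraints allow a long step; which cell actually minimizes the vertex count in Table 2.4 depends on how the ranges cap the edge length of each grid.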

Table 2.4: The area occupied by each cell

2.2.3 The Choice for the Bluefin-21 Robot

The Bluefin-21 is a UUV (see Figure 2.12) which carries multiple sensors and payloads for tasks such as search, salvage and mine countermeasures [21]. It is well known for its role in the search for MH370, which also inspired the research in this report. In the search task, the sensor for the underwater locator beacon has a target sensing range of at least km in normal conditions [86]. The sensor for obstacles is the EdgeTech 2200-M side scan sonar. It has different sensing ranges at different frequencies and resolutions; to guarantee successful obstacle avoidance, the minimum one, m, is used to ensure accuracy. For local communication, there is no information in the datasheet about the equipment used or its communication range, so the parameter of the 2D robot is used, namely Wi-Fi communication with m. In the calculations, all the equality conditions of the inequalities are used. Thus, based on Inequality 2.2, the maximum is m. As , the first column of Table 2.4 is used and the C grid is chosen. Then m, and the corresponding assumption for the passage width is given.

Figure 2.12: Bluefin-21[21]
Assumption 2.2.6.

All passages between obstacles or between obstacles and borders have a cross-section which can include a circle with a radius of . This can be seen in Figure 2.13.

So the theoretical value without is m. This guarantees that there is always at least one accessible vertex in the cross-section, as illustrated in Figure 2.13. In the figure, a robot moving from a blue vertex (x) to a red vertex (+) must go through a black vertex (o), as the robot can only move in a step. Although decreasing would obviously expand the scope of applicability, the number of vertices would increase and the search time would increase with it.

Figure 2.13: Passage width for a 2D area

2.3 Summary

In summary, this chapter proposed that a grid pattern should be used as the final locations to deploy the Wireless Mobile Sensor Network and as the representation of the area for both the search and complete coverage tasks. Three regular tessellations in a 2D field and four uniform honeycombs in a 3D area were considered. Table 2.2 for 2D and Table 2.4 for 3D were then provided as the selection criteria for different relations between the communication range, the target sensing range and the obstacle sensing range. Typical robots, namely the Pioneer 3-DX in 2D and the Bluefin-21 in 3D, were then used to exemplify the selection procedure. Based on the results, later simulations and experiments will mostly use the T grid in 2D tasks and the C grid in 3D tasks.

3.1 Complete Coverage in 2D

This section provides a decentralized collision-free algorithm to deploy multiple robotic sensors in a task area with obstacles to achieve 100% complete coverage using a T grid pattern based on Chapter 2. Thus, intruders entering the covered area can be detected. This section adopts the definitions and assumptions from [11, 225, 168]. Some assumptions and definitions about the area, the ranges, and the curvature under the strict requirement were stated in Section 2.1.1, so they are used directly without further explanation. The original work of this section comes from [225].
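As context for the T grid used throughout this section, the following is a minimal sketch (our own helper, not the report's code): on an equilateral triangular grid, each vertex has six nearest neighbors at 60° increments, all exactly one side length away, which is why every move in the algorithm is a straight translation of fixed length.

```python
import math

# Hypothetical helper: the six nearest neighbors of a vertex (x, y) on an
# equilateral triangular (T) grid with side length `side`. Each neighbor is
# reachable by one straight move of length `side`.
def t_grid_neighbors(x, y, side=1.0):
    return [(x + side * math.cos(math.radians(60 * k)),
             y + side * math.sin(math.radians(60 * k)))
            for k in range(6)]
```

The function name and signature are illustrative; the report itself only assumes such a neighborhood exists.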

3.1.1 Problem Statement

Assumption 3.1.1.

There are a finite number of obstacles , , , and the obstacles are non-overlapping, closed, bounded and linearly connected for any . The obstacles are static, arbitrary, and unknown to the robots a priori.

Definition 3.1.1.

Let for all . Then represents the area that needs to be covered. Area can be static or expanding.

In each move, a robot translates in a straight line from a vertex of the grid, represented as , to the selected vertex, which is one of its nearest neighbors. Thus, the distance moved is the side length of the basic grid pattern. In , the set of accessible vertices is denoted by . If is used to represent the number of elements in a set, represents the number of accessible vertices. robots with the same sensing and movement abilities are released from initially known vertices to cover . For the optimal deployment, let represent the number of steps needed. For the complete coverage, the number of robots prepared needs to satisfy

Assumption 3.1.2.


The set containing the sensing neighbors of at discrete time is denoted as , and its th element is at position . denotes the set containing the communication neighbors at time . As all robots are equal, an ad-hoc network is used for communication between and the robots in . These local networks are temporary and need to be rebuilt when robots arrive at new vertices. In future tests on robots, Wi-Fi networks in ad-hoc mode will be used with Transmission Control Protocol (TCP) socket communication.
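The TCP exchange described above can be sketched as follows. This is purely illustrative, not the authors' implementation: the JSON message format and the function names are our assumptions; the point is only that one robot serves its chosen vertex and a neighbor fetches it over a short-lived socket connection, matching the temporary per-loop networks described in the text.

```python
import json
import socket
import threading

# Illustrative sketch (not from the report): a robot shares its chosen vertex
# with a communication neighbor over a TCP socket, as proposed for the Wi-Fi
# ad-hoc tests. The JSON message schema is an assumption.
def serve_choice(server_sock, choice):
    """Accept one connection and send this robot's choice as JSON."""
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall(json.dumps(choice).encode("utf-8"))

def fetch_choice(host, port):
    """Connect to a neighbor and receive its choice."""
    with socket.create_connection((host, port)) as s:
        return json.loads(s.recv(1024).decode("utf-8"))
```

In a real deployment each robot would rebuild such connections after every movement step, as the paragraph above notes.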

At time , the chosen position of is . A reference position is selected from the initial positions. Some definitions about distances are provided below.

Definition 3.1.2.

The distance from to is denoted as . Similarly, the distance from an accessible neighbor vertex to is , and the maximum distance over all is denoted as , with sensing neighbors attaining this value. A random choice from those vertices is represented by .
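The maximum-distance-with-random-tie-break rule in this definition can be sketched as follows, using our own symbols (the report's are elided here): among the accessible neighbor vertices, find the maximum distance to the reference point and pick one of the farthest vertices uniformly at random.

```python
import math
import random

# Sketch of Definition 3.1.2 with illustrative names: compute the maximum
# distance from the reference point p0 over the accessible neighbor vertices,
# then choose uniformly at random among the vertices attaining it.
def farthest_choice(neighbors, p0, rng=random):
    dists = [math.dist(v, p0) for v in neighbors]
    d_max = max(dists)
    ties = [v for v, d in zip(neighbors, dists) if math.isclose(d, d_max)]
    return d_max, rng.choice(ties)
```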

Assumption 3.1.3.

3.1.2 Algorithm

The algorithm includes two parts. The first part allocates robots to a common T grid with a common coordinate system. In the second part, a decentralized collision-free algorithm is designed to deploy the robots using the grid.

Initial Allocation

Before deployment, the methods in [128, 168] could only build a common grid with small errors in certain situations, and one vertex could hold more than one robot, which means collisions were allowed. To compare with those algorithms and to design a practical initialization with a common coordinate system, this chapter manually selects a few vertices of a grid near the known entrance of the area as the initial positions to allocate robots and uses one of them as . The initial positions and are set to places where robots have more choices in the first step, based on the limited knowledge of the area after applying the algorithm, so that robots can leave the initial vertices earlier without blocking each other. To allocate multiple robots at the same point in different steps, a multi-level rotary parking system, as in Figure 3.1 from [141], could be used: when a robot moves away from a vertex, the empty parking place rotates up and another robot is rotated down to the same start position. Another possible method is to release robots using a rocket launcher, such as the tubes employed in the LOCUST project for the Coyote unmanned aircraft system [1] in Figure 3.2. As the relative positions of the tubes are fixed, the position of the robot in one tube can be set as the reference, allowing all robots to share the same coordinate system and the same grid.

Figure 3.1: The multilevel rotary parking system
Figure 3.2: The launcher for multiple UAVs in the LOCUST project

Distributed Algorithm

The distributed algorithm for deployment is designed based on the search algorithms in [223, 225, 222, 221]. The flow chart of the algorithm is shown in Figure 3.3, in which the abbreviation ‘comm’ is short for communication. Steps 6 and 7 only happen when the area is static. If the area expands, the algorithm only includes the loop from Step 1 to Step 5. In both situations, robots sense intruders continuously and broadcast the positions of intruders when they are found. However, for a static area, intruder detection may only be required after the deployment phase stops in Step 7 in some applications.

Figure 3.3: The flow chart for the 2D complete coverage

  1. Comm1, decide sequence: A robot calculates at the beginning of each loop and exchanges and with its communication neighbors. The one with the larger then has the higher priority. If more than one robot has the same , another parameter of the robots needs to be used to decide the selection sequence. As only addresses, such as IP addresses, differ between robots, this chapter lets the robot with the lower IP address choose with the higher priority; if the one with the higher address chose first instead, the result would be equivalent.

  2. Find robot obstacles: The positions of other robots are known, so robots within of are treated as obstacles, and the occupied vertices are inaccessible for . The vertices used to release robots are always considered obstacles, so robots will not choose them as the next step and new robots can be added to the area.

  3. Wait, sense obstacles: A robot needs to wait for the choices of the robots with higher priorities, because the positions of those choices can be treated as detected points in the next step and may also be obstacles for . If all robots are connected, the last one in the choosing sequence needs to wait for every other robot, and the processing time for the whole selection process should be set to the value in this condition: it is the longest time, and using it all robots can be synchronized in each loop.

    In its turn, detects the objects in but ignores the sections occupied by existing neighbor robots. The way to judge whether an object is an obstacle is illustrated in Figure 3.4. The general rule is that if any sensed object lies on the way to a vertex in , that object is treated as an obstacle, so that vertex cannot be reached and is removed from . In Figure 3.4, is the distance between and the left arc, where . represents the distance from the sensed object to the position of , . denotes the distance between and , and denotes the distance from to the nearest edge of the grid. Let the angle between the facing direction of and the segment connecting and be . Because of the safe distance , when a robot goes straight to , it needs a corresponding safe region, shown as the light brown part in the figure. When an object is detected, and are known, so the coordinates of that object can be calculated. Thus, and can be determined. If , point is an obstacle. If no objects are sensed, the robot still waits for the same processing time so that all robots remain synchronized in this step.

    Figure 3.4: Judge obstacles
  4. Choose comm2: calculates for its sensing neighbors and randomly chooses one vertex from the neighbor vertices which could result in . If there are no accessible vertices to visit for , namely , stays at its current position. Mathematically, this can be written as:


    Then, the choice is sent to robots with lower priorities. For the static area, a robot also records and sends the IP addresses of robots which will move in the current step. After communication, the robot still waits until the time for choices has passed. However, robots have no knowledge of the number of robots in use at time . Thus, the total time for making a choice is equal to the time for the selection and communication of one robot multiplied by the total number of robots prepared for deployment. In this way, robots can be synchronized.

  5. Move: All robots move together, and those which do not move wait for the movement time to stay synchronized. The movement includes rotating the head of the robot to aim at the target vertex and translating to that vertex. All robots should move in the same pattern at the same time, and the way they move should match realistic constraints.

    Each movement step consists of two phases: phase I is a pure rotation and phase II is a pure translation. The rotation aligns the robot with the target vertex. In particular, if was already facing its next vertex before phase I, it does not rotate but waits for the rotation time. Then can go straight along the selected edge. Many robots use the nonholonomic model with standard hard constraints on velocity and acceleration. This report also considers the constraints of real robots, so both phases use trapezoidal velocity profiles based on the parameters of the area and the test robots. In this section, for 2D areas, Pioneer 3-DX robots are used, and the parameters are shown in the simulations (Section 3.1.3).

  6. Do all stay?: Based on the sequence of choice, the robot at is the last to choose the next vertex. If no IP addresses of robots moving to new vertices are received, and the choice of the robot at is also to stay at its current position, namely for all robots, then the self-deployment should be stopped. Otherwise, all robots execute Step 5.

  7. Stop: The stop procedure depends on the application. In this chapter, robots are designed to detect intruders into the area, so the robot at could broadcast the stop signal, upon which all robots stop moving and sensing obstacles but keep the sensors for target intruders on. If an intruder is found, robots broadcast the position of the intruder, and other corresponding actions can be taken. In particular, under the strict requirement, if the sensing range for the target is changeable, the sensors which cannot sense the boundaries could decrease their sensing range to the minimum value that meets the loose requirement, namely . Thus, the energy of those sensors is saved, and they can monitor the area for a longer time.
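The obstacle judgment used during the wait-and-sense step reduces to a geometric test. The following is a hedged sketch with our own symbols (the report's exact quantities are elided above): an object makes a vertex unreachable if it lies within the safe half-width of the straight segment from the robot's position to that vertex, which is one way to realize the light-brown safe region of Figure 3.4.

```python
import math

# Illustrative corridor test (our formulation, not the report's exact rule):
# an object at `obj` blocks the straight move from position `p` to vertex `v`
# if it lies within distance `r_safe` of the segment p -> v.
def blocks_path(p, v, obj, r_safe):
    px, py = p
    vx, vy = v
    ox, oy = obj
    dx, dy = vx - px, vy - py
    seg_len2 = dx * dx + dy * dy
    # Parameter of the closest point on the segment, clamped to [0, 1].
    t = max(0.0, min(1.0, ((ox - px) * dx + (oy - py) * dy) / seg_len2))
    cx, cy = px + t * dx, py + t * dy
    return math.hypot(ox - cx, oy - cy) <= r_safe
```

A vertex for which any sensed object passes this test would be removed from the accessible set, mirroring the rule stated above.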

Remark 3.1.1.

The broadcast is only used for a few vital pieces of information, including the stop signal and the positions of intruders, so it will not consume too much energy. If only local communication can be used, the stop signal can still be transmitted to all robots in the time slot for comm1, and the receivers will perform all the actions in Step 7 directly, except broadcasting the stop signal.

By applying the above algorithm, robots which are further away from have higher priority to choose to go to the furthest accessible vertices, which leaves space for newly added robots near . If there are enough available choices, robots can be added in each loop and an optimal deployment with steps can be reached. Thus, the equation for can be written as:


In the above formula, is the ceiling function, because the result of the division may be fractional. A result of less than one step means that not all robots need to move in that step, but it must still be completed in one full step.

Theorem 3.1.1.

For a static area, suppose all the assumptions in Sections 2.1.1 and 3.1.1 hold for the loose requirement or the strict requirement, and Algorithm 3.1 with the judgment step is used. Then, with probability 1, there exists a time such that for any vertex , the relationship holds for some and all .


The proposed Algorithm 3.1 and the judgment method in Step 6 form an absorbing Markov chain with many transient states and absorbing states. The transient states are the states in which robots have accessible sensing neighbors to choose from. The absorbing states are the states in which all robots have no sensing neighbors to go to and stay at their current vertices forever, namely the states that they cannot leave. The proposed algorithm continuously adds robots to the area to occupy every vertex, which enables robots to reach the absorbing states from any initial states with any . This implies the absorbing state will be achieved with probability 1, which completes the proof of Theorem 3.1.1. ∎
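The absorbing-chain argument can be illustrated numerically. The following sketch is ours, not the report's: with one transient and one absorbing state, any positive per-step absorption probability p leaves a probability of only (1 - p)^n of still being transient after n steps, which vanishes as n grows, so absorption occurs with probability 1.

```python
# Numerical illustration (ours) of the absorbing Markov chain argument:
# a two-state chain with one transient state T and one absorbing state A.
# If the per-step absorption probability is p > 0, the probability of
# having been absorbed within n steps is 1 - (1 - p)**n, which tends to 1.
def prob_absorbed_by(n, p):
    """Probability of reaching the absorbing state within n steps."""
    return 1.0 - (1.0 - p) ** n
```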

3.1.3 Simulation Results

The proposed algorithm for the complete coverage is simulated in MATLAB 2016a in an area with 20*20 vertices and , as seen in Figure 3.5. are chosen to verify the results, and the known initial vertices at the bottom right corner are used to release robots. The numbers in the graph indicate the order in which vertices are added, and vertex 1 is the reference vertex at . In this arrangement, each robot has vacant vertices to go to in the first loop without blocking the others.

The parameters in the simulation are designed based on the parameters of the Pioneer 3-DX robots in Figure 2.7 and the assumptions satisfying the strict requirement. Table 3.1 shows the parameters in the simulation, which are designed based on the datasheet of the Pioneer 3-DX robot. Rows P, R, and S give the names of the parameters, the parameters for the robots and the parameters in the simulations, respectively. The underlined column names refer to the values known for the robots. and need to be estimated from the experiment. Then and can be calculated in turn based on the assumptions in Section 2.1.1 and . In the calculations, the equality conditions of the inequalities are used. In the experiment, robots drift at most 0.08m to the left or right after moving 8 meters forward or backward, so is set to 0.09m and m. In an experiment with m and m/s, the moving time is roughly estimated as 5.75 seconds: 4.5s for the translation and for the rotation, using a trapezoidal velocity profile and the maximum rotation speed /s of the robot. In the simulations, and from the robot are used, along with , and from the experiment. Although areas of different sizes will be simulated, for simplicity the error in the simulation is always set as , so . The other values are then calculated based on the assumptions in Section 2.1.1.

R 0.26m 0.35m 5m 32m 4.65m
S 0.26 0.35 1.35 1.35 1
R 91.4m 1.5m/s /s 5.75s 5.35m
S 2.09 0.3/s 300 5.75s 1.7
Table 3.1: Parameters of robots and simulation
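The trapezoidal-profile travel time used in the estimate above can be sketched as follows. This is our formulation, and the arguments are illustrative rather than the Pioneer datasheet values: the robot accelerates at a constant rate to at most its maximum velocity, cruises, then decelerates symmetrically; if the distance is too short to reach the maximum velocity, the profile degenerates to a triangle.

```python
import math

# Sketch (ours) of travel time under a trapezoidal velocity profile:
# accelerate at `a` up to `v_max`, cruise, then decelerate symmetrically
# over total distance `d`.
def trapezoid_time(d, v_max, a):
    d_ramp = v_max * v_max / a        # distance consumed by accel + decel
    if d >= d_ramp:                   # full trapezoid: a cruise phase exists
        return d / v_max + v_max / a
    return 2.0 * math.sqrt(d / a)     # triangular profile: v_max not reached
```

The same formula applies to the rotation phase with angular quantities substituted for linear ones.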

An example of the progress for is shown in Figures 3.6, 3.7 and 3.8 for , and respectively, and the self-deployment finished in 86 loops. The graphs show that robots tended to go to vertices far from and left space for newly added robots, which made the deployment fast. In Figure 3.8, the circles represent the of each robot, and every point of the task area was covered at the end.

The proposed algorithm is compared with a modified version of the DR algorithm in [170, 131]. In the modification, the IP addresses of the robots are first used as order numbers to set the sequence of choice: for and its communication neighbors, the robot with the lowest address chooses first. Each robot then needs to inform its communication neighbors with lower priorities about its choice in this loop.

Figure 3.5: The task area and the initial setting
Figure 3.6: Positions of robots when
Figure 3.7: Positions of robots when
Figure 3.8: Positions of robots when
Figure 3.9: An example of the narrow path with

Table 3.2 shows the calculated using Formula 3.2, the average results of 100 simulations for the proposed algorithm, and the average results of 100 simulations for the DR algorithm. It also shows the ratio of the steps in the proposed algorithm to those in the DR algorithm. The proposed algorithm is always better than the DR algorithm, and its advantage is clearer when is larger. Compared with , the simulation result of the proposed algorithm is equal to it when . However, for or 7, the simulation results are slightly larger, but by no more than 1 step. This means that in some loops fewer than robots were added, because fewer than robots found vacant vertices to go to. This can be caused by narrow vacant areas, as in the example in Figure 3.9 where and . From Formula 3.2, ; however, the narrow path only allows one robot to be added in each loop, so robots need four loops to finish the complete coverage. Another cause is that may choose a whose is smaller than because other robots within of have higher priorities and have already chosen their next steps. Thus, is vacant in step , and the movement of is towards the reference point, so it does not free a vacant vertex near the initial vertices for adding a new robot.

Type of Results
3 5 7
86 52 37
Proposed algorithm 86 52.06 37.96
DR algorithm 310.39 273.74 254.92
Proposed/DR 27.7% 19.0% 14.9%
Table 3.2: Steps from calculations and simulations with

To test the scalability of the algorithm, areas with the same shape but different sizes are used with . Areas with 10*10 vertices and 30*30 vertices are used with and respectively. The average results of the two algorithms, the ratio between them, and are shown in Table 3.3. The results demonstrate that the proposed algorithm is scalable and effective, and its advantage is clearer when the area is larger. The steps for the areas with 20*20 and 30*30 vertices are optimal, but the steps for the area with 10*10 vertices are larger than because the shrunk area has more narrow paths, as discussed for Figure 3.9.

Type of Results Size of the Area
10*10 20*20 30*30
19 86 200
Proposed algorithm 23.96 86 200
DR algorithm 70.62 310.39 766.71
Proposed/DR 36.0% 27.7% 26.1%
Table 3.3: Steps from calculations and simulations for different areas with

3.1.4 Section Summary

This section provided a decentralized algorithm for multiple robots to build a robotic sensor network that completely covers an unknown task area without collisions, using an equilateral triangular grid pattern. The convergence of the proposed algorithm was mathematically proved with probability 1. The results of this algorithm were very close to the least number of steps and equal to it in some situations, so the algorithm is optimal under the limitations of the area. Simulation results demonstrated that the proposed algorithm is effective compared with another decentralized algorithm and that it is scalable to areas of different sizes.

3.2 Complete Coverage in 3D

This section solves the complete coverage problem in a 3D area by using a decentralized collision-free algorithm for robots to form a coverage network. Based on Chapter 2, the C grid is used in this section for the Bluefin-21 robot. The assumptions and definitions for the area and the ranges from Section 2.2.1 are used directly. The others are similar to those in Section 3.1.1, except that the ranges are spherical and the shapes of objects are three-dimensional, based on [224]. However, these assumptions and definitions are still stated here to make the explanation and the proof clear, and they will be used in later chapters.

3.2.1 Problem Statement

Assumption 3.2.1.

Area A has a finite number of three-dimensional obstacles , , , and all these obstacles are non-overlapping, closed, bounded and linearly connected for any . The obstacles are unknown to the robots.

Definition 3.2.1.

Let for all . Then the 3D area that needs to be covered can be written as . Area can be either static or expanding.

In each step, robots move from one vertex to an accessible nearest neighbor vertex along an edge of the grid. The number of steps needed in the optimal situation for the complete coverage is denoted as . The set of accessible vertices in is denoted by , and the number of members in is .
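For the C (cubic) grid used in this section, the neighborhood structure can be sketched as follows (our own helper, with illustrative names): each vertex has six nearest neighbors, one grid step along each axis, so every move is again a straight translation of fixed length.

```python
import math

# Hypothetical helper: the six nearest neighbors of a vertex (x, y, z) on a
# cubic (C) grid with side length `side`, one step along each axis.
def c_grid_neighbors(x, y, z, side=1.0):
    offsets = [(side, 0, 0), (-side, 0, 0), (0, side, 0),
               (0, -side, 0), (0, 0, side), (0, 0, -side)]
    return [(x + dx, y + dy, z + dz) for dx, dy, dz in offsets]
```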

Assumption 3.2.2.

To occupy every vertex in , . Robots are fed into the area from initial vertices near the boundary.

The set of sensing neighbors of at discrete time (step ) is denoted as , in which the position of the th member is . The set of communication neighbors of at time is denoted as . The communication in 3D also employs an ad-hoc network, which is rebuilt after each robot reaches its next vertex.

The position of at time is . A reference position which belongs to the initial positions is set. Then some definitions about distances are given.

Definition 3.2.2.

The distance from to is represented by , and the distance from to is . The maximum is denoted by , with neighbor vertices resulting in this value. The position of the choice from those vertices is represented by .

To allow robots to go through each passage to reach each vertex, the assumption for passage width is needed.

Assumption 3.2.3.


3.2.2 Algorithm

The algorithm starts with the method to set the initial locations of the robots, followed by the decentralized self-deployment algorithm with collision avoidance to form the complete coverage network to detect intruders.

Initial Allocation

The initialization in a 3D area is similar to that in a 2D area. nearby vertices of a grid near the known entrance of the area are selected as the initial positions to allocate robots, and one of them is set as . The chosen points should allow robots to leave the initial positions without blocking each other, and it is better to give each robot more choices. In practice, the methods in Figures 3.1 and 3.2 can be used. For marine applications, ships or submarines could carry the robots and stop at the selected places to launch the UUVs.

Distributed Algorithm

The 3D algorithm is the same as that in the 2D area, so the flowchart in Figure 3.3 still applies, and Algorithm 3.1 and Equation 3.2 are used here directly.


To apply them to the 3D area, the user needs to consider all parameters in the 3D setting, such as the distances in obstacle detection and the priority setting. The 2D algorithm then handles the 3D task successfully.
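The point above can be made concrete: if the selection rule is written dimension-agnostically, the same code serves both the 2D and the 3D algorithm. The helper below is an illustrative sketch, not the report's implementation.

```python
import math

# Sketch (ours): a dimension-agnostic farthest-vertex selection. Because
# math.dist accepts points of any dimension, the same rule works for the
# T grid in 2D and the C grid in 3D without modification.
def farthest_vertex(vertices, ref):
    """Pick the accessible vertex farthest from the reference point."""
    return max(vertices, key=lambda v: math.dist(v, ref))
```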

3.2.3 Simulation Results

The proposed algorithm is initially simulated in MATLAB 2016a for a static area with 7*7*7 vertices and , shown in Figure 3.10. In the simulation, , 5, 7, 9, 11 and 13 are used, with the initial vertices at the bottom layer of the task area. In the figure, the numbers indicate the order in which vertices are added, and the position of vertex 1 is selected as . In this setting, all robots have accessible sensing neighbors to go to in loop one.

The parameters in the simulation are based on the Bluefin-21 underwater vehicle, which was used in the search for MH370; details can be found in [21]. The data for the Bluefin-21 and the simulation are shown in Table 3.4, where P, R, and S stand for the parameter names, the parameters for the robots and the parameters for the simulations. The underlined parameter names mean that those robot parameters come from the datasheet. As is unknown in the datasheet, cannot be estimated. The length of the robot is 4.93m and the diameter is 0.53m, so m is set. is calculated based on . In the datasheet, the real-time accuracy is given relative to the distance traveled, so can be set using the distance traveled in the simulations. Then and are calculated based on and the assumptions in Section 2.2.1. The simulations use the data from the robot directly. Then is set to avoid collisions while robots are moving. To find of the robot, the maximum travel distance in the simulations in this section is used, which is 155.885 as the body diagonal of the area with 8*8*8 vertices used in the comparison. So the corresponding and . The other parameters in the simulations can then be calculated, and m is set in the parameters for the robot. The rotation speed of this UUV is unknown, and there are no Bluefin-21 robots to test in the author's lab; thus, the simulation results will not be compared with experimental results, and guessing an angular speed and a total move time would be meaningless. So in the simulations, is still set as 5.75s, with 4.5s for translation and 1.25s for rotation, to show the effect of the algorithm, although this is likely smaller than the actual value.

An example of the procedure with is shown in Figures 3.11, 3.12 and 3.13 for , and respectively, and the deployment finished in 44 steps. The graphs show that robots occupy the sections far from first; thus, vertices near the initial positions remain vacant for adding new robots, and the self-deployment result can be close to the optimal result. In Figure 3.13, the light grey spheres show the for each vertex. The result shows that only the corners near the boundaries of the area are not covered. Based on Assumption 2.2.5, those corners are ignored, and the complete coverage of the area is achieved.

Figure 3.10: The task area and the initial setting