1 Introduction
Monitoring a given set of locations over a long period of time has many applications, ranging from infrastructure inspection and data collection to surveillance for public or private safety. Technological advances have opened up the possibility to perform these tasks using autonomous robots. Deploying the robots in the most efficient manner is not easy, however, and gives rise to interesting algorithmic challenges. This is especially true when multiple robots work together in a team to perform the task.
We study the problem of finding a patrol schedule for a collection of $k$ robots that together monitor a given set of $n$ sites in a metric space, where $k$ is a fixed parameter. Each robot has the same maximum speed—from now on assumed to be unit speed—and each site has a weight. The goal is to minimize the maximum weighted latency of any site. Here the latency of a site is defined as the maximum time duration between consecutive visits of that site, and the weighted latency is the latency multiplied by the site's weight. A patrol schedule specifies for each robot its starting position and an infinitely long schedule that describes how the robot moves over time from site to site.
Related Work. If $k=1$ and all sites have the same weight, the problem reduces to the Traveling Salesman Problem (TSP), because then the optimal patrol schedule is to have the robot repeatedly traverse an optimal TSP tour. Since TSP is NP-hard even in Euclidean space [papadimitriou1977euclidean], our problem is NP-hard for sites in Euclidean space as well. There are efficient approximation algorithms for TSP, namely a $1.5$-approximation for metric TSP [christofides1976worst] and a polynomial-time approximation scheme (PTAS) for Euclidean TSP [arora1998polynomial, mitchell], which carry over to the patrolling problem for the case where $k=1$ and all sites have the same weight.
Alamdari et al. [alamdari2014persistent] considered the problem with one robot (i.e., $k=1$) and sites of possibly different weights. It can then be profitable to deviate from a TSP tour by visiting heavy-weight sites more often than low-weight sites. Alamdari et al. provided algorithms for general graphs with either $O(\log n)$ or $O(\log \rho)$ approximation ratio, where $n$ is the number of sites and $\rho$ is the ratio of the maximum and the minimum weight.
For $k \ge 2$, and even for sites of uniform weights, the problem is significantly harder than for a single robot, since it requires careful coordination of the schedules of the individual robots. The problem for $k \ge 2$ has been studied in the robotics literature under various names, including continuous sweep coverage, patrolling, persistent surveillance, and persistent monitoring [Elmaliach:2008:RMF:1402383.1402397, 6094844, yang2019patrol, liujointinfocom2017, 6106761, 6042503]. The dual problem has been studied by Asghar et al. [asghar2019multi] and Drucker et al. [drucker2016cyclic]: each site has a latency constraint, and the objective is to minimize the number of robots needed to satisfy the constraints of all sites. They provide an $O(\log \rho)$-approximation algorithm, where $\rho$ is the ratio of the maximum and the minimum latency constraints. When the objective is to minimize the latency, despite all the work in practical settings, we are not aware of any papers that provide a worst-case analysis. There are, however, several closely related problems that have been studied from a theoretical perspective.
The general family of vehicle routing problems (VRP) [dantzig1959truck] asks for $k$ tours, for a given $k$, that start from a given depot such that all customers' requirements and operational constraints are satisfied and the global transportation cost is minimized. There are many different formulations of the problem, such as time window constraints in pickup and delivery, variation in travel time and vehicle load, or penalties for low-quality service; see the monographs by Golden et al. [golden2008vehicle] or Tóth and Vigo [toth2002vehicle] for surveys.
In particular, the path cover problem aims to find a collection of $k$ paths that cover the vertex set of the given graph such that the maximum length of the paths is minimized; it admits a 4-approximation algorithm [arkin2006approximations]. The min-max tree cover problem is to cover all the sites with $k$ trees such that the maximum length of the trees is minimized. Arkin et al. [arkin2006approximations] proposed a 4-approximation algorithm for this problem, which was improved to a 3-approximation by Khani and Salavatipour [khani2014improved] and further improved by Xu et al. [xu2013approximation]. The cycle cover problem asks for $k$ cycles (instead of paths or trees) to cover all sites. For minimizing the maximum cycle length, there is a constant-factor approximation algorithm [xu2013approximation]. For minimizing the sum of all cycle lengths, there is a constant-factor approximation for the metric setting and a PTAS in the Euclidean setting [khachai2015polynomial, khachay2016polynomial]. Note that all problems above ask for tours visiting each site once (or at most once), while our patrolling problem asks for schedules in which each site is visited infinitely often.
When the patrol tours are given (and the robots may have different speeds), the scheduling problem is termed the Fence Patrolling Problem, introduced by Czyzowicz et al. [czyzowicz2011boundary]. Given a closed or open fence (a rectifiable Jordan curve) of length $\ell$ and $k$ robots of maximum speeds $v_1, \dots, v_k$, respectively, the goal is to find a patrolling schedule that minimizes the maximum latency of any point on the fence. Notice that our problem concerns a discrete set of sites, while the fence patrolling problem asks to visit all points on a continuous curve. For an open fence (a line segment), a simple partition strategy was proposed, in which each robot moves back and forth in a segment whose length is proportional to its speed. The best solution using this strategy gives the optimal latency if all robots have the same speed, and a constant-factor approximation of the optimal latency when robots have different maximum speeds. Later, the approximation ratio was improved by Dumitrescu et al. [dumitrescu2014fence] by allowing the robots to stop, and further improved by Kawamura and Soejima [kawamura2015simple] by allowing the speeds of the robots to vary during the patrolling process.
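The partition strategy for an open fence is easy to make concrete. The sketch below (an illustrative helper, not taken from any of the cited papers) splits a fence among robots with given maximum speeds: a robot zigzagging over a segment of length $s$ at speed $v$ revisits every point of the segment within $2s/v$ time, so segments proportional to speed equalize the latencies.

```python
def partition_fence(length, speeds):
    """Partition an open fence of the given length among robots, each
    segment proportional to its robot's speed. A robot zigzagging over a
    segment of length s at speed v revisits each point within 2*s/v time,
    so the proportional split gives every segment the same latency."""
    total = sum(speeds)
    segments = []
    start = 0.0
    for v in speeds:
        s = length * v / total
        segments.append((start, start + s))
        start += s
    latency = 2 * length / total  # common latency of every segment
    return segments, latency
```

For two unit-speed robots on a fence of length 10, each robot gets a segment of length 5 and every point is revisited within 10 time units.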
Challenges. For scheduling multiple robots, a number of new challenges arise. One is that, already for $k=2$ and sites of uniform weight, optimal schedules may have very different structures. For example, if the sites form a regular $n$-gon for sufficiently large $n$, as in Figure 1 (left), an optimal solution places the two robots at opposite points of the $n$-gon and lets them traverse the $n$-gon at unit speed in the same direction. If there are two groups of sites that are far away from each other, as in Figure 1 (middle), it is better to assign each robot to one group and let it move along a TSP tour of that group. Figure 1 (middle) also shows that having more robots does not always result in a lower maximum latency. Indeed, adding a third robot in Figure 1 (middle) will not improve the result: during any unit time interval, one of the two groups is served by at most one robot, and then the maximum latency within that group equals the maximum latency that can already be achieved by two robots for the whole problem. The two strategies just mentioned—one cycle with all robots evenly placed on it, or a partitioning of the sites into cycles, one cycle per robot exclusively—have been widely adopted in many practical settings [elmaliach2009multi, portugal2014finding]. Chevaleyre [chevaleyre2004theoretical] studied the performance of the two strategies but did not provide any bounds.
Note that optimal solutions are not limited to the two strategies mentioned above. For example, for three robots it might be best to partition the sites into two groups and assign two robots to one group and one robot to the other group. There may even be completely unstructured solutions, which are not even periodic. See Figure 1 (right) for an example: there are four sites at the vertices of a square, and two robots $r_1$ and $r_2$ that initially occupy two opposite corners. Robot $r_1$ repeatedly chooses at random between the horizontal and the vertical direction; robot $r_2$ always moves in the direction opposite to $r_1$. In this way, all sites have the same maximum latency, which is optimal. This solution is not described by cycles for the robots, and it is not even periodic. Observe that for a single robot, slowing down or temporarily stopping never helps to reduce latency. But for multiple robots, it is not easy to argue that there is an optimal solution in which robots never slow down or stop.
When sites have different weights, intuitively the robots have to visit sites with high weights more frequently than others. Thus, coordination among multiple robots becomes even more complex.
Our results. We present a number of exact and approximation algorithms, all running in polynomial time. In Section 3 we consider the weighted version in the general metric setting and present an algorithm with approximation factor $O(\log \rho)$, where $\rho = w_{\max}/w_{\min}$ and $w_{\max}$ and $w_{\min}$ are the maximum and minimum weight, respectively. The main insight is how to obtain a good assignment of the sites to the robots. We first round all weights up to powers of two, which introduces a performance loss of at most a factor of two; the number of distinct weights is then $O(\log \rho)$. Given a target maximum weighted latency $\lambda$, we compute, for each set of sites of the same weight, a min-max tree cover with the smallest number of trees such that the maximum tree weight in the cover is within the budget determined by $\lambda$. Then we assign the sites to the robots sequentially, by decreasing weight. Each robot is assigned a depot tree, with one of its vertices designated as the depot vertex. The vertices of a new tree are allocated to existing depots/robots if they are sufficiently nearby, and otherwise to a 'free' robot. We show that if any of the operations above fails (e.g., the trees in a min-max tree cover are too large, or we run out of free robots), then $\lambda$ is too small. We double $\lambda$ and try again, and we prove that the algorithm succeeds as soon as $\lambda$ is within the claimed approximation factor of the optimal weighted latency. At that point we can design the patrol schedules for the individual robots, using the algorithm of [alamdari2014persistent].
In Section 4 we consider the special case where all the sites are points in $\mathbb{R}^1$. When the sites have uniform weights, there is always an optimal solution consisting of $k$ disjoint zigzag schedules (a zigzag schedule is a schedule in which a robot travels back and forth along a single fixed interval in $\mathbb{R}^1$), one per robot. Such an optimal solution can be computed in polynomial time by dynamic programming.
When the sites have different weights and the goal is to minimize the maximum weighted latency, we show that there may not be an optimal solution consisting of only disjoint zigzags: cooperation between robots becomes important. In this case, we turn the problem into the Time-Window Patrolling Problem, whose solution is a constant-factor approximation to our patrol problem. Again we round the weights to powers of two. In the time-window problem, we chop the time axis into time windows of length inversely proportional to the weight of a site – the higher the weight, the smaller its window size – and require each site to be visited within each of its time windows. This way we obtain a constant-factor approximation that runs in polynomial time when $k$ is a constant and $\log(w_{\max}/w_{\min})$ is polynomially bounded, where $w_{\max}$ is the maximum weight and $w_{\min}$ the minimum weight.
2 Problem Definition
As stated in the introduction, our goal is to design a schedule for a set of $k$ robots visiting a set of $n$ sites in such a way that the maximum weighted latency of any site is minimized. It is most intuitive to consider the sites as points in Euclidean space, and the robots as points moving in that space. However, our solutions actually work in a more general metric space, defined next. Let $(S, d)$ be a metric space on a set $S$ of sites, where the distance between two sites $s, s' \in S$ is denoted by $d(s, s')$. Consider the undirected complete graph on $S$. We view each edge $(s, s')$ as an interval of length $d(s, s')$—so each edge becomes a continuous 1-dimensional space in which the robots can travel—and we define $M$ as the continuous metric space obtained in this manner. From now on, and with a slight abuse of terminology, when we talk about the metric space we refer to the continuous metric space $M$.
Let $R$ be a collection of $k$ robots moving in a continuous metric space $M$. We assume without loss of generality that the maximum speed of the robots is 1. A schedule for a robot $r$ is a continuous function $f_r \colon [0, \infty) \to M$, where $f_r(t)$ specifies the position of $r$ at time $t$. A schedule must obey the speed constraint, that is, we require $d(f_r(t), f_r(t')) \le |t - t'|$ for all $t, t'$. A schedule for the collection $R$ of robots, denoted $F$, is a collection of schedules $f_r$, one for each robot in $R$. (We allow robots to be at the same location at the same time.) We call the schedule of a robot $r$ periodic if there exist an offset $t_r$ and a period length $P_r$ such that for any integer $i \ge 0$ and any $t \ge t_r$ we have $f_r(t + i P_r) = f_r(t)$. A schedule $F$ is periodic if there are $t_0$ and $P$ such that for any integer $i \ge 0$ and any $t \ge t_0$ we have $f_r(t + iP) = f_r(t)$ for all robots $r$. It is not hard to see that, in the case that all period lengths are rational, $F$ is periodic if and only if the schedules of all robots are periodic.
We say that a site $s$ is visited at time $t$ if $f_r(t) = s$ for some robot $r$. Given a schedule $F$, the latency $L(s)$ of a site $s$ is the maximum time duration during which $s$ is not visited by any robot. More formally,
$$L(s) \;=\; \sup \,\{\, t' - t \;:\; 0 \le t < t' \text{ and } s \text{ is not visited at any time in } (t, t') \,\}.$$
We only consider schedules where the latency of each site is finite. Clearly such schedules exist: if $\mathrm{TSP}$ denotes the length of an optimal TSP tour of the given set of sites, then we can always get a schedule where $L(s) \le \mathrm{TSP}/k$ for all sites $s$, by letting the $k$ robots traverse the tour at unit speed at equal distance from each other. Given a metric space and a collection of $k$ robots, the (multi-robot) patrol-scheduling problem is to find a schedule minimizing the weighted latency $\max_{s} w_s \cdot L(s)$, where $w_s$ is the weight of site $s$ and $L(s)$ its latency.
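For a periodic schedule, the (weighted) latency of a site can be computed directly from its visit times within one period. The following sketch is illustrative only (the function names and the representation of visit times are our own):

```python
def max_latency(visit_times, period):
    """Latency of a site under a periodic schedule: the longest gap
    between consecutive visits within one period, including the gap that
    wraps around from the last visit to the first visit of the next period."""
    ts = sorted(t % period for t in visit_times)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    gaps.append(ts[0] + period - ts[-1])  # wrap-around gap
    return max(gaps)

def weighted_latency(sites, period):
    """sites: list of (weight, visit_times) pairs; returns the maximum
    weighted latency over all sites."""
    return max(w * max_latency(v, period) for w, v in sites)
```

For instance, a site visited at times 0 and 3 of a period-10 schedule has latency 7 (the wrap-around gap from time 3 back to time 10).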
Note that it never helps to move at less than the maximum speed between sites—a robot may just as well move at maximum speed and then wait for some time at the next site. Similarly, it does not help to have a robot start, at time $0$, "in the middle" of an edge. Hence, we assume without loss of generality that each robot starts at a site, and that at any time each robot is either moving at maximum speed between two sites or waiting at a site.
3 Approximation Algorithms in a General Metric
For $n$ sites with weights in a general metric space, we design an algorithm with approximation factor $O(\log \rho)$ for minimizing the maximum weighted latency over all sites using $k$ robots of maximum speed 1, where $\rho = w_{\max}/w_{\min}$ is the ratio of the maximum and minimum weight. Without loss of generality, we assume that the maximum weight among the sites is 1. We first round the weight of each site up to the nearest dyadic value and solve the problem with dyadic weights. That is, if site $v$ has weight $w_v$, we take $\hat{w}_v$ to be the least value of the form $1/2^i$, with $i$ a nonnegative integer, such that $\hat{w}_v \ge w_v$. Clearly, $w_v \le \hat{w}_v < 2 w_v$. This only introduces another factor of 2 in the approximation factor on the maximum weighted latency. In the following we assume that the weights are dyadic values. Suppose the smallest weight of all sites is $1/2^m$. Denote by $S_i$ the collection of sites of weight $1/2^i$; $S_i$ could be empty. Let $\mathcal{S}$ denote the collection of all nonempty sets $S_i$, $0 \le i \le m$. Note that $m = O(\log \rho)$. We assume we have a $\beta$-approximation algorithm available for the min-max tree cover problem; the currently best-known approximation algorithm has a small constant $\beta$ [xu2013approximation].
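The rounding step can be sketched as follows (an illustrative helper with a hypothetical name, not part of the algorithm's pseudocode): each weight is rounded up to the nearest value of the form $1/2^i$, which increases it by a factor of less than 2.

```python
import math

def round_weight_dyadic(w):
    """Round a weight 0 < w <= 1 up to the least dyadic value 1 / 2**i
    (i a nonnegative integer) that is at least w. The rounded weight
    satisfies w <= rounded < 2 * w, so the rounding loses at most a
    factor 2 in the weighted latency."""
    i = max(0, math.floor(math.log2(1.0 / w)))
    return 1.0 / (2 ** i)
```

For example, a weight of 0.3 is rounded up to 0.5, while weights that are already dyadic are left unchanged.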
The intuition of our algorithm is as follows. We first guess an upper bound $\lambda$ on the optimal maximum weighted latency and run our algorithm with parameter $\lambda$. If our algorithm successfully computes a schedule, its maximum weighted latency is bounded in terms of $\lambda$. If our algorithm fails, we double the value of $\lambda$ and run it again. We prove that if our algorithm fails, the optimal maximum weighted latency must be at least a constant fraction of $\lambda$. Thus, once we successfully find a schedule, its maximum weighted latency is within the claimed approximation factor of the optimal solution. The following two procedures together provide what is needed.
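The guess-and-double outer loop can be sketched generically as follows; here `feasible` stands in for the decision procedure (RobotAssignment in our setting), and the factor-2 overshoot follows from doubling, assuming feasibility is monotone in $\lambda$:

```python
def doubling_search(feasible, lam0):
    """Guess-and-double: starting from a lower bound lam0 on the target
    latency, double lam until the decision procedure `feasible` succeeds.
    If `feasible` is monotone (feasible(x) implies feasible(y) for y >= x),
    the returned value is within a factor 2 of the smallest feasible lam
    that the doubling sequence brackets."""
    lam = lam0
    while not feasible(lam):
        lam *= 2
    return lam
```

For instance, if the decision procedure succeeds exactly for values of at least 5 and we start at 1, the loop tries 1, 2, 4 and stops at 8.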

Algorithm RobotAssignment($\lambda$) returns False when there does not exist a schedule with maximum weighted latency $\lambda$, or returns $k$ groups $\mathcal{T}_1, \dots, \mathcal{T}_k$, where $\mathcal{T}_j$ is a set of trees assigned to robot $r_j$. Every site belongs to one of the trees, and no site belongs to two trees in the union of the groups. For each robot $r_j$, one of the trees in $\mathcal{T}_j$ is called its depot tree, and a vertex with the highest weight on the depot tree is the depot of $r_j$, denoted by $dep(r_j)$.

With the trees $\mathcal{T}_j$ assigned to one robot $r_j$, Algorithm SingleRobotSchedule($\mathcal{T}_j$) returns a single-robot schedule such that every site covered by $r_j$ has maximum weighted latency $O(\lambda)$.
Denote by $V(T)$ the set of vertices of a tree $T$ and by $d(u, v)$ the distance between two sites $u$ and $v$. See the pseudocode of the two algorithms.
RobotAssignment($\lambda$)
1: for every set $S_i \in \mathcal{S}$ do
2:   for $k' \leftarrow 1$ to $k$ do
3:     Run the $\beta$-approximation algorithm to obtain a min-max tree cover of $S_i$ with $k'$ trees.
4:   Let $k_i$ be the smallest integer $k'$ such that the maximum weight of the trees in the cover is within the bound determined by $\lambda$ and the weight of the sites in $S_i$.
5:   If there is no such $k'$ then return False.
6: Set all $k$ robots as "free" robots, i.e., not assigned a depot tree.
7: for $i \leftarrow 0$ to $m$ do   ▹ assign trees to robots, by decreasing weight
8:   for every tree $T$ in the cover of $S_i$ do
9:     for every non-free robot $r$ do
10:      Let $V_r \subseteq V(T)$ be the vertices of $T$ that are sufficiently close to the depot $dep(r)$.
11:      Compute a minimum spanning tree on $V_r$ and assign it to robot $r$.
12:    if some vertices of $T$ remain unassigned then
13:      if there is no free robot then
14:        return False.
15:      else
16:        Pick a free robot $r$ and set its depot tree to the tree on the remaining vertices.
17:        Pick an arbitrary vertex of the depot tree and set it as the depot $dep(r)$.
18: For each robot $r$, let $\mathcal{T}_r$ be the collection of trees assigned to $r$, including its depot tree, and return the collections $\mathcal{T}_1, \dots, \mathcal{T}_k$.
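The assignment phase can be illustrated with the following simplified sketch. It is not the algorithm itself: the distance threshold $\lambda / (2w)$ is chosen purely for illustration (the exact constant comes from the analysis), and the per-robot minimum spanning trees are replaced by plain site lists.

```python
def assign_trees(tree_covers, k, lam, dist):
    """Simplified sketch of the assignment phase. tree_covers is a list of
    (weight, trees) pairs in decreasing weight order, each tree a list of
    sites. A site within distance lam / (2 * depot_weight) of an existing
    depot (illustrative threshold) is given to that robot; leftover sites
    of a tree form a new depot for a free robot. Returns per-robot site
    lists, or None when we run out of free robots (lam too small)."""
    robots = []  # list of (depot_site, depot_weight, assigned_sites)
    for w, trees in tree_covers:
        for tree in trees:
            leftover = []
            for site in tree:
                for depot, dw, sites in robots:
                    if dist(site, depot) <= lam / (2 * dw):
                        sites.append(site)
                        break
                else:
                    leftover.append(site)
            if leftover:
                if len(robots) == k:
                    return None  # no free robot left: declare failure
                robots.append((leftover[0], w, list(leftover)))
    return [sites for _, _, sites in robots]
```

On a line with two far-apart weight classes and two robots, each class gets its own depot; with a single robot, the second class cannot be placed and the sketch reports failure.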
The following observation is useful for our analysis later. In RobotAssignment($\lambda$), the depots $dep(r)$ and $dep(r')$, with $r \ne r'$, of different robots are at distance more than the distance threshold used in the assignment.
Proof.
The depot vertices, in the order of their creation, have non-increasing weight. Thus, we may assume without loss of generality that $dep(r')$ is the depot that was created later than $dep(r)$. By construction, when $dep(r')$ was created it was more than the assignment threshold away from the existing depot $dep(r)$. ∎
Let $s_1, \dots, s_p$ be depot sites, ordered such that $w(s_1) \ge \cdots \ge w(s_p)$, defined as in Algorithm RobotAssignment($\lambda$). The optimal schedule minimizing the maximum weighted latency for $p-1$ robots to serve these $p$ depots has weighted latency at least proportional to the minimum pairwise distance between the depots.
Proof.
Let $v_r(t)$ denote the speed of robot $r$ at time $t$, and let $F$ be a schedule of latency $L$. The proof proceeds in rounds. The goal of the $j$-th round is to change the schedule into a new schedule that has a stationary robot at site $s_j$. To keep the latency at $L$, we will increase the speed of some other robots. We will show the following claim.
Claim. After the $j$-th round we have a schedule of latency $L$ such that
there is a stationary robot at each of the sites $s_1, \dots, s_j$,
at any time $t$ we have $\sum_r v_r(t) \le p - 1$, where the sum is over all robots.
This claim implies that after the $(p-2)$-th round we have a schedule of latency $L$ with stationary robots at $s_1, \dots, s_{p-2}$, and one robot of maximum speed $p-1$ serving the sites $s_{p-1}$ and $s_p$. The distance between these sites is bounded from below by the minimum pairwise depot distance, so the latency of our modified schedule satisfies the bound claimed in the lemma. This is what is needed in the lemma.
The proof of the claim is by induction. Suppose the claim holds after the $(j-1)$-th round. Thus we have a stationary robot at each of the sites $s_1, \dots, s_{j-1}$, and at any time we have $\sum_r v_r(t) \le p-1$. Note that for $j=0$, the required conditions are trivially satisfied. Now consider the site $s_j$.
Define $t_1 < t_2 < \cdots$
to be the moments in time where there is at least one robot at $s_j$
and all robots present at $s_j$ are leaving. In other words, the $t_i$ are the times at which $s_j$ is about to become unoccupied. If no such time exists then there is always a robot at $s_j$, and so we are done. Let $t'_1 < t'_2 < \cdots$ be the moments in time where a robot arrives at $s_j$ while no other robot was present at $s_j$ just before that time, that is, the times at which $s_j$ becomes occupied. Assuming without loss of generality that $t_1 < t'_1$, we have $t_1 < t'_1 < t_2 < t'_2 < \cdots$. Consider an interval $(t_i, t'_i)$. By definition, $t'_i - t_i \le L$. Let $r$ be a robot leaving $s_j$ at time $t_i$, and suppose $r$ is at position $x$ at time $t'_i$. Let $r'$ be a robot arriving at $s_j$ at time $t'_i$. We modify the schedule such that $r$ stays stationary at $s_j$, while $r'$ travels to $x$ via $s_j$. We increase the speed of $r'$ by adding the speed of $r$ to it, that is, for any $t$ we change the speed of $r'$ at time $t$ to $v_{r'}(t) + v_r(t)$. Since $r$ is now stationary at $s_j$, this does not increase the sum of the robot speeds. Moreover, with this new speed, $r'$ will reach $x$ at time $t'_i$. Finally, observe that this modification does not increase the latency. Indeed, the sites $s_1, \dots, s_{j-1}$ have a stationary robot by the induction hypothesis, and all other sites are far from $s_j$, so during $(t_i, t'_i)$ the robots $r$ and $r'$ did not visit any of these sites in the unmodified schedule. ∎
SingleRobotSchedule($\mathcal{T}$)   ▹ $T_0$ is the depot tree and $w_i$ is the weight of the vertices in $T_i$
1: for $i \leftarrow 0$ to $|\mathcal{T}| - 1$ do
2:   Compute a tour of length at most $2|T_i|$ on the vertices in $T_i$.
3:   Partition the tour into a collection $\mathcal{P}_i$ of paths whose lengths are bounded in terms of $\lambda$; a class of smaller weight is cut into proportionally more paths.
4:   Let $P_i$ be the path in $\mathcal{P}_i$ to be traversed next.
5: Put the robot on the first vertex of path $P_0$.
6: while True do
7:   Let the robot traverse the current path of each class in round-robin order: a class cut into $2^j$ paths has one of them traversed every $2^j$-th round.
8:   Let the robot move from the end of the current path to the start of the next path.
9:   Advance each $P_i$ within its collection $\mathcal{P}_i$.
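The round-robin order over the path pieces can be illustrated with a toy sketch (of the piece order only, not of the robot's actual motion): a class whose tour is cut into more pieces is fully revisited proportionally less often, matching its smaller weight.

```python
def round_robin_pieces(pieces_by_level, rounds):
    """Toy sketch of the piece order in SingleRobotSchedule. Level j of
    pieces_by_level holds the path pieces of the j-th weight class; in
    round r the robot traverses piece r mod (number of pieces) of every
    class. A class cut into 2**j pieces is therefore fully revisited only
    every 2**j rounds, matching its smaller weight."""
    order = []
    for r in range(rounds):
        order.append([pieces[r % len(pieces)] for pieces in pieces_by_level])
    return order
```

With a one-piece depot class and a two-piece lighter class, the depot class is traversed every round while the lighter class alternates between its two halves.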
The proofs of the following two lemmas can be found in the appendix.
Given $\lambda$, if RobotAssignment($\lambda$) returns False, then $\lambda$ is less than a constant times $\mathrm{OPT}$, where $\mathrm{OPT}$ is the optimal maximum weighted latency.
If RobotAssignment($\lambda$) does not return False, each robot is assigned at most $O(\log \rho)$ trees and a depot site such that

one of the trees is the depot tree, which includes the depot $dep(r)$; $dep(r)$ has the highest weight among all sites assigned to this robot;

all other vertices assigned to the robot are within distance $O(\lambda/w)$ from the depot, where $w$ is the weight of $dep(r)$;

each tree has vertices of the same weight, and the sum of its edge lengths is within the bound guaranteed by the min-max tree cover.
Now we are ready to present the algorithm that finds the schedule for a robot $r$ covering all vertices in the family $\mathcal{T}$ of trees output by RobotAssignment($\lambda$). We apply the algorithm of [lingasbamboo, alamdari2014persistent] for the patrol problem with one robot, with the only difference being the handling of the sites of small weights. The details are presented in the pseudocode of SingleRobotSchedule($\mathcal{T}$), which takes a set of trees as input. By Lemma 3, at most $O(\log \rho)$ trees are assigned to one robot. For a tree $T$ (resp. a path $P$) we use $|T|$ (resp. $|P|$) to denote the sum of the lengths of its edges.
SingleRobotSchedule($\mathcal{T}$) returns a schedule for one robot that covers all sites included in $\mathcal{T}$ such that the maximum weighted latency of the schedule is $O(\lambda)$.
To analyze the running time, we use the best-known min-max tree cover algorithm [xu2013approximation]. In Algorithm RobotAssignment, the tree-cover phase runs the tree cover algorithm $O(k)$ times for each weight class. In the assignment phase, we assign subsets of the vertices of each tree to occupied robots; the dominating cost is the computation of the minimum spanning trees on these subsets. Algorithm SingleRobotSchedule takes polynomial time, since a robot is assigned at most $n$ sites. Thus, given a value $\lambda$, it takes polynomial time to either generate patrol schedules for the $k$ robots with the claimed approximation factor or to confirm that there is no schedule with maximum weighted latency $\lambda$.
To solve the optimization problem (i.e., finding the minimum $\lambda$): if there are fewer than $k$ sites, we put one robot per site. Otherwise, we start with the parameter $\lambda$ equal to the distance between the closest pair of sites, and double $\lambda$ whenever the decision problem answers negatively. The number of iterations is bounded by the logarithm of the ratio between the final and the initial value of $\lambda$. Notice that the final $\lambda$ is bounded; e.g., it is at most the length of a traveling salesman tour of the sites.
The approximation algorithm for the $k$-robot patrol-scheduling problem for weighted sites in a general metric runs in polynomial time and has an $O(\log \rho)$ approximation ratio, where $\rho = w_{\max}/w_{\min}$, with $w_{\max}$ and $w_{\min}$ being the maximum and minimum weight of the sites.
4 Sites in $\mathbb{R}^1$
In this section we consider the case where the sites are points in $\mathbb{R}^1$. We start with a simple observation about the case of a single robot. After that we turn our attention to the more interesting case of multiple robots.
We define the schedule of a robot in $\mathbb{R}^1$ to be a zigzag schedule, or zigzag for short, if the robot moves back and forth along an interval at maximum speed (and only turns at the endpoints of the interval).
Observation 1.
Let $S$ be a collection of sites in $\mathbb{R}^1$ with arbitrary weights. Then the zigzag schedule in which the robot travels back and forth between the leftmost and the rightmost site of $S$ is optimal for a single robot.
Next we show that for multiple robots, as long as the sites have uniform weights, there is an optimal schedule consisting of disjoint zigzags. Both proofs are in the appendix.
Let $S$ be a set of sites in $\mathbb{R}^1$ with uniform weights, and let $k$ be the number of available robots. Then there exists an optimal schedule in which each robot follows a zigzag schedule and the intervals covered by these zigzag schedules are disjoint.
With Theorem 4, the min-max latency problem reduces to the following: given a set $P$ of $n$ numbers and a parameter $k$, compute the smallest $L$ such that $P$ can be covered by $k$ intervals of length at most $L$. When $P$ is stored in sorted order in an array, $L$ can be computed efficiently [abrahamsen2017range, Theorem 14]. If $P$ is not sorted, there is an $\Omega(n \log n)$ lower bound in the algebraic computation tree model [ben1983lower], since element uniqueness reduces to this problem.
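The reduction can be made concrete with a short sketch: a greedy left-to-right sweep checks whether $k$ intervals of length $L$ suffice, and the optimal $L$ is found among the $O(n^2)$ pairwise differences of the points. This is a simple quadratic sketch for illustration, not the faster algorithm of [abrahamsen2017range].

```python
def k_interval_cover(points, k):
    """Smallest L such that the points can be covered by k intervals of
    length at most L. A greedy sweep over the sorted points tests
    feasibility of a given L; the optimum is one of the pairwise
    differences of the points, so we try them in increasing order."""
    pts = sorted(points)

    def feasible(L):
        used, i, n = 0, 0, len(pts)
        while i < n:
            used += 1                     # open a new interval at pts[i]
            right = pts[i] + L
            while i < n and pts[i] <= right:
                i += 1
        return used <= k

    candidates = sorted({b - a for a in pts for b in pts if b >= a})
    return next(L for L in candidates if feasible(L))
```

For example, the points $\{0, 1, 2, 10, 11\}$ are covered by two intervals of length 2 (namely $[0,2]$ and $[10,11]$), but not by two intervals of length 1.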
We now turn our attention to sites in $\mathbb{R}^1$ with arbitrary weights. In this setting there may not exist an optimal solution composed of disjoint zigzags (see the appendix for details), which makes it difficult to compute an optimal solution. Hence, we present an approximation algorithm. Let $\rho$ be the ratio of the largest and smallest weight of any of the sites. Our algorithm has a constant approximation ratio and runs in polynomial time when $k$, the number of robots, is a constant and $\log \rho$ is polynomial in $n$.
Instead of solving the robot patrolscheduling problem directly, our algorithm will solve a discretized version that is defined as follows.

The input is a set of sites in $\mathbb{R}^1$, each with a weight of the form $1/2^i$ for some nonnegative integer $i$, and such that the maximum weight is 1.

Given a value $W > 0$, which we call the window length, we say that a $k$-robot schedule is valid if the following holds: each site of weight $1/2^i$ is visited at least once during every time interval of the form $[(j-1) \cdot 2^i W, \, j \cdot 2^i W)$, where $j$ is a positive integer. The goal is to find the smallest value $W$ that admits a valid schedule, and to report the corresponding schedule.
We call this problem the Time-Window Patrolling Problem. The following lemma shows that its solution can be used to solve the patrol-scheduling problem; the proof can be found in the appendix. Suppose we have a $c$-approximation algorithm for the $k$-robot Time-Window Patrolling Problem that runs in time $T$. Then there is an $O(c)$-approximation algorithm for the $k$-robot patrol-scheduling problem that runs in time $O(T)$.
An algorithm for the TimeWindow Patrolling Problem. We now describe an approximation algorithm for the TimeWindow Patrolling Problem. To this end we define a class of socalled standard schedules, and we show that the best standard schedule is a good approximation to the optimal schedule. Then we present an algorithm to compute the best standard schedule.
Standard schedules, for a given window length $W$, have length (that is, duration) $2^m W$ and are composed of so-called atomic schedules. An atomic schedule is a schedule that specifies the motion of a single robot during a time interval of length $W$. It is specified by a 6-tuple
$(s_f, s_l, s_a, s_b, t_1, t_2)$, where $s_f, s_l, s_a, s_b$ are sites and $0 \le t_1, t_2 \le W$. Roughly speaking, $s_f, s_l, s_a, s_b$ denote the first, last, leftmost and rightmost site visited during the time interval, and $t_1$ and $t_2$ indicate how long the robot can spend traveling before arriving at $s_f$ resp. after leaving $s_l$. Next we define this more precisely.
There are two types of atomic schedules. For concreteness we explain them for the time interval $[0, W]$, but remember that an atomic schedule can be executed during any time interval of length $W$.
 Type I:

All four sites $s_f, s_l, s_a, s_b$ are defined, and $s_a$ and $s_b$ are the leftmost and rightmost site among the four sites, respectively. (We allow one or more of these four sites to be identical.) A Type I atomic schedule specifies the following movement of the robot.

At time $t_1$ the robot is at site $s_f$.

At time $W - t_2$ the robot is at site $s_l$.

The robot visits $s_a$ and $s_b$ during the interval $[t_1, W - t_2]$ using the shortest possible path, which must have length at most $W - t_1 - t_2$. For example, if the robot first travels left, it uses the path $s_f \rightarrow s_a \rightarrow s_b \rightarrow s_l$, and we require the length of this path to be at most $W - t_1 - t_2$.

The robot does not visit any sites during $[0, t_1)$ and $(W - t_2, W]$ but is traveling towards some site to be visited later. In fact, the robot may pass other sites while traveling during these subintervals, but such events are ignored—they are not counted as visits.

 Type II:

None of the four sites is defined, and $t_1 + t_2 = W$. A Type II atomic schedule specifies the following movement of the robot.

The robot does not visit any sites during $[0, W]$ but is traveling. Again, the robot may pass over sites during its movement. One way to interpret Type II atomic schedules is that the robot visits a dummy site at time 0 and then spends the entire interval traveling towards some site to be visited in a later time interval.

Note that $t_1, t_2 \le W$ in both cases. This will no longer be the case, however, when we start concatenating atomic schedules, as explained next.
Consider the concatenation of $2^i$ atomic schedules, for some $i \ge 1$, and suppose we execute this concatenated schedule during a time interval of the form $[(j-1) \cdot 2^i W, \, j \cdot 2^i W)$. How the robots travel exactly during this interval is important for sites of weight more than $1/2^i$, since such sites need to be visited multiple times. But a site of weight $1/2^i$ needs to be visited only once during the interval, and so for such sites it suffices to know the leftmost and rightmost visited site. Thus our algorithm concatenates atomic schedules in a bottom-up manner. This is done in rounds, where the $i$-th round ensures that the sites of weight $1/2^i$ are visited. The concatenated schedule is represented in a similar way as an atomic schedule. Next we describe this in detail.
Let $\Sigma_i$ denote the collection of all feasible concatenations of $2^i$ atomic schedules. Thus $\Sigma_0$ is simply the collection of all atomic schedules, and $\Sigma_i$ can be obtained from $\Sigma_{i-1}$ by combining pairs of schedules. A schedule $\sigma \in \Sigma_i$ will be represented by a 6-tuple
$(s_f, s_l, s_a, s_b, t_1, t_2)$. As before, $s_f, s_l, s_a, s_b$ denote the first, last, leftmost and rightmost site visited during the time interval. Furthermore, $t_1$ indicates how much time the robot can spend traveling from another site before arriving at $s_f$, and $t_2$ indicates how much time the robot can spend traveling towards another site after leaving $s_l$. The values $t_1$ and $t_2$ can now take larger values than in an atomic schedule. In particular,
$0 \le t_1, t_2 \le 2^i W$ for a schedule in $\Sigma_i$. Note that certain values can only arise in certain situations. For example, we can only have $t_1 = 2^i W$ for a schedule that is the concatenation of atomic schedules of Type II only, which means that the schedule visits no sites at all and its four site entries are undefined.
We denote the concatenation of two schedules $\sigma_1$ and $\sigma_2$ by $\sigma_1 \| \sigma_2$. The representation of $\sigma_1 \| \sigma_2$ can be computed from the representations of $\sigma_1$ and $\sigma_2$: the first site of $\sigma_1 \| \sigma_2$ is the first site of $\sigma_1$ (or the first site of $\sigma_2$ if $\sigma_1$ visits no site), and, symmetrically, its last site is the last site of $\sigma_2$ (or of $\sigma_1$ if $\sigma_2$ visits no site). Furthermore, its leftmost site is the leftmost of the two leftmost sites, and its rightmost site is the rightmost of the two rightmost sites. Finally, $t_1(\sigma_1 \| \sigma_2) = t_1(\sigma_1)$, increased by $t_1(\sigma_2)$ if $\sigma_1$ visits no site, and $t_2(\sigma_1 \| \sigma_2)$ is defined symmetrically.
Note that not every pair of schedules can be combined: it needs to be possible to travel from the last site of $\sigma_1$ to the first site of $\sigma_2$ in the available time. More precisely, assuming $\sigma_1$ and $\sigma_2$ each visit at least one site—otherwise a concatenation is always possible—we need $d(s_l(\sigma_1), s_f(\sigma_2)) \le t_2(\sigma_1) + t_1(\sigma_2)$.
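The concatenation rule and the feasibility test can be sketched as follows, representing a schedule by its 6-tuple, with `None` for the undefined entries of a schedule that visits no site (Type II concatenations) and sites given as coordinates on the line. The function name and tuple layout are our own.

```python
def concat(s1, s2, dist):
    """Concatenate two schedule representations (s_f, s_l, s_a, s_b, t1, t2):
    first / last / leftmost / rightmost visited site and the travel slack
    before the first and after the last visit. Returns None when the first
    site of s2 cannot be reached from the last site of s1 in time."""
    f1, l1, a1, b1, t1_in, t1_out = s1
    f2, l2, a2, b2, t2_in, t2_out = s2
    # feasibility: travel from the last site of s1 to the first site of s2
    if l1 is not None and f2 is not None and dist(l1, f2) > t1_out + t2_in:
        return None

    def opt(f, xs):
        xs = [x for x in xs if x is not None]
        return f(xs) if xs else None

    first = f1 if f1 is not None else f2
    last = l2 if l2 is not None else l1
    left = opt(min, (a1, a2))
    right = opt(max, (b1, b2))
    t_in = t1_in + t2_in if f1 is None else t1_in     # slack accumulates
    t_out = t1_out + t2_out if l2 is None else t2_out # over empty halves
    return (first, last, left, right, t_in, t_out)
```

For instance, a schedule ending at position 2 with slack 1.0 can be followed by one starting at position 3 with slack 0.5, but not by one starting at position 10.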
We now define a standard $k$-robot schedule for window length $W$ to be a $k$-robot schedule with the following properties.

The schedule of each robot belongs to $\Sigma_m$, i.e., each robot starts at a site at time 0, and its schedule is the concatenation of $2^m$ atomic schedules.

It is a valid $k$-robot schedule for the Time-Window Patrolling Problem for the time period $[0, 2^m W]$.
A standard schedule $F$ can be turned into an infinite cyclic schedule by executing $F$ and its reverse schedule $F^{\mathrm{rev}}$ in an alternating fashion. (In $F^{\mathrm{rev}}$ each robot simply executes its schedule backward.) Note that $F^{\mathrm{rev}}$ is a valid schedule since $F$ is valid, and so the schedule alternating between $F$ and $F^{\mathrm{rev}}$ is valid. The following lemma shows that the resulting schedule is a good approximation of an optimal schedule for the Time-Window Patrolling Problem (proof in the appendix). Let $W_{\mathrm{opt}}$ be the minimum window length that admits a valid schedule for the Time-Window Patrolling Problem, and let $W_{\mathrm{std}}$ be the minimum window length that admits a valid standard schedule. Then $W_{\mathrm{std}} = O(W_{\mathrm{opt}})$.
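The alternation of a finite schedule with its reverse can be sketched as follows (a small illustrative helper; positions are arbitrary, and the schedule is a function of time on $[0, T]$):

```python
def alternate_with_reverse(schedule, T):
    """Extend a schedule defined on [0, T] to all t >= 0 by alternating the
    schedule with its time-reversal, yielding a cyclic schedule of period
    2*T. The position is continuous at the turning points t = T and t = 2*T
    because the reversal starts exactly where the forward pass ends."""
    def extended(t):
        t = t % (2 * T)
        return schedule(t) if t <= T else schedule(2 * T - t)
    return extended
```

For a robot moving along the line as `schedule(t) = t` on $[0, 3]$, the extended schedule moves right for 3 time units, then retraces its path, and repeats with period 6.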
We now present an algorithm that, given a window length $W$, decides if a standard schedule with window length $W$ exists. Since such a schedule is the concatenation of $2^m$ atomic schedules, we essentially generate all possible concatenated schedules iteratively, from $\Sigma_0$ up to $\Sigma_m$. Recall that we need to generate a $k$-robot schedule, that is, a collection of $k$ schedules (one for each robot). We denote by $\mathcal{S}_i$ the set of all $k$-robot schedules, where each of the $k$ schedules is chosen from $\Sigma_i$, such that each site of weight at least $1/2^i$ is visited at least once by one of the robots. If $F_1$ and $F_2$ are two $k$-robot schedules, then we use $F_1 \| F_2$ to denote the $k$-robot schedule obtained by concatenating them robot by robot.
Note that the concatenation of one pair of single-robot schedules may be the same as—or, more precisely, have the same representation as—the concatenation of a different pair of schedules. This may also result in robot schedules that are the same.
To avoid generating too many robot schedules, our algorithm will keep only one schedule of each representation. Our algorithm is now as follows.
ConstructSchedule
1: all possible atomic schedules
2: all possible combinations of schedules from such that all sites of weight 1 are visited by at least one of the schedules
3: for to (recall that )
4:
5:  for every pair of robot schedules
6:   If and can be concatenated and the resulting schedule visits every site of weight at least once, then add to .
7:  Remove any duplicates from .
8: If then return yes, otherwise return no.
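The iterative doubling search above can be sketched as follows. All helper callables (`can_concat`, `concat`, `covers_required`) and the tuple-based representation are illustrative assumptions; the paper works with a compact representation of schedules rather than explicit ones, and the coverage threshold in each round depends on the current level.

```python
def construct_schedule(atomic, can_concat, concat, covers_required, levels):
    """Hedged sketch of ConstructSchedule.
    atomic          -- set of representations of atomic robot schedules
    can_concat(a,b) -- whether the robot(s) can travel between the schedules
    concat(a,b)     -- representation of the concatenation
    covers_required -- whether all sufficiently heavy sites are visited
    levels          -- number of doubling iterations
    Returns True iff some valid schedule representation survives."""
    current = {s for s in atomic if covers_required(s)}
    for _ in range(levels):
        nxt = set()
        for a in current:
            for b in current:
                if can_concat(a, b):
                    c = concat(a, b)
                    if covers_required(c):
                        nxt.add(c)  # set membership removes duplicates
        current = nxt
    return bool(current)
```

Keeping only one schedule per representation (the set `nxt`) is what bounds the size of each generation, mirroring the duplicate-removal step of the algorithm.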
The algorithm above only reports whether a standard schedule of window length exists, but it can easily be modified to report such a schedule if one exists. To this end we just need to keep, for each representation in for the current value of , an actual schedule. Doing so does not increase the time bound of the algorithm. The main theorem in this section is as follows (proof in the appendix).
A 12-approximation of the min-max weighted latency for sites in with robots, for a constant , can be found in time , where the maximum weight of any site is and the minimum weight is .
5 Conclusion and Future Work
This is the first paper that presents approximation algorithms for multi-robot patrol scheduling minimizing maximum weighted latency in a metric space. The obvious open problem is to improve the approximation ratios for both the general metric setting and the 1D setting.
Acknowledgement: Gao, Wang and Yang would like to acknowledge support from NSF grants CNS-1618391, DMS-1737812, and OAC-1939459. Raichel would like to acknowledge support from NSF CAREER Award 1750780.
References
Appendix
Appendix A approximation for unweighted robot scheduling
We can obtain an approximation algorithm for the patrol-scheduling problem in general metric spaces by making a connection to the path cover problem, which is to find paths covering the sites such that the maximum length of the paths is minimized. Suppose we have an approximation algorithm for the path cover problem. Let be the maximum path length in an optimal path cover. For each of the paths in the cover, connect the last site with the first site to create a tour of length at most . Now let the robots follow these tours, obtaining a schedule with maximum latency bounded by . Note that if denotes the optimal latency for the patrol-scheduling problem, then . Indeed, in a solution of latency , all sites must be visited during any time interval of length , and so the paths followed by the robots during this interval (which have length at most ) are a valid solution to the path cover problem. Thus we obtain a approximation for the patrol-scheduling problem.
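The reduction above can be sketched as follows: close each path of a path cover into a tour and take the longest tour length as the latency bound when each robot repeatedly traverses its tour. The function name and the `dist` metric are illustrative assumptions.

```python
def tours_from_path_cover(paths, dist):
    """Close each path (list of sites) into a tour by connecting its
    last site back to its first, and return the maximum tour length,
    which bounds the maximum latency of the resulting schedule."""
    worst = 0.0
    for path in paths:
        length = sum(dist(a, b) for a, b in zip(path, path[1:]))
        length += dist(path[-1], path[0])  # closing edge of the tour
        worst = max(worst, length)
    return worst
```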
Similarly, we can solve the patrol-scheduling problem with an extra factor of two in the approximation ratio, using the min-max tree cover problem, which is to find disjoint trees covering the sites such that the maximum tree weight—the weight of a tree is the sum of its edge weights—is minimized, or the min-max cycle cover problem [xu2013approximation], which is to find cycles covering all sites such that the length of the longest cycle is minimized. The proof of both claims is similar to the case of path cover.
For sites in a metric space, an approximation for the min-max tree cover problem gives a approximation for the patrol-scheduling problem.
Proof.
To show the connection, take an optimal patrol schedule from time to time , where is the latency of . This creates paths that collectively cover all sites. Denote by the visiting sequence of robot within this interval. Starting from , we shortcut the paths by removing duplicate visits to the same site. Specifically, the visit by robot to a site is removed if has already been visited by a robot with . When a visit to a site is removed, with and the preceding and succeeding sites respectively, the robot moves directly from to ; by the triangle inequality, the modified path is not longer. This produces at most disjoint paths that cover all sites, and hence a tree cover. The weight of each path is at most . Thus , where is the optimal weight of a min-max tree cover with trees. On the other hand, given any tree cover with maximum weight , we can traverse each tree to create a tour of length at most . Letting the robots follow these tours yields a schedule with latency bounded by . Hence, an approximation for the min-max tree cover problem gives a approximation for the patrol-scheduling problem. ∎
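The tree traversal used in the proof above can be sketched as a depth-first walk: visiting the vertices of a tree in DFS order and shortcutting repeated vertices yields a tour of length at most twice the tree weight. The adjacency-map representation (`adj`) is an illustrative assumption.

```python
def tree_to_tour(adj, root):
    """Return a DFS visiting order of a tree's vertices. Following this
    order and returning to the root gives a tour; by the triangle
    inequality its length is at most twice the tree weight.
    adj: mapping vertex -> list of neighbors/children."""
    order = []
    stack = [root]
    seen = set()
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        order.append(v)
        for u in adj.get(v, []):
            if u not in seen:
                stack.append(u)
    return order  # visit sequence; close the tour by returning to root
```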
An approximation for the min-max cycle cover problem gives a approximation for the patrol-scheduling problem.
Proof.
If we take an optimal patrol schedule from time 0 to and ask each robot to move back to its starting point, we obtain cycles of length at most . Hence, , where is the min-max cycle length of an optimal cycle cover. This implies that an approximation for the min-max cycle cover problem gives a approximation for the patrol-scheduling problem. ∎
In short, we can obtain algorithms with approximation factor , where is the approximation factor for any of the following problems: path cover [arkin2006approximations], min-max tree cover [khani2014improved, xu2013approximation], or min-max cycle cover [xu2013approximation]. To the best of our knowledge, the best known approximation ratio for any of these problems is (namely for the min-max tree cover problem). In this paper we aim for approximation factors for the multi-robot patrol-scheduling problem better than .
Appendix B Proofs for Min-Max Weighted Latency in General Metric
Lemma 3.
Given , if robot schedule() returns False then , where is the optimal maximum weighted latency.
Proof.
There are two cases of the algorithm returning False. We discuss them separately.
In the first case, there is a value such that the maximum tree weight of a approximation of the min-max tree cover is larger than for all (Line 7). This implies that the optimal value of the min-max tree cover is larger than for the sites in . Since the robot solution also covers all the sites in , is also a lower bound on the optimal latency (see the appendix for details). Thus, .
In the second case, there is a tree with vertices that are far away from the existing depots and there is no free robot anymore. Notice that there are precisely depots at this moment. Suppose the depots are and there is another vertex which is at distance at least from the depot of weight , for . Applying Lemma 3, the latency of the optimal schedule visiting only these sites is at least , and so is the optimal latency . ∎
Lemma 3.
If robot schedule() does not return False, each robot is assigned at most trees and a depot site such that

one of the trees is the depot tree which includes a depot . has the highest weight among all sites assigned to this robot;

all other vertices are within distance from the depot, where is the weight of ;

each tree has vertices of the same weight and the sum of tree edge length is at most .
Proof.
Most of the claims follow directly from the algorithm robot schedule(). A tree assigned to a robot has vertices coming from the vertices of the same tree in the min-max tree cover (obtained on Line 4). Thus the vertices have the same weight (say ). These vertices are within distance from the depot , where is the weight of , by Line 15. Further, the tree is always taken as a minimum spanning tree on its vertices. Thus the sum of the edge lengths of is no greater than that of the original tree (which potentially has more vertices), which is no greater than , by Line 5.
It remains to prove that each robot is assigned at most trees. Note that the loop of line 8 in the algorithm has iterations and each loop of line 9 has at most iterations. Moreover, in one iteration of lines 13 to 23 each robot is assigned at most one tree: it may be assigned a tree in line 16 when it is already non-free, or in line 22 when it was still free. Hence, each robot is assigned at most trees. ∎
Lemma 3 .
The procedure Single Robot Schedule() returns a schedule for one robot that covers all sites included in , such that the maximum weighted latency of the schedule is at most .
Proof.
By Lemma 3, the distance between the depot and any other vertex on tree is at most , where is the weight of the depot. By the triangle inequality, the distance between any two sites (either on the same tree or on different trees) is at most . Consider any site