Controlling and planning paths of small autonomous marine vehicles [petres2007path] such as wave and current gliders [kraus2012wave], active drifters [lumpkin2007measuring], buoyant underwater explorers, and small swimming drones is important for many geophysical [lermusiaux2017future] and engineering [bechinger2016active] applications. In realistic open environments, these vessels are affected by disturbances like wind, waves and ocean currents, which make their trajectories unpredictable (chaotic). Furthermore, active control is limited by engineering and budget constraints, as in the important case of unmanned drifters for oceanic exploration [centurioni2018drifter, roemmich2009argo]. The problem of (time) optimal point-to-point navigation in a flow, known as Zermelo's problem [zermelo1931], is interesting per se in the framework of Optimal Control Theory [bryson2018].
In this paper, we extend some of the results from a recent theoretical and numerical study [biferale2019zermelo], tackling Zermelo's problem for navigation in a two-dimensional fully turbulent flow in the presence of an inverse energy cascade, i.e. with chaotic, multi-scale and rough velocity distributions [alexakis2018cascades]; see Fig. 1 for a summary of the problem. In such a flow, even for time-independent configurations, trivial or naive navigation policies can be extremely inefficient and ineffective if the set of actions available to the vessel is limited. We show that an approach based on semi-supervised AI algorithms using actor-critic Reinforcement Learning (RL) [sutton2018] is able to find robust quasi-optimal stochastic policies that accomplish the task. Furthermore, we compare RL with solutions from Optimal Navigation (ON) theory [pontryagin2018mathematical] and show that the latter is of almost no practical use for navigation in turbulent flows, due to its strong sensitivity to the initial (and final) conditions, in contrast to what happens for simpler advecting flows [schneider2019optimal]. RL has already shown promising potential for similar problems, such as the training of smart inertial particles or swimming objects navigating intense vortex regions [colabrese2018smart, colabrese2017flow, gustavsson2017finding].
We present here results from navigating one static snapshot of 2D turbulence (for time-dependent flows see [biferale2019zermelo]). In Fig. 1 we show a sketch of the setup. Our goal is to find trajectories (if they exist) that join a region close to the starting point $\mathbf{x}_A$ with a target close to $\mathbf{x}_B$ in the shortest time, supposing that the vessels obey the following equations of motion:
$$\dot{\mathbf{X}}_t = \mathbf{u}(\mathbf{X}_t, t) + \mathbf{U}(t), \qquad (1)$$
where $\mathbf{u}$ is the velocity of the underlying 2D advecting flow, and $\mathbf{U}$ is the control slip velocity of the vessel, with fixed intensity $V_s$ and varying steering direction: $\mathbf{U} = V_s(\cos\theta, \sin\theta)$, where the angle $\theta(t)$ is evaluated along the trajectory $\mathbf{X}_t$. We introduce a dimensionless slip velocity by normalizing with the maximum velocity of the underlying flow: $\tilde{V}_s = V_s/\max|\mathbf{u}|$. Zermelo's problem reduces to optimizing the steering direction $\theta$ in order to reach the target [zermelo1931]. For time-independent flows, optimal navigation (ON) control theory gives a general solution [techy2011optimal, mannarini2016visir]. Assuming that the angle is controlled continuously in time, the optimal steering angle must satisfy the following time evolution:
$$\dot{\theta} = A_{21}\sin^2\theta - A_{12}\cos^2\theta + (A_{11} - A_{22})\sin\theta\cos\theta, \qquad (2)$$
where $A_{ij} = \partial_j u_i$ is the fluid-gradient matrix, evaluated along the agent trajectory obtained from Eq. (1). The set of equations (1-2) may lead to chaotic dynamics even for time-independent flows in two spatial dimensions. Due to the sensitivity of chaotic systems to small perturbations, the ON approach becomes useless for many practical applications.
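As an illustration, Eqs. (1-2) can be integrated jointly with a simple explicit scheme. The sketch below assumes user-supplied callables `u(x)` and `grad_u(x)` for the flow velocity and its gradient matrix (hypothetical helper signatures), and uses forward Euler rather than whatever integrator was actually employed:

```python
import numpy as np

def navigate_on(u, grad_u, x0, theta0, vs, dt, n_steps):
    """Integrate the vessel dynamics dX/dt = u(X) + vs*(cos th, sin th) of Eq. (1)
    together with the ON steering equation for theta, Eq. (2).
    `u(x)` returns the 2D flow velocity at x; `grad_u(x)` the 2x2 matrix A_ij = d_j u_i.
    Forward-Euler sketch for a time-independent flow, not a production solver."""
    x, th = np.array(x0, dtype=float), float(theta0)
    traj = [x.copy()]
    for _ in range(n_steps):
        A = grad_u(x)
        # ON steering equation, Eq. (2):
        dth = (A[1, 0] * np.sin(th)**2 - A[0, 1] * np.cos(th)**2
               + (A[0, 0] - A[1, 1]) * np.sin(th) * np.cos(th))
        # Vessel dynamics, Eq. (1):
        x = x + dt * (u(x) + vs * np.array([np.cos(th), np.sin(th)]))
        th = th + dt * dth
        traj.append(x.copy())
    return np.array(traj)
```

In a quiescent flow ($\mathbf{u} = 0$) the steering angle stays constant and the vessel simply moves in a straight line at speed $V_s$, which provides a quick sanity check of the scheme.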
RL applications [sutton2018] are based on the idea that an optimal solution can be obtained by learning from continuous interactions of an agent with its environment. The agent interacts with the environment by sampling its states $s_t$, performing actions $a_t$ and collecting rewards $r_t$. In our case the vessel acts as the agent and the two-dimensional flow as the environment. In the approach used here, actions are chosen randomly with a probability given by the policy $\pi(a|s)$, given the current flow-state $s$. The goal is to find the optimal policy $\pi^*(a|s)$ that maximizes the total reward $R_{tot} = \sum_t r_t$, accumulated along one episode. For the purpose of finding the fastest trajectory we used an instantaneous reward $r_t$ composed of three different terms:
$$r_t = -\Delta t + T_f(\mathbf{X}_{t-\Delta t}) - T_f(\mathbf{X}_t), \qquad (3)$$
where $T_f(\mathbf{X}) = |\mathbf{X} - \mathbf{x}_B|/V_s$ denotes the free-flight time to the target from position $\mathbf{X}$.
The first term accumulates a large penalty if it takes long for the agent to reach the end point, while the second and third terms describe the change in free-flight time to the target, i.e. the difference in the time it would take, if the flow is neglected, to reach the target from the locations at this and the previous state change [art1]. It follows that the total reward is proportional to minus the actual time taken by the trajectory to reach the target,
neglecting a constant term that does not depend on the training; see [biferale2019zermelo] and Fig. 1 for the precise definition of flow-states and agent-actions. An episode is finalized when the trajectory reaches a circle of given radius around the target. In order to converge to robust policies, each episode is started with a uniformly random position within a given radius
from the starting point. To estimate the expected total future reward we follow the one-step actor-critic method [sutton2018], based on a gradient ascent in the policy parametrization. In the second part of our work, we modify the navigation setup by allowing the unmanned vessel to turn off its 'engine', letting it navigate by just following the flow, without its own propulsion speed. In this framework, navigation can be optimized with respect to minimal energy consumption rather than time, or with respect to a trade-off between energy consumption and time. To repeat the training of the optimal policy taking both aspects, energy and time, into account, we modified our RL scheme as follows. First, we added a new action that turns off the vessel propulsion, i.e. letting $V_s = 0$, in addition to the eight possible navigation angles considered before. Second, we modified the reward function in order to weigh the relative importance of navigation time and energy consumption. This was obtained by adding a new term, describing the time $\Delta t_E$ during which the vessel consumes energy, to the instantaneous reward in Eq. (3) as follows:
$$r_t = -\Delta t - \beta\,\Delta t_E + T_f(\mathbf{X}_{t-\Delta t}) - T_f(\mathbf{X}_t). \qquad (4)$$
The total reward becomes proportional to minus the sum of the two time contributions,
$$R_{tot} \simeq -(T_{tot} + \beta\, T_E). \qquad (5)$$
The time $T_E$ counts the time the vessel navigates with self-propulsion on, giving the total time during which energy is spent. The factor $\beta$ weighs the relative importance of energy-consumption time and total navigation time in the optimisation. We have repeated the training of the RL optimal policy with the new combined time-energy goal in the time-independent flow shown in Fig. 1, as well as in a more realistic time-dependent 2D turbulent flow. The latter was obtained by solving the incompressible Navier-Stokes equations on a periodic square domain; see [biferale2019zermelo] for the domain size, resolution and further details about the flow.
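To make the training loop concrete, here is a minimal tabular sketch of the one-step actor-critic update combined with the energy-weighted shaped reward. The state/action discretization, step sizes, the value of $\beta$ and all helper names are illustrative assumptions, not the implementation used in this work:

```python
import numpy as np

n_states, n_actions = 64, 9   # illustrative: discretized flow-states, 8 angles + 'engine off'
prefs = np.zeros((n_states, n_actions))  # softmax policy preferences (the actor)
V = np.zeros(n_states)                   # state-value estimates (the critic)
alpha_pi, alpha_v, gamma, beta = 0.1, 0.1, 1.0, 0.5

def free_flight_time(x, x_target, vs):
    # Time to the target in a straight line at speed vs, ignoring the flow.
    return np.linalg.norm(np.asarray(x) - np.asarray(x_target)) / vs

def shaped_reward(x_prev, x_now, x_target, vs, dt, engine_on):
    # Eq. (4): running time penalty, energy penalty while the engine is on,
    # plus the change in free-flight time. The shaping terms telescope, so the
    # total reward is -(T_tot + beta*T_E) up to a policy-independent constant.
    dt_E = dt if engine_on else 0.0
    return (-dt - beta * dt_E
            + free_flight_time(x_prev, x_target, vs)
            - free_flight_time(x_now, x_target, vs))

def policy(s):
    # Softmax over action preferences for state s.
    p = np.exp(prefs[s] - prefs[s].max())
    return p / p.sum()

def actor_critic_step(s, a, r, s_next, done):
    # One-step actor-critic (Sutton & Barto): the TD error drives both updates.
    target = r if done else r + gamma * V[s_next]
    delta = target - V[s]
    V[s] += alpha_v * delta
    grad_log_pi = -policy(s)
    grad_log_pi[a] += 1.0            # d log pi(a|s) / d prefs[s, :]
    prefs[s] += alpha_pi * delta * grad_log_pi
    return delta
```

An episode would repeatedly sample an action from `policy(s)`, advance the dynamics of Eq. (1), compute `shaped_reward`, and call `actor_critic_step`, until the trajectory enters the target circle.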
3 Results (time-independent flows)
3.1 Shortest time, no energy constraints
In the right part of Fig. 1 we show the main results comparing the RL and ON approaches [biferale2019zermelo]. The minimum time taken by the best trajectory to reach the target is of the same order for the two methods. The most important difference between RL and ON lies in their robustness, as seen by plotting the spatial density of trajectories in the right part of Fig. 1 for the optimal policies of ON and RL, for three values of the slip velocity $\tilde{V}_s$.
We observe that the RL trajectories (blue coloured area) form a much more coherent cloud in space, while the ON trajectories (red coloured area) fill space almost uniformly. Moreover, for small navigation velocities, many trajectories in the ON system approach regular attractors, as visible from the high-concentration regions. The rightmost histograms in Fig. 1 show a comparison between the probabilities of arrival times for the trajectories illustrated in the two-dimensional domain, providing a quantitative estimation of the better robustness of RL compared to ON.
Other RL algorithms, such as Q-learning [sutton2018], could also be implemented and compared with path-search algorithms such as $A^*$, which is often used in computer science [russell2002artificial, lerner2009algorithmics].
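For reference, a compact sketch of the $A^*$ path-search baseline on a discretized cost map follows; this is the generic textbook version [russell2002artificial] with a Manhattan-distance heuristic, not a scheme tailored to the turbulent-navigation setup:

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path on a 4-connected grid of non-negative cell costs (>= 1),
    counting the cost of each cell entered. Returns the minimal path cost."""
    n, m = len(grid), len(grid[0])
    # Manhattan distance: admissible as long as every step costs at least 1.
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start)]    # (f = g + h, g, position)
    best = {start: 0}                    # cheapest known cost-to-reach
    while frontier:
        _, g, p = heapq.heappop(frontier)
        if p == goal:
            return g
        if g > best.get(p, float('inf')):
            continue                     # stale entry, already improved
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            q = (p[0] + dx, p[1] + dy)
            if 0 <= q[0] < n and 0 <= q[1] < m:
                g2 = g + grid[q[0]][q[1]]
                if g2 < best.get(q, float('inf')):
                    best[q] = g2
                    heapq.heappush(frontier, (g2 + h(q), g2, q))
    return None                          # goal unreachable
```

On a cost map derived from local travel times, such a search yields deterministic shortest paths, in contrast to the stochastic policies produced by RL.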
3.2 Minimal energy consumption
In this section we present results on the simultaneous optimisation of minimal travel time and energy consumption. To begin with, we consider the same time-independent flow as in the previous section.
In Fig. 2 we show three sets of trajectories following three policies obtained by optimising the reward (5) for $\beta = 0$ and two increasingly larger values of $\beta$. The trajectories are superposed on the flow velocity amplitude (left panel) and on the Okubo-Weiss parameter [okubo1970horizontal, weiss1991dynamics] (right panel), defined as
$$OW = s_n^2 + s_s^2 - \omega^2, \qquad (6)$$
with normal strain $s_n = A_{11} - A_{22}$, shear strain $s_s = A_{12} + A_{21}$ and vorticity $\omega = A_{21} - A_{12}$.
Here $A_{ij}$ is the fluid-gradient matrix as defined after Eq. (2). The decomposition in Eq. (6) is particularly useful to distinguish strain-dominated ($OW > 0$, orange-red colors) from vortex-dominated ($OW < 0$, green-blue colors) regions of the flow. Colored regions of the trajectories show where the chosen action keeps the propulsion on, and white regions show where the propulsion is off. When $\beta = 0$, the energy consumption does not matter for the reward, and the only difference compared to the case in the previous section is that the policy can now choose one additional action: the zero self-propulsion speed. However, as seen from Fig. 2, this action is rarely chosen when $\beta = 0$, and the vessel navigates with an essentially constant self-propelling velocity. On the other hand, when the energy-dependent reward is activated, for $\beta > 0$, we observe a difference in the optimal path followed by the vessel. This is because it has to balance the penalties from the total navigation time and from the time with self-propulsion. When $\beta$ becomes larger, this difference in the optimal path becomes more significant. For the largest $\beta$ we observe trajectories that are much longer and dominated by passive navigation, just following the flow.
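The Okubo-Weiss field of Eq. (6) is straightforward to evaluate from gridded velocities. Below is a sketch using central differences on a doubly periodic domain, assuming a uniform grid spacing `dx` (the actual analysis may well use spectral derivatives instead):

```python
import numpy as np

def okubo_weiss(u, v, dx):
    """Okubo-Weiss parameter OW = s_n^2 + s_s^2 - omega^2 on a periodic 2D grid,
    with normal strain s_n, shear strain s_s and vorticity omega.
    OW > 0: strain-dominated regions; OW < 0: vortex-dominated regions.
    Arrays are indexed as [y, x]; central differences with periodic wrapping."""
    ddx = lambda f: (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2 * dx)
    ddy = lambda f: (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2 * dx)
    s_n = ddx(u) - ddy(v)      # normal strain
    s_s = ddx(v) + ddy(u)      # shear strain
    omega = ddx(v) - ddy(u)    # vorticity
    return s_n**2 + s_s**2 - omega**2
```

A Taylor-Green cellular flow gives a quick check: the field is strain-dominated at the cell corners and vortex-dominated at the vortex cores, and $OW$ averages to zero over the periodic box.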
To make a more accurate comparison of the arrival time at the target, $T_{tot}$, and the total active-navigation time, $T_E$, for the different values of $\beta$, we show in Fig. 3 the evolution of these two quantities as functions of the episode number during the training of the three different policies. The total reward (5) is a linear combination of these two terms, where $T_E$ is multiplied by the factor $\beta$. We first observe that the training converges after around 10k episodes. Second, we see that for $\beta = 0$, $T_{tot}$ and $T_E$ lie close to each other for all episodes, suggesting that the optimal policy never found a state where it is better to navigate with zero propulsion in order to reach the target faster. For values of $\beta$ larger than zero, the policies found end up with $T_E$ below the value of the $\beta = 0$ case, with the consequence of saving energy even though the time to reach the target is longer.
A final result for this case of time-independent flow is shown in Fig. 4, where we present the Probability Density Functions (PDFs) of the total navigation time $T_{tot}$ (main panel) and of the power-on navigation time $T_E$ (inset). The distributions are sampled over 40k trajectories with initial conditions close to $\mathbf{x}_A$ that follow the optimal policies obtained for five values of $\beta$.
These PDFs show that for $\beta = 0$, both times are of the order of the free-flight time $T_f = |\mathbf{x}_B - \mathbf{x}_A|/V_s$, i.e. the time needed to go from $\mathbf{x}_A$ to $\mathbf{x}_B$ with a fixed self-propulsion speed and without flow. For larger $\beta$, the total navigation time increases while the power-on time decreases monotonically. Increasing $\beta$ further, we do not observe any additional reduction of $T_E$; the PDF only becomes more peaked around the same minimal value. This result suggests that we have found the minimal amount of propulsion required for the vessel to be able to navigate to the target.
4 Results (time-dependent flow)
In this last section we consider the same optimal navigation problem as in the previous section, but with a more realistic time-dependent flow. For this case we adopted a small self-propulsion velocity, amounting to only a small fraction of the maximal flow-velocity amplitude. In Fig. 5 we present, as in the previous section, the PDFs of both $T_{tot}$ (solid lines, full symbols) and $T_E$ (dashed lines, empty symbols), obtained over 60k different trajectories following the converged optimal policies for $\beta = 0$ and two larger values of $\beta$. These results show that, as in the time-independent case, for $\beta > 0$ RL finds solutions that spend less energy at the cost of a longer total navigation time compared to the solution for $\beta = 0$. Let us stress that a small fraction of trajectories was not able to reach the final target, as indicated by the failure bars reported in Fig. 5.
Finally, Fig. 6 shows six snapshots at different times during the evolution of two different sets of trajectories that follow the optimal policies obtained for $\beta = 0$ and for a larger value of $\beta$. The trajectories are superposed on the time-dependent flow velocity. As in Fig. 2, white regions of the trajectories show where the vessel is navigating with zero self-propulsion speed. We remark that even when $\beta = 0$, the optimal policy found chooses the $V_s = 0$ action in the region close to the target. As a result, the PDFs of the total navigation time and of the power-on time are not identical even for the case $\beta = 0$. This is a clear example of the fact that the resulting RL policy benefits from the added control when the set of allowed actions is enlarged and that, in our particular application, passively moving with the flow can be better than navigating when the flow is already transporting the vessel in the right direction, independently of the requirement to minimize energy.
5 Conclusions

We have first discussed a systematic investigation of Zermelo's time-optimal navigation problem in a realistic 2D turbulent flow, comparing the RL and ON approaches [biferale2019zermelo]. We showed that RL stochastic algorithms are key to bypassing the unavoidable instabilities given by the chaoticity of the environment and/or by the strong sensitivity of ON to the initial conditions in the presence of non-linear flow configurations. RL methods also offer a wider flexibility, being applicable to energy-minimization problems and to situations where the flow evolution is known only in a statistical sense, as in partially observable Markov processes. Let us stress that, instead of starting from a completely random policy as we did here, it is also possible to implement RL to improve a-priori policies designed for a particular problem. For example, one can use an RL approach to optimize an initially trivial policy in which the navigation angle is always selected as the action that points most directly toward the target. In the second part of this work, we further analyzed the more complex problem in which the optimization of the total navigation time is balanced against the energy consumption required to reach the target. Also in this case, we found that RL is able to converge to non-trivial solutions in which the vessel navigates most of the time as a passive object transported by the flow, with only a minimum number of corrections to its trajectory required to reach the final target.