## I Introduction

For robots to truly become a viable option for unmanned tasks such as search-and-rescue operations and unknown-environment exploration, autonomous and safe trajectory planning must be treated as a fundamental goal during algorithmic design and systems integration. Necessary requirements for robots to autonomously perform such complex tasks include, but are not limited to, online low-level feedback control, localization, vision, motion planning, high-level reasoning, and reasoning under uncertainty. Currently, all of these individual components are well developed, but integrating multiple pieces into a single system, especially for environments that are not well known, has proven to be a daunting challenge because of issues related to robustness [robustness]. For example, simultaneous planning, localization, and mapping (SPLAM, or "Active SLAM") is an active area of research that attempts to satisfy some of these requirements. The main challenges in Active SLAM are planning under uncertainty in an acceptable amount of time, bridging the gap between raw sensor data and semantic understanding of the environment ("semantic mapping"), and ensuring that the overall system is robust enough for a complex platform. Because of these challenges, many works address only a subset of Active SLAM, namely simultaneous localization and planning [slap], or address SLAM alone, such as [orb-slam, jones_visual-inertial_2011, lsd-slam]. Other Active SLAM works use very simple research platforms [ActiveSLAM2] or were only tested in simulation [ActiveSLAM].

There are two common frameworks for planning under uncertainty: explicitly modeling the posterior distributions in a Bayesian setting [prob_planning], or using a partially observable Markov decision process (POMDP) [pomdp]. However, the Bayesian setting is only computationally tractable for the simplest cases (e.g., Gaussian prior and Gaussian observations), while POMDPs suffer from the curse of history and dimensionality and do not sufficiently model an agent's future intent [rnn_uncertainty]. Recent work addressing Active SLAM using POMDPs also either lacks mapping capability [slap] or requires an inordinate amount of computational resources [pomdpThesis]. As a consequence, there is a need for new path planning architectures for unknown and uncertain environments that address the concerns of belief space planning or provide alternative methods that can be ubiquitously applied to most robotic systems.

To resolve the above issues, we propose a multifaceted approach that uses model predictive control (MPC), SLAM (in this paper, "SLAM" includes visual-inertial odometry with sparse mapping in addition to algorithms that produce denser maps), and recurrent neural network (RNN) algorithms to address the problem of Active SLAM and account for uncertainties in both current and future robot positions. Our architecture is based on MPC because MPC operates online, continually satisfies the dynamic state of the robot over a prediction horizon, and naturally offsets estimation errors [relativefootstepMPC]. The MPC is augmented to be "risk-averse" by considering uncertainty in position from timestep $k$ to $k+N$. This uncertainty is inferred by an RNN; RNNs have been demonstrated to handle time-series data, to account for temporal factors that directly affect predictions, and to show promise in modeling complex interactions between agents and their environment [rnn_uncertainty, rnn_1], and they have previously been applied to MPC, though for industrial processes [lanzetti2019recurrent]. Our RNN model is trained on the positional covariance estimates of a visual-inertial odometry (VIO) system taking readings from an inertial measurement unit (IMU) and camera data as input.
By considering the current and future positional uncertainties in the MPC optimization problem, our method can solve for better control actions at each timestep. To facilitate obstacle avoidance, we additionally incorporate an object detection pipeline that uses a deep convolutional neural network (CNN) to recognize obstacles and a feature detector with RGB and depth images to estimate the distance and size of nearby obstacles. We show that, by using a trained RNN model to infer positional uncertainties at future timesteps, a robot can demonstrate more evasive behavior to better guarantee collision avoidance without becoming too conservative. Our linearized path planning framework is applied and tested on a complex quadruped robot, which demonstrates our algorithm's robustness and computational efficiency, showing the feasibility of extending our work to a wide range of robotic platforms.

### Summary of Our Contributions

(1) We evaluate the feasibility of an online end-to-end path planner that unifies MPC, SLAM, RNN, and an object detector using CNNs to generate paths for unknown and uncertain environments using a non-linear programming solver.

(2) We verify that our quadrupedal robot, ALPHRED [hooks2020ALPHRED], avoids collisions and computes a shorter trajectory while maintaining safety using our method as compared to a more conservative and naive planner.

(3) We propose a novel use of RNNs to estimate positional uncertainties at all future timesteps of the MPC’s prediction horizon.

(4) We integrate all components into a high-fidelity simulation using the quadruped dynamics of ALPHRED (Figure 4). Additionally, we test all components individually either online or offline using hardware (Figure 7).

In the following sections, we will explicitly state whether we refer to simulation or hardware data. The main difference between the model of ALPHRED in simulation versus hardware is that in simulation the model is equipped with an idealized RGB + dense depth Microsoft Kinect camera, while the actual hardware is equipped with Intel's RealSense D435i. The idealized camera publishes both RGB and dense depth images at arbitrarily fast rates, while the RealSense publishes RGB images at 30 Hz and dense depth images at only 2 Hz.

## II Methods

In this section, we provide an overview of our architecture and how our risk-averse MPC propagates uncertainty through its finite time horizon trajectory. In Section II-A, we provide a high-level overview of our path planning algorithm. In Section II-B, we describe our MPC’s mathematical framework for planning and tracking. In Section II-C, we describe the constraints used in our cost functions. In Section II-D, we describe our object detection system using CNNs and keypoint detection on RGB and depth images, and finally in Section II-E, we detail our RNN training and inference procedures (utilizing our SLAM algorithm) for predicting future positional uncertainties used to create collision boundaries.

### II-A Architecture Overview

Our path planner is formulated as an MPC optimization problem using a non-linear programming solver [Casadi]. We divide our MPC framework into a planning phase and a tracking phase, with different cost functions for each. In the planning phase, the goal of our MPC is to create waypoints that move the robot closer to a desired position while detecting and avoiding obstacles through measurement updates. Specifically, the object detection algorithm feeds the MPC with the position and size of surrounding obstacles, while the positional uncertainty of the robot at all future timesteps in the MPC prediction horizon is inferred by an RNN. In the tracking phase, we discretize the generated path into segments of fixed temporal length using a cubic polynomial to create a smooth reference trajectory. MPC is used to track this reference trajectory and outputs our desired planar velocity values ($v_x$ and $v_y$). These velocity values are used by our motion tracking controller to generate stable footstep trajectories. Note that dividing our MPC formulation into two phases lowers computation time and allows for separate control over waypoint generation and the creation of custom reference trajectories if desired (see Algorithm 1, Fig. 1, or our accompanying video at https://youtu.be/td4K55Tj-U8 for a general overview of our path planning architecture).
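The two-phase flow above can be sketched as a minimal loop. Note that `plan_waypoints`, `fit_reference`, and `track` are hypothetical stand-ins for the planning-phase solver, the cubic-interpolation step, and the tracking-phase MPC, not the paper's implementation:

```python
# Hypothetical sketch of the two-phase MPC architecture described above.
# Straight-line/linear stubs stand in for the NLP solves and cubic fitting.

def plan_waypoints(state, goal, n_steps):
    """Planning phase: waypoints toward the goal (straight-line stub solver)."""
    return [(state[0] + (goal[0] - state[0]) * i / n_steps,
             state[1] + (goal[1] - state[1]) * i / n_steps)
            for i in range(1, n_steps + 1)]

def fit_reference(waypoints, points_per_segment):
    """Discretize waypoints into a denser reference trajectory (linear stub)."""
    ref, prev = [], waypoints[0]
    for wp in waypoints[1:]:
        for j in range(1, points_per_segment + 1):
            a = j / points_per_segment
            ref.append((prev[0] + a * (wp[0] - prev[0]),
                        prev[1] + a * (wp[1] - prev[1])))
        prev = wp
    return ref

def track(state, ref_point, dt):
    """Tracking phase: output planar velocities toward the next reference point."""
    return ((ref_point[0] - state[0]) / dt, (ref_point[1] - state[1]) / dt)

state, goal, dt = (0.0, 0.0), (8.0, 0.0), 0.1
waypoints = plan_waypoints(state, goal, n_steps=5)
reference = fit_reference(waypoints, points_per_segment=4)
vx, vy = track(state, reference[0], dt)  # velocities sent to the motion controller
```

In this sketch, as in the architecture, planning produces coarse waypoints, the reference step densifies them, and tracking turns the reference into velocity commands.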

### II-B General MPC Formulation

#### II-B1 Planning Phase

MPC in the planning phase has the following time-invariant linear discretized model:

$$x_{k+1} = A x_k + B u_k \qquad (1)$$

where $x_k$ represents our state variables (planar waypoint position) and $u_k$ represents our control variables (planar velocity). We also initialize our state and control variables to zero before run-time.

Because we have a motion tracking controller to incorporate robot dynamics (see Section III-A), our $A$ and $B$ matrices can assume a simple point-mass model:

$$A = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad B = \begin{bmatrix} \Delta t & 0 \\ 0 & \Delta t \end{bmatrix}$$

where $\Delta t$ is the time between taking proprioceptive and exteroceptive sensor measurements (e.g., RGB-D images and odometer readings), and the model includes an additive non-unit-variance random Gaussian noise $w \sim \mathcal{N}(0, \sigma_v^2)$, where $\sigma_v$ represents the standard deviation of planar velocity.
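As a sketch under the assumption that the noise enters additively per axis, the point-mass model can be simulated as follows (the `step` helper and the default noise-free setting are illustrative, not the paper's code):

```python
import random

# Sketch of the discretized point-mass model x_{k+1} = A x_k + B u_k + w,
# with A = I, B = dt * I, and per-axis noise w ~ N(0, sigma_v^2) (assumed form).

def step(x, u, dt, sigma_v=0.0, rng=random):
    """Propagate the planar state one timestep under Gaussian velocity noise."""
    return [x[0] + dt * u[0] + rng.gauss(0.0, sigma_v),
            x[1] + dt * u[1] + rng.gauss(0.0, sigma_v)]

# Noise-free propagation: moving at (1.0, 0.5) m/s for 0.1 s.
x_next = step([0.0, 0.0], [1.0, 0.5], dt=0.1)
```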

The goal of our cost function in the planning phase (equation (2)) is to find the optimal control values that minimize the distance from the current and predicted states ($\tilde{x}_k, \dots, \tilde{x}_{k+N}$) to the goal state ($\tilde{x}_g$), where the current state is given by the results of localization. Note that we use $\tilde{x}$ instead of $x$ in our cost function to represent the inclusion of a slack decision variable $\epsilon$ (the slack variable has no role in our discretized model equation, but does affect the cost function through $Q$; see II-C), so that $\tilde{x} = [x^\top, \epsilon]^\top$.

$$\min_{u,\,\epsilon} \; \sum_{i=k}^{k+N} (\tilde{x}_i - \tilde{x}_g)^\top Q \, (\tilde{x}_i - \tilde{x}_g) + u_i^\top R \, u_i \qquad (2)$$

s.t. constraints i, ii, iii, v (see Table I)
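To illustrate the shape of the planning objective (omitting the slack term), the following toy rolls the point-mass model forward and scores quadratic goal distance plus control effort; the weights `q` and `r` and the candidate-sampling "solver" are simplifications, as the paper uses a non-linear programming solver instead:

```python
# Toy evaluation of a planning-phase-style cost over a short horizon.
# Scalar weights q, r stand in for the Q and R matrices (assumed values).

def horizon_cost(x0, u_seq, goal, dt, q=1.0, r=0.1):
    """Sum of quadratic goal-distance and control-effort terms over the horizon."""
    x, cost = list(x0), 0.0
    for u in u_seq:
        x = [x[0] + dt * u[0], x[1] + dt * u[1]]   # model (1) rollout
        cost += q * ((x[0] - goal[0]) ** 2 + (x[1] - goal[1]) ** 2)
        cost += r * (u[0] ** 2 + u[1] ** 2)
    return cost

# Pick the best constant control among a few candidates (stand-in for the NLP solve).
candidates = [[(v, 0.0)] * 5 for v in (0.0, 0.5, 1.0, 1.5)]
best = min(candidates, key=lambda u: horizon_cost((0.0, 0.0), u, (8.0, 0.0), 0.5))
```

With the goal far away, the highest candidate velocity minimizes the cost, as the goal-distance term dominates the control penalty.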

#### II-B2 Tracking Phase

MPC in the tracking phase has the following time-invariant linear discretized model:

$$x_{k+1} = A x_k + B u_k \qquad (3)$$

where $x_k = [x, y, \psi]^\top$ represents our state variables (desired planar position and yaw, or heading, angle), and $u_k = [v_x, v_y, \dot{\psi}]^\top$ represents our control variables (desired planar velocity and yaw rate). Matrices $A$ and $B$ are the same as shown in (1), except for an additional row/column for yaw and yaw rate.

The goal of our cost function in the tracking phase (equation (4)) is to output desired planar velocity and yaw rate ($v_x$, $v_y$, and $\dot{\psi}$) values that follow a reference trajectory.

$$\min_{u} \; \sum_{i=k}^{k+N} (x_i - x_i^{ref})^\top Q \, (x_i - x_i^{ref}) + u_i^\top R \, u_i \qquad (4)$$

s.t. constraints i, ii, iii, iv (see Table I)

The reference states $x^{ref}$ and $y^{ref}$ are obtained by cubic interpolation (equation (5)) with end points specified by the MPC planning phase (the reason we discretize to a finer resolution than the planning waypoints is to ensure there are enough reference points for MPC to "look ahead"):

$$x^{ref}(t) = a_3 t^3 + a_2 t^2 + a_1 t + a_0 \qquad (5)$$
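The cubic in (5) can be sketched as a Hermite segment between two waypoints; the endpoint-velocity parameterization and helper name are assumptions for illustration:

```python
# Sketch of equation (5): a cubic x(t) = a3*t^3 + a2*t^2 + a1*t + a0 fitted so
# that x(0)=p0, x(T)=p1, x'(0)=v0, x'(T)=v1 (Hermite form assumed).

def cubic_segment(p0, p1, v0, v1, duration, n_points):
    """Sample n_points reference values along one cubic segment."""
    T = duration
    a0, a1 = p0, v0
    a2 = (3 * (p1 - p0) / T ** 2) - (2 * v0 + v1) / T
    a3 = (-2 * (p1 - p0) / T ** 3) + (v0 + v1) / T ** 2
    return [a3 * t ** 3 + a2 * t ** 2 + a1 * t + a0
            for t in (T * i / (n_points - 1) for i in range(n_points))]

# Zero endpoint velocities give a smooth ease-in/ease-out reference.
ref = cubic_segment(p0=0.0, p1=1.0, v0=0.0, v1=0.0, duration=2.0, n_points=5)
```

The sampled points pass exactly through both waypoints, which is what makes the densified trajectory suitable for the tracking MPC to "look ahead" along.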

### II-C MPC Constraints

#### II-C1 Constraints i – iv

Constraint i represents multiple shooting constraints, which facilitate solving non-linear programs [MS_constraints]. The limits on state variables (i.e., map constraints) and control variables (limits on velocity) are represented by Constraint ii (note that the slack decision variable should be set as $\epsilon \geq 0$). If there is apparent jerk during path planning, it may be necessary to include Constraint iii, where $a_{max}$ represents the limit on acceleration and $v$ represents the planar velocity ($v_x$ and $v_y$). Orienting the robot along the planned trajectory can be achieved using Constraint iv by setting the limit on $v_y^{body}$ to be much smaller than the limit on $v_x^{body}$ (which points directly along the path) in the body frame. Because MPC outputs velocities in the inertial reference frame, a rotation matrix $R(\psi)$ is required to transform these velocities into the correct frame of reference.
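The body-frame transformation behind Constraint iv can be sketched as follows; the limit value `vy_body_max` and the helper names are illustrative:

```python
import math

# Sketch of Constraint iv: rotate inertial planar velocities into the body
# frame via R(psi)^T and bound the lateral (body-y) component.

def inertial_to_body(vx, vy, psi):
    """Return (v_x^body, v_y^body) given inertial velocities and yaw psi."""
    c, s = math.cos(psi), math.sin(psi)
    return c * vx + s * vy, -s * vx + c * vy

def satisfies_heading_constraint(vx, vy, psi, vy_body_max=0.05):
    """Lateral body-frame velocity kept much smaller than the forward limit."""
    _, vy_body = inertial_to_body(vx, vy, psi)
    return abs(vy_body) <= vy_body_max
```

For example, a robot heading along +y (psi = pi/2) moving along +y in the inertial frame has purely forward body velocity and satisfies the constraint, while the same inertial motion with psi = 0 is purely lateral and violates it.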

#### II-C2 Constraint v – Collision Boundary with Slack Variable

Our obstacle avoidance constraints are given by Constraint v, which ensures that the collision boundary of the robot does not collide with detected obstacles (note that because these constraints are updated at every timestep $k$, moving obstacles can also be considered). $x_{obs}^j$ and $y_{obs}^j$ represent the $x$ and $y$ center positions of all obstacles detected by the robot ($j = 1, \dots, n$, where $n$ is the number of obstacles currently in range). $x_i$ and $y_i$ represent the $x$ and $y$ positions of the robot from timestep $k$ to timestep $k+N$ (future positions are obtained from the MPC solution). $r_{rob}$ represents the radius of the collision boundary of the robot, and $r_{obs}^j$ represents the radius of the collision boundary of an obstacle. The collision boundary radius of the robot is calculated by taking the major axis of the positional covariance uncertainty ellipse estimated by the RNN and adding it to the radius, or size, of the robot itself (thus, we assume a more conservative collision boundary which, combined with the slack variable described below, provides some tolerance to ensure the planner does not fail while at the same time lowering the probability of collision). Note that from timestep $k+1$ to timestep $k+N$, this covariance is predicted by our RNN (see Section II-E).

Without a slack variable, and because of sensor measurement noise, the measured state of the robot may suddenly find itself very near or slightly inside the collision boundary of an obstacle and cause the solver to fail. To accommodate this issue, Constraint v includes a slack variable $\epsilon$ that allows some degree of constraint violation in our optimization problem. In other words, we effectively separate the covariance into a constraint and a slack variable, where we tune the confidence attributed to the posterior estimate and break it into a nominal estimate and a controllable slack parameter. Specifically, the slack can be tuned through the weighting matrix $Q$, where a high cost for $\epsilon$ will ensure that the majority of solutions do not violate Constraint v, while lower values allow for greater violation (the cost on the slack variable is largely dependent on user experience during implementation).
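A minimal check of Constraint v can be sketched as below; the 0.7 m robot radius matches the value in Section III-C, while the 0.2 m uncertainty inflation, obstacle radius, and slack values are assumed for illustration:

```python
import math

# Sketch of Constraint v: the distance from a predicted robot position to an
# obstacle center must exceed the summed collision radii, minus a slack term
# that tolerates small (heavily penalized) violations.

def collision_ok(robot_xy, obs_xy, r_robot, r_obs, slack=0.0):
    """True if the (possibly slack-relaxed) collision constraint holds."""
    d = math.hypot(robot_xy[0] - obs_xy[0], robot_xy[1] - obs_xy[1])
    return d >= r_robot + r_obs - slack

# Robot radius inflated by an RNN-predicted uncertainty (values assumed):
r_robot = 0.7 + 0.2   # physical radius + major axis of the uncertainty ellipse
ok = collision_ok((2.0, 0.0), (3.0, 0.0), r_robot, r_obs=0.3)              # d=1.0 < 1.2
ok_slack = collision_ok((2.0, 0.0), (3.0, 0.0), r_robot, 0.3, slack=0.25)  # relaxed
```

Here the hard constraint fails (distance 1.0 m versus a 1.2 m combined radius) but the slack-relaxed version holds, mirroring how the solver survives small measurement-noise-induced violations.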

No. | Constraint
---|---
i | $x_{i+1} = A x_i + B u_i$
ii | $x_{min} \leq x_i \leq x_{max}$, $\; u_{min} \leq u_i \leq u_{max}$, $\; \epsilon \geq 0$
iii | $\lvert v_{i+1} - v_i \rvert / \Delta t \leq a_{max}$
iv | $\lvert v_y^{body} \rvert \leq v_{y,max}^{body}$, with $[v_x^{body}, v_y^{body}]^\top = R(\psi)^\top [v_x, v_y]^\top$
v | $\sqrt{(x_i - x_{obs}^j)^2 + (y_i - y_{obs}^j)^2} \geq r_{rob} + r_{obs}^j - \epsilon$

### II-D Obstacle Detection

#### II-D1 Convolutional Neural Networks

To support the avoidance of incoming obstacles as our robot traverses the environment, we use a custom-trained CNN model for real-time object detection. More specifically, using Redmon et al.'s YOLOv3 [yolov3] fast CNN architecture because of its maturity (although other methods could be used, such as [xiao-semantic-mapping]), we trained two custom models: one localized brown boxes within an RGB camera frame using 1,500 hand-labeled images, and the other localized black boxes in a mostly empty Gazebo environment using 300 images. Weights were initialized using YOLOv3's default weights and trained for 5,200 epochs using stochastic gradient descent with a batch size of 64, momentum of 0.9, and a learning rate of 0.001 for both models. We validated our model on labeled data withheld from the training data, and we verified empirically that our object detector could successfully draw tight bounding boxes around our brown boxes (Fig. 2).

#### II-D2 From Bounding Boxes to 3D Obstacles

For our end-to-end Gazebo simulation, we implemented simple classical feature detection over the simulated RGB and dense depth images to transform the bounding boxes from the object detector into useful 3D obstacles for the motion planner. The scheme described below assumes that obstacles are cubes and that ALPHRED is directly facing all existing boxes. It is executed only once, at the beginning of the simulation. Note that instead of fully addressing the semantic mapping problem, we use simple placeholder computer vision components designed for our specific test scenarios; for now, we only utilize SLAM as training data for the RNN.

First, ORB features [orb-features] are extracted. Let $(u, v)$ be the pixel coordinates of a single feature and $Z$ be its depth. Then, let $g_{sb}$ be the body-to-spatial transformation, $g_{bc}$ be the camera-to-body transformation, and $K$ be the camera intrinsics matrix. The position of the feature in the spatial frame, $X_s$ (a 3×1 vector), can then be calculated as:

$$X_s = g_{sb} \, g_{bc} \left( Z \, K^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} \right) \qquad (6)$$
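The unprojection in (6) can be sketched as follows, with illustrative pinhole intrinsics and identity transforms standing in for $g_{sb}$ and $g_{bc}$:

```python
# Sketch of equation (6): back-project a pixel (u, v) with depth Z through
# pinhole intrinsics K, then map camera -> body -> spatial with 4x4 transforms.
# The intrinsics values and identity transforms below are illustrative only.

def unproject(u, v, Z, fx, fy, cx, cy):
    """Camera-frame 3D point Z * K^{-1} [u, v, 1]^T for pinhole intrinsics K."""
    return [Z * (u - cx) / fx, Z * (v - cy) / fy, Z]

def transform(g, p):
    """Apply a 4x4 homogeneous transform g to a 3D point p."""
    return [sum(g[i][j] * q for j, q in enumerate(p + [1.0])) for i in range(3)]

I4 = [[1.0, 0, 0, 0], [0, 1.0, 0, 0], [0, 0, 1.0, 0], [0, 0, 0, 1.0]]
p_cam = unproject(u=320.0, v=240.0, Z=2.0, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
X_s = transform(I4, transform(I4, p_cam))  # g_sb and g_bc taken as identity here
```

A pixel at the principal point unprojects straight down the optical axis, so with identity transforms the spatial position is simply (0, 0, Z).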

Next, for each bounding box captured by the CNN object detection process, we determine which features lie in that bounding box. The size of the obstacle is the maximum distance (in meters) between any two of its features. Half of that size then becomes the "radius" of the obstacle's collision boundary (the MPC assumes that collision boundaries are circles).
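The sizing step can be sketched as follows (the helper name is assumed):

```python
import math

# Sketch of the obstacle-sizing step: the obstacle "size" is the maximum
# pairwise distance between the 3D features inside its bounding box, and the
# collision-boundary radius is half of that size.

def obstacle_radius(features):
    """Half the maximum pairwise distance among a box's 3D feature points."""
    size = max(math.dist(a, b) for a in features for b in features)
    return size / 2.0

# Four corner features of a 1 m face: the diagonal sqrt(2) sets the size.
corners = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 1.0, 0.0)]
r = obstacle_radius(corners)
```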

We note that our classical feature detection approach is computationally efficient but also simple (i.e., not as robust). For example, most features exist near corners, where rounding errors could lead to a very different depth value. For the purpose of our end-to-end Gazebo simulation, we discarded any obstacle detections that were more than 5 meters away.

### II-E Recurrent Neural Networks for Learning Uncertainties

The RNN, shown in Figure 3, uses a combination of feedforward layers and simple RNN layers. The hidden layers all use ReLU activations. Of the network's 18 inputs, the first three are the robot's $x$, $y$, and $z$ positions; the remaining 15 inputs consist of the $x$, $y$, and $z$ positions of the five closest tracked features at any given state. The four output-layer neurons correspond to the four values of the robot's $2 \times 2$ $x$-$y$ covariance matrix, which is then used in Constraint v of the motion planning MPC. Unlike the hidden layers, the output layer uses a linear activation function because the outputs themselves are not restricted. Note that even though our MPC plans in only two dimensions, the inputs to the neural network are three-dimensional because the state estimation in our experiment is three-dimensional.

We used the Mean Squared Error (MSE) as the loss function:

$$\mathrm{MSE} = \frac{1}{T} \sum_{t=1}^{T} \left\lVert \Sigma_t - \hat{\Sigma}_t \right\rVert^2$$

Here, $T$ is the total number of timesteps, $\Sigma_t$ is the covariance matrix of the planar position computed by a SLAM system at timestep $t$, and $\hat{\Sigma}_t$ is the covariance matrix predicted by the RNN. Conceptually, the covariance matrix is a $2 \times 2$ matrix, but the implementation of the RNN treats it as a flattened $1 \times 4$ vector when it makes predictions and propagates error. Lastly, we note that covariance matrices are positive semidefinite by definition. However, the above training procedure does not constrain the output of the RNN to be positive semidefinite; the outputs are instead arbitrary $2 \times 2$ matrices. To account for this, we zeroed out off-diagonal elements and negated any negative diagonal elements.
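The loss over flattened covariances and the positive-semidefinite post-processing can be sketched as below (the helper names are assumed; this is not the paper's implementation):

```python
# Sketch of the MSE loss over flattened 2x2 covariances and the crude
# PSD post-processing: zero off-diagonals, flip negative diagonal entries.

def mse(targets, preds):
    """Mean squared error over lists of flattened [sxx, sxy, syx, syy] matrices."""
    n = len(targets) * 4
    return sum((a - b) ** 2
               for T, P in zip(targets, preds)
               for a, b in zip(T, P)) / n

def make_psd(flat_cov):
    """Project [sxx, sxy, syx, syy] to a diagonal matrix with non-negative entries."""
    sxx, _, _, syy = flat_cov
    return [abs(sxx), 0.0, 0.0, abs(syy)]

pred = make_psd([-0.04, 0.01, 0.01, 0.09])    # negative variance gets flipped
loss = mse([[0.04, 0.0, 0.0, 0.09]], [pred])
```

The resulting diagonal matrix is always a valid covariance, at the cost of discarding the predicted correlation between the two axes.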

Training data (robot position, position of tracked features, and covariance matrices) for the RNN was collected from running XIVO (code available at https://github.com/ucla-vision/xivo), a simplified and modernized implementation of the SLAM system described in [jones_visual-inertial_2011], on time-synchronized RGB and IMU data collected from an Intel RealSense D435i mounted onto ALPHRED's head. We collected four 40-second training sequences in total (using 100 epochs for training on the four sequences). The right side of Figure 2 displays tracked features and localization estimates from the collected data.

One key assumption that XIVO makes is that disturbances to the angular velocity and acceleration measurements (bias + noise) are a random walk (i.e. white, zero-mean, and Gaussian). This is not true for a walking robot, where each step produces a large periodic disturbance. Thus, XIVO’s generic motion model is best suited to a flying robot. However, to adapt XIVO for our quadruped, we limited the acceleration and angular rate measurements to "realistic" values and then "de-tuned" the filter by setting large bounds on expected IMU measurement noise. This hampered accuracy, but ultimately enabled convergence.

## III Experimental Results

In this section, we provide an overview of our robot model and its motion tracking controller, describe the training and testing results of the CNN and RNN neural networks, and then summarize our end-to-end results using our Gazebo simulation environment.

### III-A Robot Model and Motion Tracking Controller

The robot used in this study is ALPHRED from Hooks et al. [hooks2020ALPHRED], a full-sized quadrupedal robot whose unique kinematic configuration enables several dynamic modes of operation, as shown in Fig. 7 and Table II. Our path planner is tested on a highly accurate simulation of ALPHRED using Gazebo software [gazebo] (Fig. 4). The robot is modeled as several interconnected rigid bodies in PyBullet so that the state includes not only joint angular velocities but also sensor and actuator noise due to motor temperature. The camera model used is a standard perspective projection with the same intrinsics as the Intel RealSense camera used to collect RNN training data, but without distortions. ALPHRED uses an Extended Kalman Filter (EKF) that fuses kinematic encoder data with on-board IMU measurements to provide full state estimation [bloesch2013state]. A Raibert-style controller [raibert1986legged] is used to track desired trajectories, where the inputs to the controller are the desired planar velocities ($v_x$ and $v_y$) and a desired yaw rate ($\dot{\psi}$) in the body frame. The controller operates by planning footsteps using powerful heuristics based on velocity feedback and corrects velocity and orientation errors by adjusting the length of the limbs in support. Further details of the ALPHRED platform and its low-level motion tracking controller can be found in [hooks2020ALPHRED].

### III-B Analysis of Learning Components

Training losses for both the CNN and RNN are shown in Fig. 5. The CNN and RNN were trained for 5,300 and 100 epochs, respectively, but only a limited range is plotted for visualization. To avoid overfitting, we used cross-validation and ensured that the validation loss remained close to the training loss during the training process for both networks. Additionally, we observed that as ALPHRED tracked more features (i.e., the corners of an obstacle), the RNN's covariance estimates decreased. Conversely, as tracked features went out of view, the estimates increased. This is expected behavior for a visual-inertial odometry algorithm.

### III-C Gazebo Simulation

To test our proposed method, we used a custom Gazebo environment loaded with a high-fidelity model of our quadrupedal robot equipped with a Microsoft Kinect sensor. For localization, we used the state estimation described in Section III-A. Our 3D environment consisted of a box obstacle, with the objective of commanding ALPHRED to move from its initial position at [0, 0] to the goal position at [8, 0]. We compared our method against a baseline approach, in which only an MPC was used for trajectory planning (with the obstacle explicitly hardcoded), and a naive approach for safer traversal, in which the robot's radius was artificially inflated to twice its original size (from 0.7 m to 1.4 m).

In the illustrative example shown in Fig. 6, we observed that when using a classic MPC controller, which assumes that the robot's state estimate is perfect, the resulting trajectory passes too close to the obstacle and ALPHRED crashes (red). On the other hand, when using a conservative MPC controller, in which the assumed value of ALPHRED's radius is twice its actual size, the resulting trajectory over-avoids the obstacle and ALPHRED moves slowly toward the goal point (blue). However, when using our full risk-averse MPC in this scenario, we observed that ALPHRED not only avoids collision but executes a tighter trajectory than the conservative approach and requires less time to reach the goal. Note that the simulation was run in real time on a laptop with an Intel Core i7-6700HQ CPU and an NVIDIA GeForce GTX 970M GPU.

## IV Discussion and Future Work

Collision-free path planning within unknown and unexplored environments requires the daunting integration of several components, such as sensor processing, control algorithms, and uncertainty resolution, into a fast, online, end-to-end framework. To this end, we propose an architecture that unifies these modalities and attempts to address the fundamental problem of uncertainty in Active SLAM. By inferring the future positional uncertainty for an MPC using an RNN, we can substitute typical belief space planners with a more computationally efficient approach. Our work can also pave the way toward using RNNs to address problems with temporal structure that are difficult for classic robotic algorithms.

Overall, our architecture addresses Active SLAM by combining MPC, SLAM, RNN, and CNN algorithms. We demonstrate that by inferring future positional uncertainties of the robot using our RNN prediction model, the robot can reach a goal state faster than when assuming a fixed uncertainty while still safely avoiding obstacles. This is significant because modeling uncertainties within a neural network framework, rather than through belief space planning (i.e., POMDPs), sufficiently shortens computation time for hardware implementation. Future work will entail improving individual components of our architecture and modifying parts for complete hardware compatibility where necessary. For example, we would combine the classical feature detection and ALPHRED's state estimation (described in Section II-D2) into a VIO algorithm. (We prefer a VIO algorithm over other SLAM algorithms because VIO can directly observe scale, and because range sensors (e.g., LiDAR) are more expensive in both cost and computation and typically produce only sparse images.) The motions of a walking robot should be sufficiently exciting that the VIO problem is observable. This would require us to modify generic VIO equations, such as those present in XIVO, by explicitly modeling a walking gait, expanding the size of the state space, and recomputing the Jacobians to incorporate robot dynamics into the visual model instead of assuming a simple random walk. The performance-limiting de-tuning and signal clipping described in Section II-E would then become unnecessary. Finally, we also aim to replace the CNN + classical feature detection and unprojection with a modern semantic mapping algorithm, such as [xiao-semantic-mapping].

Future directions also include: (1) formulating our RNN to infer semantics and feed object-dependent margins to the planner (e.g., the robot can get close to safe objects but not to dangerous ones); (2) exploring additional inputs to the RNN, such as the estimated covariances of features or other states; (3) incorporating a $z$ state/control in the MPC rather than assuming planar motion, for more complex path planning; (4) comparing our formulation to belief space planners (e.g., stochastic MPC) as well as other state-of-the-art path planners; and (5) constraining the RNN training process such that outputs are guaranteed to be positive semidefinite. We believe that implementing the above modifications could close the gap toward the futuristic vision of completely autonomous robotic systems.

Parameter | Value
---|---
Degrees of Freedom | 12 (3 per leg)
Weight | 17.9 kg
Max Velocity | 1.5 m/s
IMU | VectorNav 200
Camera | RealSense D435i
