Deep Reinforcement Learning for Single-Shot Diagnosis and Adaptation in Damaged Robots

10/02/2019 ∙ by Shresth Verma, et al. ∙ ABV-Indian Institute of Information Technology & Management, Gwalior

Robotics has proved to be an indispensable tool in many industrial as well as social applications, such as warehouse automation, manufacturing, disaster robotics, etc. In most of these scenarios, damage to the agent while accomplishing mission-critical tasks can result in failure. To enable robotic adaptation in such situations, the agent needs to adopt policies which are robust to a diverse set of damages and must do so with minimum computational complexity. We thus propose a damage aware control architecture which diagnoses the damage prior to gait selection while also incorporating domain randomization in the damage space for learning a robust policy. To implement damage awareness, we have used a Long Short Term Memory based supervised learning network which diagnoses the damage and predicts the type of damage. The main novelty of this approach is that only a single policy is trained to adapt against a wide variety of damages and the diagnosis is done in a single trial at the time of damage.


1. Introduction

One of the motives of introducing robots was to provide a safe method of access and operation in environments that are hazardous or unreachable for humans. But very often, these environments destabilize or partially damage the robot, impairing it and leading to mission failure or a significant drop in performance. This is especially critical for robots deployed in manufacturing industries and warehouses (Khatib, 2005), search and rescue missions (Murphy, 2004) and disaster response (Nagatani et al., 2013). Whereas humans and animals cope with partial damage by learning alternate ways to perform an action, this kind of learning in robots requires what we call intelligence. Hence, the objective while designing robotic devices is not restricted to avoiding or tackling obstacles; it also includes the adaptation of the agent in the presence of adversaries, both in the form of internal damages and external effects.

Deep Reinforcement Learning (Deep RL) has been shown to be effective in modeling such navigation problems because of both its online and offline learning capabilities in high dimensional search spaces (Chatzilygeroudis et al., 2018; Hwangbo et al., 2017; Pinto et al., 2017a; Lobos-Tsunekawa et al., 2018). In the context of adapting to damages, offline learning means training a robust policy before the robot is deployed, while online learning means learning to adapt at the time of damage. But the environments and agents in these situations are very complex, and retraining the RL policy every time either of them changes is highly impractical. This points to the necessity of an efficient control architecture which can help the agent adapt under varying adversarial conditions.

To implement this, several approaches have tried to learn multiple policies at training time and then choose among them at the time of damage. However, models which have made progress in this domain require the agent to be reset to its initial state (Cully et al., 2015), or require multiple hardware trials to help the agent recover or adapt (Cully et al., 2015; Bongard and Lipson, 2004; Koos et al., 2013). Although this is intuitive, it is inefficient considering the overhead of choosing from a set of high performing gaits. To make a smart recovery decision, an alternative is for the agent to first understand the damage and then use that damage awareness to act optimally.

We thus propose Damage Aware-Proximal Policy Optimization (DA-PPO), combining damage diagnosis with deep reinforcement learning. The control architecture first performs damage diagnosis on multiple damage cases using a Long Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) based supervised learning network. It uses the difference between the gaits of a (simulated) healthy robot and the damaged robot as input and classifies the damage that has occurred, if any. The diagnosed damage is combined with the current observation vector to create an augmented observation space, which contains information about both the state space observation and the damage. This augmented observation is used to train our RL model, which is optimized using Proximal Policy Optimization (PPO) (Schulman et al., 2017). The trained model is able to understand the damage that has occurred and choose its gait accordingly. Since only a single policy is learnt, there is no overhead of storing and choosing between multiple policies, making our algorithm effective in real time.

The major objectives of our work are:

  1. To create a deep reinforcement learning based control architecture for enabling locomotory agents to accomplish mission-critical tasks even in the presence of single or multiple internal damages.

  2. To optimize the control architecture so that the agent adapts its gait in a single hardware trial.

2. Related Work

2.1. Automated Recovery in Robotics

Preliminary work on automated recovery in robots was based on evolutionary algorithms and generally divided the process into two phases: damage estimation and recovery. This necessitated that a simulation of the healthy robot always be available to the physical robot, so as to estimate the damage. This estimate would then help create a neural controller, during the exploration or recovery phase, which can handle the damage. The neural controller is passed to the robot and used for adaptation. The algorithm introduced in (Bongard and Lipson, 2004) was one of the first to propose an automatic and continuous information flow between a physical robot and its simulation, wherein the robot provides its current state information. The simulator updates its own state using this information and provides the robot with neural controllers to handle its state or damage. The major advantage was that the recovery method did not have to be created directly on the physical robot, and thus the number of trials required to recover was drastically reduced.

Extending this work, Koos et al. (Koos et al., 2013) also created a self-diagnosis model. The main difference between the two works is the use of an undamaged self-model of the robot to discover behaviors, rather than constantly updating the model based on diagnosis. Although the intuition behind these methods is correct and applicable even today, the use of evolutionary algorithms makes them inefficient.

2.2. Map-based Algorithms for Adaptation

Algorithms based on behavior-performance maps (Cully et al., 2015; Chatzilygeroudis et al., 2018) rely on the assumption that knowledge of the cause of damage, i.e., a proper diagnosis report, is not necessary to recover from the damage. Rather than considering two separate phases for damage diagnosis and recovery algorithm generation, Cully et al. (Cully et al., 2015) proposed a method inspired by animals, which use trial and error to determine the least painful alternate gait in the presence of injury. The approach put forward in this work, Intelligent Trial and Error (ITE), relies on a behavior-performance map. This map enables the robot to try multiple behaviors which are predicted to perform well. Based on the trials conducted and their results, the estimated performance values in the map are updated. The process converges when the best possible behavior has been estimated. Even when damage is absent, the high performing behaviors are expected to be useful.

The implementation of this idea uses Gaussian processes (Rasmussen and Williams, 2005) and a Bayesian optimization procedure (Mockus, 1989; Borji and Itti, 2013) to choose which gaits or behaviors to try at the time of damage, by maximizing the performance function over the behavior-performance space. The selected gait is tested on the robot, its performance is recorded, and the expected performance of that gait is updated accordingly. This select-test-update loop continues until the right behavior is obtained.

Inspired by ITE, Chatzilygeroudis et al. (Chatzilygeroudis et al., 2018) proposed a more optimized version of the algorithm. Reset-free Trial and Error (RTE) builds on the fact that some of the high performing policies which work on an intact robot should also work on a damaged robot, which holds mainly in complex robotic systems like humanoids or multi-legged robots. Similar to ITE, RTE pre-computes a behavior-performance map using MAP-Elites (Mouret and Clune, 2015). It learns the robot's model, especially when it is damaged, and uses Monte Carlo Tree Search (Chaslot et al., 2008) to compute the next best action for the current state of the robot. The method also uses a probabilistic model to incorporate the uncertainty of predictions and uses this data to correct the outcome of each action on the damaged robot (Silver et al., 2016; T, 2013). This combination of algorithms ensures that no reset is required when damage occurs.

A significant drawback of the previous two methods is the huge computational overhead due to the use of Gaussian processes, along with their inability to work on dynamic unknown terrains.

2.3. Handling Environmental Adversaries

Adversarial forces on robots are not limited to physical damages. There can also be environmental factors which hinder normal robotic locomotion, and several methods have been proposed to deal with these kinds of adversaries. Robust Adversarial Reinforcement Learning (RARL) (Pinto et al., 2017b) concentrates on ensuring the stability of an agent in the presence of an adversary which is trying to destabilize it. It is based on the assumption that environmental changes between training and testing, such as a change in the coefficient of friction of the floor, can also be modelled as an adversary acting on some part of the agent's body.

The algorithm reduces to a min-max game where the adversary tries to minimize the reward of the concerned Markov Decision Process (MDP) and the protagonist tries to maximize it. The proposed method of achieving this is to alternate between training the policies of the adversary and the protagonist for a fixed number of iterations until convergence.

Another approach, introduced in (Kume et al., 2017), enables adaptation to both environmental adversaries and physical or internal damage of the robot. The major difference from previous works like ITE and RTE is the existence of a multi-policy mapping for a single behavior in place of a single policy. Map-based Multi-Policy Reinforcement Learning (MMPRL), proposed in this work, trains many different policies by combining a behavior-performance map with deep reinforcement learning. It searches for and stores these multiple policies while maximizing expected reward. MMPRL saves all possible policies with different behavioral features, making it extremely fast and adaptable.

2.4. Domain Randomization

Some recent works have also experimented with randomization in simulation environments through domain and dynamics randomization (Tobin et al., 2017; Peng et al., 2017), so as to bridge the gap between simulation and the real world. The idea is to create numerous variations in the simulation environment so that the real world appears as just another sample from a rich distribution of training samples. In (Tobin et al., 2017), the authors experimented with object localization for the purpose of grasping in a cluttered environment. They showed impressive results, randomizing in the visual domain to transfer learning from simulation to the real world without requiring real world training images. On the other hand, in (Peng et al., 2017), the authors randomized the dynamics of the environment, such as mass, damping factor and friction coefficient, and showed that the policy learned in such a dynamic environment is quite robust to calibration errors in the real world.

While most map-based methods are able to adapt over a wide range of damages, the computational overhead of creating the behaviour-performance map is a significant drawback. In ITE and RTE, the complexity is further increased by the Gaussian process computations. Moreover, all these approaches require multiple hardware trials to adapt to a damage. We incorporate the domain randomization approach in the context of damages so that damages in the real world are just another variation of the training samples. We further improve on this approach by presenting a single hardware trial control loop for diagnosing the damage.

3. Approach

3.1. Overview

We consider the following scenario: A robot has been damaged while in a remote and hazardous environment. We require the robot to reach the destination by adapting its gait so as to overcome the damage. Rather than making the agent dependent on a pre-computed set of high performing gaits, it should be able to identify and adapt to its damage autonomously.

Thus we propose a self-diagnose network which can predict the type of damage that has occurred in the structure of the robot. With this damage awareness, we use an augmented observation space for learning a well-performing policy through a modified version of Proximal Policy Optimization (PPO) which we call Damage Aware-Proximal Policy Optimization (DA-PPO). In our work, we assume that internal damages, unlike environmental adversaries, do not keep changing constantly. Thus, we only need to perform the self-diagnosis step for determining damage class whenever the reward drastically drops below a certain threshold, indicating that damage has occurred.

3.2. Self-Diagnose Network

The min-max game approach put forward in RARL (Pinto et al., 2017b) fails to generalize when the adversary changes the damage at every time step. This is a tough problem since the policy has no feedback mechanism to judge the performance of the action taken in the last time step. We therefore propose a self-diagnose network, an LSTM (Hochreiter and Schmidhuber, 1997) based predictive model, which classifies the type of damage that has occurred in the robot using continuous feedback from its gait. In (Bongard and Lipson, 2004), the authors used the difference between the behaviours of the simulated robot and the physical robot, in terms of forward displacement, to classify damages. We extend this idea by measuring the difference in sensor values between the two for a fixed number of time steps. This results in a time series, and our problem is reduced to classifying the damage from this data. More specifically, the on-board computer of the robot can run a simulation of a healthy robot and compare its gait with the actual steps taken. Based on the difference between the two, the network can diagnose the class of damage (see Fig. 1).

Result: An array with collected samples
Initialize:
Load an expert policy policy_fn trained on the healthy robot
Run parallel threads
for n = 1 to N do
       Set a random seed
       Initialize environments E_h, E_d for healthy and damaged robots with the same seed value
       for each damage class c = 1 to C do
             E_d.applyDamage(damage_class)
             for t = 1 to T do
                    get actions from the predefined policy
                    a_h = policy_fn(o_h)
                    a_d = policy_fn(o_d)
                    do a simulation step in both environments
                    o_h, r_h = E_h.step(a_h)
                    o_d, r_d = E_d.step(a_d)
             end for
             collect the time series (o_h − o_d)
       end for
end for
Concatenate collected samples
Algorithm 1 Sample collection

Since this time series is multivariate and high dimensional, we use LSTM hidden units, which are powerful and increasingly popular models for learning from sequential data (Greff et al., 2017). Algorithm 1 describes the sample collection process in detail. The healthy and damaged robot environments are represented by E_h and E_d respectively. Both environments are run from the same initial state, and the difference between their observation vectors is collected continuously for a fixed number of time steps (represented here as T). For any environment, this results in a matrix of size T × D (where D is the observation space size for that environment), and this represents a single data point. These data points act as training data, for which the labels are the corresponding damage classes upon which the simulation was run. The whole process is repeated N times to get multiple data points. Note that policy_fn represents an expert policy which has been pretrained on a healthy robot.
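For concreteness, a minimal Python sketch of this sample collection loop is given below, assuming a classic Gym-style interface; the environment factory functions, the `expert_policy` callable and the default rollout counts are placeholders rather than the actual implementation.

```python
import numpy as np

def collect_samples(make_healthy_env, make_damaged_env, expert_policy,
                    damage_classes, n_rollouts=1000, horizon=50):
    """Collect (healthy - damaged) observation-difference time series.

    Each sample is a (horizon, obs_dim) matrix labelled with its damage class.
    """
    X, y = [], []
    for n in range(n_rollouts):
        seed = np.random.randint(1_000_000)
        for c in damage_classes:
            env_h = make_healthy_env()           # healthy robot, E_h
            env_d = make_damaged_env(c)          # damaged robot, E_d (damage class c applied)
            env_h.seed(seed)                     # same seed -> same initial state
            env_d.seed(seed)
            o_h, o_d = env_h.reset(), env_d.reset()
            diffs = []
            for t in range(horizon):
                a_h = expert_policy(o_h)         # expert policy trained on the healthy robot
                a_d = expert_policy(o_d)
                o_h, _, _, _ = env_h.step(a_h)
                o_d, _, _, _ = env_d.step(a_d)
                diffs.append(o_h - o_d)          # per-timestep sensor difference
            X.append(np.stack(diffs))            # shape: (horizon, obs_dim)
            y.append(c)
    return np.stack(X), np.array(y)
```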

The network is trained using the data obtained through the sample collection step explained in Algorithm 1. This step is parallelized and thus does not act as a bottleneck for the entire algorithm. The self-diagnose network can be accessed on demand to determine the damage class within a single trial, as shown in Fig. 1.

3.3. Encoding of Damage Indicators

The self-diagnose network predicts the damage class of the robot which can act as an additional state information about the environment. We thus concatenate it with the observation space of the original robot to form, what we call, an augmented observation space.

This poses a necessity to encode the output of the classifier so that the policy efficiently learns various gaits in accordance with the damage. If a random encoding scheme is used for creating the augmented observation space, the algorithm perceives the encoding as noise and completely ignores it during policy learning. We have thus used a partial one-hot encoding, which is observed to work well in practice as the damage information is not lost during training.

In our experiments, we have limited the number of damages that can occur simultaneously to two and have assumed that only one damage can occur on a limb at a time. The number of damage classes can thus be calculated as the sum of the no-damage case, the single damage cases and the multiple damage cases occurring at various limbs. This is given by:

(1)   N_damage = 1 + L·d + (L(L − 1)/2)·d²

where L represents the number of limbs in the agent and d represents the number of different damage types considered.

The encoded vector is of length 2L, where the damage of limb i is represented by the values at indices 2i and 2i+1 in the encoded vector. Thus, we have a tuple of size 2 associated with each limb, where (0, 0) represents no damage, (1, 0) represents damage type 1 and (0, 1) represents damage type 2 at that limb. Note that the tuple (1, 1) can be used if we remove the assumption that two types of damages cannot occur together at a single limb. Furthermore, the tuple size can be increased to model more types of damages.
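As an illustration, the sketch below shows this partial one-hot encoding and a check of the class count from Equation 1; the function names and the exact index layout are our own assumptions, consistent with the description above.

```python
from itertools import combinations
import numpy as np

def num_damage_classes(n_limbs, n_damage_types):
    # Equation (1): no damage + single damages + pairs of damages on distinct limbs
    return (1 + n_limbs * n_damage_types
            + len(list(combinations(range(n_limbs), 2))) * n_damage_types ** 2)

def encode_damage(damages, n_limbs):
    """Partial one-hot encoding: a 2-value tuple per limb.

    `damages` maps limb index -> damage type (1 = jammed joint, 2 = missing toe).
    """
    code = np.zeros(2 * n_limbs)
    for limb, dtype in damages.items():
        code[2 * limb + (dtype - 1)] = 1.0   # (1,0) -> type 1, (0,1) -> type 2
    return code

def augment_observation(obs, damages, n_limbs):
    # Concatenate the raw observation with the damage encoding vector
    return np.concatenate([obs, encode_damage(damages, n_limbs)])

assert num_damage_classes(4, 2) == 33   # Ant: classes 0..32
assert num_damage_classes(6, 2) == 73   # Hexapod: classes 0..72
```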

Figure 1. Control Architecture

3.4. Proximal Policy Optimization

Since our task is one of continuous action control, we formulate it as a reinforcement learning problem: starting from an initial state s_0, the agent chooses a series of actions a_t and obtains state s_{t+1} and reward r_t at each timestep t, while maximizing the expected sum of rewards by changing the parameters θ of the parameterized stochastic policy π_θ. But the use of large scale optimization is less widespread in continuous action spaces. An attractive option for such problems is to use policy gradient algorithms (Silver et al., 2014). Proximal Policy Optimization is a simplified version of Trust Region Policy Optimization (TRPO) (Schulman et al., 2015a). It improves upon the stability of policy gradient methods by allowing multiple updates on a minibatch of on-policy data. This is implemented by limiting the KL divergence between the updated policy and the policy from which the data was sampled. TRPO uses a hard optimization constraint to achieve this but is computationally expensive. PPO approximates TRPO by using a soft constraint. The original paper (Schulman et al., 2017) proposes two methods for implementing this soft constraint: an adaptive KL loss penalty, and a clipped surrogate loss function.

PPO represents the ratio between the new policy and the old policy as:

(2)   r_t(θ) = π_θ(a_t | s_t) / π_θold(a_t | s_t)

The objective functions can be (Schulman et al., 2017):

(3)   L_CLIP(θ) = E_t[ min( r_t(θ)·Â_t , clip(r_t(θ), 1 − ε, 1 + ε)·Â_t ) ]
(4)   L_KLPEN(θ) = E_t[ r_t(θ)·Â_t − β·KL[ π_θold(·|s_t), π_θ(·|s_t) ] ]

Both these objective functions stabilize training by constraining the policy change at each step, thus keeping the gradient approximation local so that large steps are not taken between iterations. Additionally, we use Generalized Advantage Estimation (GAE) (Schulman et al., 2015b) for computing the advantage function Â_t. In our implementation of PPO, we use a combination of both the clipping loss and the adaptive KL penalty for the locomotion tasks. The hyperparameters for the same are mentioned in Section 4.3.
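For reference, a minimal NumPy sketch of the clipped surrogate objective of Equation 3, written as a loss to be minimized, is shown below; it is illustrative only and is not the PPO implementation from (Guadarrama et al., 2018) used in our experiments.

```python
import numpy as np

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """Clipped surrogate loss (negated Eq. 3) for a batch of transitions.

    logp_new / logp_old: log-probabilities of the taken actions under the
    current and the data-collecting policy; advantages: GAE estimates.
    """
    ratio = np.exp(logp_new - logp_old)                       # r_t(theta)
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps)  # clip(r_t, 1-eps, 1+eps)
    surrogate = np.minimum(ratio * advantages, clipped * advantages)
    return -np.mean(surrogate)   # minimize the negative of the objective
```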

3.5. Damage Aware Proximal Policy Optimization

With the self-diagnose network in place, we can now run the policy learning algorithm on the augmented observation space, which encapsulates both the environment state (through the observation vector) and damage awareness (through the damage encoding vector). We use the PPO algorithm to learn a policy from the augmented observation space, where o_t is the observation at timestep t, a_t is the action taken according to the policy π_θ, and E_d is the environment in which damage has occurred (see Fig. 1). Note that we only run the self-diagnose network when the reward during a run falls below a certain threshold. At other times, the damage is considered to be the same as diagnosed in the last run.
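A rough sketch of this control loop is given below, assuming hypothetical helpers `diagnose` (the self-diagnose network run for a single trial) and `class_to_encoding` (the encoding of Section 3.3); the reward threshold and horizon values are placeholders.

```python
import numpy as np

def run_episode(env, policy, diagnose, class_to_encoding, damage_code,
                reward_threshold=-50.0, horizon=1000):
    """Run one episode with the current damage belief; re-diagnose only on a reward drop."""
    obs = env.reset()
    episode_reward = 0.0
    for t in range(horizon):
        action = policy(np.concatenate([obs, damage_code]))   # augmented observation
        obs, reward, done, _ = env.step(action)
        episode_reward += reward
        if done:
            break
    if episode_reward < reward_threshold:
        # Drastic reward drop indicates damage: run the single-trial diagnosis
        predicted_class = diagnose()                  # compares gait against the healthy simulation
        damage_code = class_to_encoding(predicted_class)
    return episode_reward, damage_code
```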

4. Experimental Setup

4.1. Simulation Setup

To evaluate our approach, we have conducted experiments on two environments: Ant, a quadrupedal locomotory robot, and Hexapod, a six-legged locomotory robot. We have used the OpenAI Gym toolkit (Brockman et al., 2016) for performing simulations, in combination with the MuJoCo physics engine (Todorov et al., 2012). Ant is an already implemented environment in OpenAI Gym, while the Hexapod is implemented using the configuration and model described in ITE (Cully et al., 2015).

The two environments used in our experiments are discussed below:

Ant (Quadrupedal bot): Ant is a simple quadrupedal robot with 12 degrees of freedom (DoF) and 8 torque actuated joints. Each joint has a maximum flex and extension of 30 degrees from its original setting and also has a force and torque sensor. The observation includes joint angles, angular velocities, the position of all structural elements with respect to the center of mass, and the force and torque sensor outputs of each joint, forming a 111-dimensional vector. The target action values are the motor torque values, which are limited to the range −1.0 to 1.0. We limit an episode to at most 1000 timesteps; the episode ends whenever it crosses this limit, the robot falls down, or the robot jumps above a certain height. The reward function is defined as follows:

(5)   R = Δx + r_s − w_1·n_c − w_2·‖a‖²

where Δx is the distance covered by the robot in the current time step since the previous time step, r_s is the survival reward, which is 1 on survival and 0 if the episode is terminated by the aforementioned conditions, n_c is the number of legs making contact with the ground, a is the vector of target joint angles (the actions), and w_i is the weight of each component, with w_1 = 0.5 and w_2 = 0.5.
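For illustration, one plausible reading of Equation 5 as code is sketched below; the exact form, in particular the signs and which terms are weighted, is our interpretation of the component descriptions above rather than a verbatim reproduction of the environment's reward.

```python
def ant_reward(delta_x, survived, n_contacts, actions, w1=0.5, w2=0.5):
    """Reward for the Ant environment, per Eq. (5) as interpreted here.

    delta_x: distance covered since the previous time step
    survived: True unless the episode was terminated by the failure conditions
    n_contacts: number of legs in contact with the ground
    actions: target joint angles commanded at this step
    """
    r_survive = 1.0 if survived else 0.0
    control_cost = sum(a * a for a in actions)   # squared action magnitude
    return delta_x + r_survive - w1 * n_contacts - w2 * control_cost
```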

Hexapod: There are three actuators on each leg of the Hexapod. In the neutral position, the height of the robot is 0.2 meters. The actions are the joint angle positions of all 18 joints, which range from −0.785 to 0.785 radians. The observation space of the agent is a 53-dimensional vector consisting of the positions and velocities of all the joints as well as the center of mass. Along with this, the observation space contains boolean values from touch sensors which indicate whether a leg is making contact with the ground or not. Again, we limit an episode to at most 1000 timesteps; the episode ends whenever the robot falls down, jumps above a certain height, or crosses the time limit.

The reward function R is defined as follows:

(6)   R = Δx + r_s − w_1·n_c − w_2·Σ_j F_j − w_3·‖a‖²

where Δx is the distance covered by the robot in the current time step since the previous time step, r_s is the survival reward, which is 0.1 on survival and 0 if the episode is terminated by the aforementioned conditions, n_c is the number of legs making contact with the ground, F is the vector of the squared sums of external forces and torques on each joint, a is the vector of target joint angles (the actions), and w_i is the weight of each component, with w_1 = 0.03, w_2 = 0.0005, and w_3 = 0.05.

4.2. Damage Simulation

Since both environments considered in our experiments are simulated in OpenAI Gym, the damages are implemented by changing the XML files of the 3D models. This can be done on the fly without affecting parallel-running experiments. In our work, we have simulated broadly two kinds of internal damages:

  1. Jamming of joint such that it can’t move irrespective of the amount of torsional force applied by the motor at that joint.

  2. Missing toe, i.e., lower limb of the robot breaks off.

(a) Ant damage scenario 1
(b) Ant damage scenario 2
(c) Hexapod damage scenario 1
(d) Hexapod damage scenario 2
Figure 2. Some of the damage scenarios in Ant and Hexapod. Yellow and red circles represent jammed joint and missing limb damage types respectively.

In MuJoCo environments, these damages are implemented as follows (an illustrative sketch of the corresponding XML edits follows the lists below):

Ant Environment

  • Jamming of a joint is modelled by restricting the angle range of the concerned joint to −0.1 to 0.1 degrees from the default range of −30 to 30 degrees.

  • Missing toe is modelled by shrinking the lower limb size to 0.01 from the original value of 0.8.

Hexapod Environment

  • The original angle range of the hexapod joints is −45 to 45 degrees. This is restricted to −0.1 to 0.1 degrees when jamming of a joint is modelled.

  • A missing toe on any of the 6 legs of the hexapod is modelled by reducing the lower limb size to 0.01 from 0.07 in the healthy robot.

  • There are touch sensors on each lower limb of the hexapod. Thus, whenever a lower limb breaks off, the touch sensor corresponding to it is considered to stop giving any signal, and its output is set to 0.
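The sketch below illustrates how such on-the-fly XML edits might be scripted; the `joint`/`range` and `geom`/`size` attributes follow the MuJoCo model format, but the file paths and joint/limb names are placeholders rather than the actual model files used here.

```python
import xml.etree.ElementTree as ET

def apply_damage(xml_in, xml_out, jam_joint=None, shrink_geom=None):
    """Write a damaged copy of a MuJoCo model XML.

    jam_joint: name of a joint whose range is clamped to [-0.1, 0.1] degrees
    shrink_geom: name of a lower-limb geom whose size is reduced to 0.01
    """
    tree = ET.parse(xml_in)
    root = tree.getroot()
    if jam_joint is not None:
        for joint in root.iter("joint"):
            if joint.get("name") == jam_joint:
                joint.set("range", "-0.1 0.1")     # jammed joint: almost no motion
    if shrink_geom is not None:
        for geom in root.iter("geom"):
            if geom.get("name") == shrink_geom:
                geom.set("size", "0.01")           # missing toe: shrink the lower limb
    tree.write(xml_out)

# e.g. apply_damage("ant.xml", "ant_damaged.xml", jam_joint="hip_1")
```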

(a) Ant Environment
(b) Hexapod Environment
Figure 3. Training curve comparison between DA-PPO and PPO-Unaware in both the Ant and Hexapod environments

4.3. Hyperparameter Details

For the self-diagnose network, the input is a matrix of size T × D (timesteps × observation dimension), followed by an embedding layer of size 512 and an LSTM layer with 32 hidden units. After this, we stack three dense layers of sizes 256, 128 and 64 along with dropout, so as to reduce overfitting. The output layer uses softmax activation so that it outputs class probabilities. The loss function and optimizer used are categorical crossentropy and Adam respectively. For the Ant and Hexapod environments, the possible classes range from 0 to 32 and 0 to 72 respectively, as calculated from Equation 1.
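A minimal Keras-style sketch of this classifier is given below; since the embedding layer here operates on continuous sensor differences, we interpret it as a per-timestep dense projection, and the dropout rates are placeholders.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_self_diagnose_network(timesteps, obs_dim, n_classes):
    """LSTM classifier over (healthy - damaged) observation-difference series."""
    model = keras.Sequential([
        keras.Input(shape=(timesteps, obs_dim)),
        layers.TimeDistributed(layers.Dense(512)),   # per-timestep projection ("embedding" of size 512)
        layers.LSTM(32),                             # 32 hidden units
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(64, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# e.g. build_self_diagnose_network(timesteps=50, obs_dim=111, n_classes=33)
```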

For policy learning using PPO, we use the implementation from (Guadarrama et al., 2018). For both the value function and the policy function, we use the same network configuration, with hidden layer sizes of 100, 200 and 100. The Adam optimizer was used for both neural networks. The GAE gamma value is taken as 0.995 and lambda as 0.98. The clipping range is kept at 0.2 and the adaptive KL target is initialized to 0.01. The Adam learning rate and the KL target value are adjusted dynamically during training. Moreover, we train the value function on the combination of the current batch and the previous batch to stabilize training.
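Collected as a plain configuration dictionary, the hyperparameters above might look as follows; the key names are ours and do not correspond to any particular library's API.

```python
PPO_HYPERPARAMS = {
    "policy_hidden_sizes": (100, 200, 100),    # same layout for policy and value networks
    "value_hidden_sizes": (100, 200, 100),
    "optimizer": "adam",                       # learning rate adapted dynamically during training
    "gae_gamma": 0.995,
    "gae_lambda": 0.98,
    "clip_range": 0.2,
    "kl_target_init": 0.01,                    # adaptive KL target, adjusted during training
    "value_fn_batches": "current + previous",  # value function trained on two consecutive batches
}
```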

5. Results and Discussion

We evaluate the performance of our approach with respect to its two elements: (1) the self-diagnose network for predicting the class of damage, and (2) DA-PPO, which learns to adopt a policy given that a particular damage has occurred.

5.1. Self-Diagnose Network

For the comparison of performance, we consider different numbers of rollouts (the amount of data to train on), different lengths of history to look back on (timesteps), and two choices of observation data: our proposed approach of using the difference of observations between the healthy and damaged runs, or the observations of the damaged run only. Table 1 summarizes the validation accuracy across these parameters. We observe that classifying using fewer timesteps results in faster diagnosis but at the expense of accuracy. Moreover, classification using the difference between observation vectors as input outperforms the use of observations from the damaged run only in all cases. However, if there is a constraint on the computation power of the robot's on-board computer, the latter approach can be preferred over the former.

Classification Accuracy in Ant Environment
Timesteps   Method   1000 Rollouts    2000 Rollouts    7000 Rollouts
10          A        78.2 ± 1.11      81.4 ± 0.6       82.4 ± 0.87
10          B        81.24 ± 2.88     85.2 ± 1.2       84.33 ± 0.72
30          A        82.17 ± 1.7      87.1 ± 1.8       88.17 ± 1.3
30          B        83.62 ± 2.03     90.8 ± 0.9       91.5 ± 1.067
50          A        83.11 ± 0.8      90.17 ± 1.2      92.83 ± 1.8
50          B        84.29 ± 1.21     92.6 ± 1.83      96.8 ± 1.48

Classification Accuracy in Hexapod Environment
Timesteps   Method   1000 Rollouts    2000 Rollouts    7000 Rollouts
10          A        22.2 ± 0.6       33.1 ± 1.23      44.6 ± 0.9
10          B        32.6 ± 0.8       38.5 ± 1.1       47.8 ± 1.13
30          A        60.5 ± 1.9       62.9 ± 1.8       79.67 ± 1.02
30          B        65.45 ± 1.2      69.17 ± 1.11     82.6 ± 1.28
50          A        65.23 ± 1.3      69.7 ± 1.1       82.2 ± 1.8
50          B        68.83 ± 1.8      72.17 ± 1.29     87.6 ± 0.86
Table 1. Classification accuracy in predicting damage class in Ant and Hexapod environment with varying number of timesteps and rollouts. Method A represents using observations of damaged run only as time series and method B represents using difference of observations between healthy robot and damaged robot as time series.
Figure 4. Forward reward comparison between DA-PPO and PPO-Unaware across different damage classes in Hexapod

5.2. Damage Aware-Proximal Policy Optimization

We start by creating a baseline model for comparison. We define a model using a PPO policy which is trained on experiments with the damaged robot but without the augmented observation space (i.e., without explicit knowledge of the damage class), and call it PPO-Unaware. This is analogous to a policy implementing domain randomization in the damage space but without a feedback loop. Our proposed model, which uses the damage aware PPO policy, is called DA-PPO. The performance metric used is the forward reward of the agent, averaged across all damage classes. Fig. 3 shows the training curve comparison between PPO-Unaware and DA-PPO in the Ant and Hexapod environments (see Fig. 3(a), 3(b)). DA-PPO shows a 60.7% improvement in average forward reward in the Ant environment, while in the Hexapod environment there is a 31.5% reward gain over PPO-Unaware.

Figure 5. Forward reward comparison between PPO-Unaware and DA-PPO across different grouped damage classes in Ant. D1 and D2 refer to single jammed-joint and single missing-toe damages. A and O denote that the two damages are present on adjacent (A) or opposite (O) limbs.

For the Hexapod environment, we also use the concept of curriculum learning (Bengio et al., 2009), progressively training on cases which are more difficult. We implement this by increasing the percentage of damage classes in the training examples and also progressively increasing the severity of damages (by including multiple damages). In Fig. 3(b), each piece-wise curve represents a stage (I, II, III or IV) in the curriculum learning process: stage I has 100% healthy cases, stage II has 60% healthy and 40% single damage cases, stage III has 70% healthy and single damage cases and 30% multiple damage cases, and stage IV has all damages equally likely. In this way, we were able to encourage faster learning progress.
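As an illustration, the curriculum stages could be expressed as a simple sampling schedule like the sketch below; the split of stage III's 70% share between healthy and single-damage cases is a placeholder, and the stage boundaries in training iterations are not shown.

```python
import random

# Per-stage probabilities of sampling (healthy, single damage, multiple damages)
CURRICULUM = {
    "I":   (1.0, 0.0, 0.0),              # 100% healthy
    "II":  (0.6, 0.4, 0.0),              # 60% healthy, 40% single damage
    "III": (0.35, 0.35, 0.3),            # placeholder split of the 70% healthy/single share
    "IV":  (1 / 3, 1 / 3, 1 / 3),        # all damage severities equally likely
}

def sample_damage_severity(stage):
    """Pick a damage severity ('healthy', 'single', 'multiple') for one training episode."""
    p_healthy, p_single, p_multiple = CURRICULUM[stage]
    return random.choices(["healthy", "single", "multiple"],
                          weights=[p_healthy, p_single, p_multiple])[0]
```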

We also perform a per-class performance analysis of the two approaches across the various damage classes in both Ant and Hexapod (see Fig. 4, 5). In the Ant environment, DA-PPO performs better than PPO-Unaware in 82.84% of the damage classes. Comparing across damage classes, DA-PPO is seen to adapt particularly well (in terms of reward improvement over PPO-Unaware) when damages occur on opposite limbs, as compared to damages occurring on adjacent limbs. In the Hexapod environment, DA-PPO performs better than PPO-Unaware in 72.6% of the damage classes (see Fig. 4). This shows that being damage aware results in a significant improvement in performance in the presence of adversaries.

6. Conclusions

We have proposed and implemented a two-part control architecture for robotic damage adaptation. This is particularly useful when robots are used in hazardous environments, where human intervention is nearly impossible.

Our approach enables the agent to autonomously identify and understand the damage that has occurred in its physical structure and adapt its gait accordingly. Since the ultimate goal is the creation of intelligent machines, understanding the damage is as important as adapting to it, something which has often been overlooked in past works.

Compared with map-based approaches, DA-PPO does not require a map generation phase, and thus its initial training time is much lower. This advantage is compounded by the fact that our approach adapts to the damage in a single trial, without trying multiple well-performing gaits and without having to be reset to the initial state to perform the trial.

Our work can also be easily scaled to a larger number of damage classes. Since no differentiation is made between the causes of damage, adaptation is possible for both morphological and external damages. Also, in the case of unknown damages, the network is expected to predict the damage class which most resembles the actual damage and choose a gait accordingly. This implies a very low rate of complete failure. We intend to study this further in future work.

Future work will focus on extending the algorithm to handle environmental adversaries, which is highly desirable since real-world environments are not predictable. We also intend to extend DA-PPO to complex and dynamic environments, using SLAM (Durrant-Whyte and Bailey, 2006). Finally, we plan to extend our method and demonstrate its effectiveness by applying it to a physical robot.

References