Subgoal-based Reward Shaping to Improve Efficiency in Reinforcement Learning

04/13/2021
by Takato Okudo, et al.

Reinforcement learning, which acquires a policy that maximizes long-term rewards, has been actively studied. Unfortunately, it is often too slow for practical use because the state-action space becomes huge in real environments. Many studies have therefore incorporated human knowledge into reinforcement learning. Human knowledge is often given as trajectories, but providing them requires a human to control the agent, which can be difficult. Knowledge of subgoals lessens this requirement, because humans need only think of a few representative states on an optimal trajectory. Rewards are the essential factor for learning efficiency, and potential-based reward shaping is a basic method for enriching them. However, it is difficult to incorporate subgoals into potential-based reward shaping to accelerate learning, because the appropriate potentials are not intuitive for humans. We extend potential-based reward shaping and propose subgoal-based reward shaping, which makes it easier for human trainers to share their knowledge of subgoals. To evaluate the method, we obtained subgoal series from participants and conducted experiments in three domains: four-rooms (discrete states and actions), pinball (continuous states, discrete actions), and picking (continuous states and actions). We compared our method with a baseline reinforcement learning algorithm and other subgoal-based methods, including random subgoals and naive subgoal-based reward shaping, and found that our reward shaping outperformed all of them in learning efficiency.

I Introduction

Reinforcement learning (RL) can acquire a policy that maximizes long-term rewards in an environment. Designers do not need to specify how to achieve a goal; they only need to specify, through a reward function, what the learning agent should achieve. An RL agent performs both exploration and exploitation to find out how to achieve the goal by itself. The state-action space is commonly quite large in real environments such as robotics. As the space becomes larger, the number of iterations needed to learn an optimal policy increases exponentially, and learning becomes too slow to obtain an optimal policy in a realistic amount of time. Since a human may have knowledge that would be helpful to such an agent, a promising approach is to utilize human knowledge [ijcai2018-817], [DBLP:conf/atal/HarutyunyanBVN15a], [8708686]. Wang and Taylor’s approach transfers a policy by asking a human to select an action for a state during the learning phase [DBLP:conf/aaaiss/WangT16]. Griffith et al. [NIPS2013_5187] used interactive human feedback as direct policy labels.

The reward function is the factor most closely related to learning efficiency. Most difficult tasks in RL have a sparse reward function [Yusuf2018], under which the agent cannot evaluate its policy and thus struggles to learn the optimal one. In contrast, learning speeds up when the reward function is dense. Inverse reinforcement learning (IRL) [Ng:2000:AIR:645529.657801], [10.1145/1015330.1015430] is the most popular method for enriching the reward function: it uses an optimal policy to generate a dense reward function, and recent studies have utilized optimal trajectories [NIPS2016_6391], [10.5555/1620270.1620297]. However, providing optimal trajectories or policies imposes a cost on the teacher, who may lack the required skills. In robotics tasks in particular, humans need both robot-handling skills and knowledge of the optimal trajectory. We focus on knowledge of subgoals because it requires no robot-handling skills; humans need only think of a few subgoal states on the optimal trajectory. Moreover, trajectories that fail can still be utilized. Potential-based reward shaping can add external rewards while keeping the optimal policy of the environment [Ng+HR:1999]. The shaping reward is calculated as the difference between a real-valued potential function evaluated at the current and previous states. A naive potential function for utilizing subgoal knowledge assigns high potential only to subgoal states; however, as shown in Section VI, this does not work well, so it is not easy to incorporate subgoal knowledge through potential-based reward shaping.

In this paper, we propose subgoal-based reward shaping, which accelerates learning with knowledge of subgoals while keeping the original optimal policy. The remainder of this paper is organized as follows. Section II reviews related work on RL and studies similar to our reward shaping with subgoal knowledge. The preliminaries, including notation and algorithms for RL and potential-based reward shaping, are described in Section III. We then define subgoals and present the proposed subgoal-based reward shaping algorithm in Section IV. Section V gives an overview of the user study in which we obtained subgoal knowledge from participants. Section VI describes the three domains (four-rooms, pinball, and pick and place) and presents the settings and results of the experiments conducted to evaluate the proposed method. Section VII discusses the subgoals given by participants, hyperparameter tuning, and other issues. The conclusion is provided in Section VIII.

II Related Work

II-A Human Knowledge for Reinforcement Learning

Many studies utilize human knowledge in RL, in the form of trajectories, policies, preferences, actions, feedback, and subgoals. From a policy or trajectories provided by a human, IRL infers an unobserved reward function [ARORA2021103500]. Reward functions are known to be difficult to design because of reward misspecification [DBLP:journals/corr/AmodeiOSCSM16] and reward hacking [10.5555/1671238]: misspecification leaves out important aspects of the desired behavior, and reward hacking arises when the design fails to penalize all possible misbehavior. Since the designer only provides policies or trajectories without defining the reward function directly, IRL overcomes these difficulties. The reward function is modeled as a weighted linear sum of features and is acquired by margin-based optimization [Ng:2000:AIR:645529.657801] or entropy-based optimization [NIPS2016_6391]. Since IRL generates a rich reward function, it is also a powerful solution when an environment has sparse rewards. In this respect the approach is similar to ours; the difference is that we use only a few specific states as subgoals.

In RL with human preferences [Christiano2017], a human teacher compares a pair of trajectory segments and selects the better one. The method learns a policy without access to the environmental rewards, using a predictor of human preferences instead.

In dataset aggregation (DAgger) [pmlr-v15-ross11a], a human expert provides the action to take in each state that the agent visits.

In the field of interactive reinforcement learning, a learning agent interacts with a human trainer as well as the environment [doi:10.1080/09540091.2018.1443318]. The TAMER framework is a typical interactive framework for reinforcement learning [KCAP09-knox]: the human trainer observes the agent’s actions and provides binary feedback during learning. Since humans often lack programming skills and knowledge of algorithms, the method relaxes the requirements for being a trainer. We also aim for fewer trainer requirements, and we use a GUI on a web system in the experiments with navigation tasks.

The RGoal architecture by Ichisugi et al. [10.1007/978-3-030-30484-3_9] deals with recursive subgoals in HRL. The method solves an MDP over an augmented state space in which each subgoal has its own state space. In multitask settings, the method sped up learning by sharing subroutines between tasks. Murata et al. [10.5555/1295303.1295360] used subgoals to accelerate learning in RL. A reward is generated at the achievement of each subgoal, and there is a Q-function for each subgoal; the policy selects an action according to the weighted sum of the subgoal Q-functions and the original Q-function.

II-B Subgoals for Reinforcement Learning

We first consider the hindsight experience replay (HER) algorithm [NIPS2017_7090], which uses subgoals. HER regards a failure as a success in hindsight and generates a reward for it. The algorithm interprets a failed trajectory as progress toward some alternative target, which plays a role similar to a subgoal. HER accelerates learning in multi-goal environments, whereas we focus on single-goal environments.

Hierarchical reinforcement learning (HRL) also utilizes subgoals. The option framework is the dominant approach in HRL. The framework of Sutton et al. [Sutton:1999:MSF:319103.319108] can transfer policies learned within options. An option consists of an initiation set, an intra-option policy, and a termination function, and it expresses a combination of a subtask and a policy for that subtask. The termination function takes on the role of a subgoal because it terminates an option and triggers the switch to another option. Recent methods find good subgoals for a learner simultaneously with policy learning [Bacon:2017:OA:3298483.3298491], [10.5555/3305890.3306047]. The differences from our method are whether the policy operates over abstract states and whether rewards are generated. The option framework is intended to recycle learned policies, whereas our method focuses on improving learning efficiency.

II-C Reward Shaping

Reward shaping has been studied actively. Marom and Rosman [Marom2018] proposed a Bayesian reward shaping method called belief reward shaping (BRS). The expected value of the environmental reward for a state and an action, taken over its estimated probability distribution, is used as the shaping reward. They proved that the policy learned by Q-learning augmented with BRS is consistent with an optimal policy of the original MDP.

The landmark-based reward shaping of Demir et al. [Demir2019] is the closest to our method. It shapes rewards only at landmarks using a value function. Their study focuses on POMDP environments, where landmarks automatically become abstract states; we focus on Markov decision process (MDP) environments and acquire subgoals from human participants.

Reward shaping in HRL has been studied in [Gao2015, Li2019]. Gao et al. [Gao2015] showed that potential-based reward shaping preserves policy invariance for the MAXQ algorithm. Designing potentials at every level is laborious; we use a single high-level value function as the potential, which reduces the design load. Li et al. [Li2019] incorporated an advantage function over the high-level state-action space into reward shaping. The reward shaping method in [Paul2019] utilizes subgoals that are automatically discovered from expert trajectories, and a different potential is generated for each subgoal.

Potential-based advice is reward shaping over states and actions [Wiewiora2003]. The method shapes the Q-value function directly for a state-action pair, which makes it easy for a human to advise an agent on whether an action in a given state is good or not. Subgoals, in contrast, indicate what ought to be achieved on the trajectory to a goal rather than which action to take, so we adopt shaping based on a state value function.

Harutyunyan et al. [Harutyunyan2015] showed that Q-values learned from arbitrary rewards can be used for potential-based advice. The method mainly assumes that a teacher negates the agent’s selected action, so it exploits failures during trial and error; in contrast, our method exploits successes.

III Preliminaries

III-A Reinforcement Learning

Reinforcement learning is a framework for acquiring a policy that maximizes future rewards through interactions between an agent and an environment. It works under a Markov decision process (MDP). An MDP is represented as a tuple \langle S, A, T, \gamma, R \rangle, where S is a finite set of states, A is a set of actions, T is a state transition function, \gamma is the discount factor, and R specifies a reward function. An agent has a policy \pi and value functions V^\pi(s) and Q^\pi(s, a). There are two families of methods for learning an optimal policy, value-based methods [Szepesvari:2010:ARL:1855083] and policy gradient-based methods [NIPS1999_1713]. Value-based methods indirectly learn a policy by estimating the optimal value function; policy gradient-based methods directly learn the parameters of a policy.

The initial state is s_0. The agent selects action a_t \in A in the current state s_t \in S, and the environment then returns the next state s_{t+1} and reward r_{t+1}. The agent updates policy \pi on the basis of the reward. Note that S is the state space and A is the action space. The value functions are defined as follows,

V^\pi(s) = \mathbb{E}_\pi \left[ \sum_{k=0}^{\infty} \gamma^k r_{t+k+1} \mid s_t = s \right]    (1)
Q^\pi(s, a) = \mathbb{E}_\pi \left[ \sum_{k=0}^{\infty} \gamma^k r_{t+k+1} \mid s_t = s, a_t = a \right]    (2)

Equations (1) and (2) give the expected discounted cumulative reward obtained by following policy \pi from a state or a state-action pair. \gamma is the discount rate, V^\pi is the state value function, and Q^\pi is the action value function. The optimal value functions are as follows.

V^*(s) = \max_\pi V^\pi(s)    (3)
Q^*(s, a) = \max_\pi Q^\pi(s, a)    (4)

The agent does not know the dynamics of the environment, i.e., the state transition function and the reward function, and needs to estimate them by sampling. Q-learning is a core reinforcement learning method for estimating the optimal action value function [Sutton1998]. The update equation is as follows.

Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ r_{t+1} + \gamma \max_a Q(s_{t+1}, a) - Q(s_t, a_t) \right]    (5)

\alpha is the learning rate. The action value function converges to the optimal one under suitable conditions on \alpha and an infinite number of trials.
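
As a concrete illustration, the following is a minimal Python sketch of the tabular Q-learning update in Equation (5); the Gym-style environment interface (env.reset(), env.step(a)) and the hyperparameter values are assumptions for illustration, not part of the paper.

import numpy as np

def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning with an epsilon-greedy behavior policy."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy action selection
            if np.random.rand() < epsilon:
                a = np.random.randint(n_actions)
            else:
                a = int(np.argmax(Q[s]))
            s_next, r, done, _ = env.step(a)
            # Equation (5): move Q(s, a) toward r + gamma * max_a' Q(s', a')
            td_target = r + (0.0 if done else gamma * np.max(Q[s_next]))
            Q[s, a] += alpha * (td_target - Q[s, a])
            s = s_next
    return Q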

III-B Potential-based Reward Shaping

Potential-based reward shaping (PBRS) [Ng+HR:1999, DBLP:journals/corr/abs-1106-5267] is an effective method for keeping the original optimal policy of an environment while adding a reward function. If the potential-based shaping function is formed as

F(s, s') = \gamma \Phi(s') - \Phi(s)    (6)

it is guaranteed that the optimal policies of the original MDP are consistent with those of the shaped MDP. \Phi is a real-valued function that takes a state as its argument. Wiewiora showed that a reinforcement learner whose initial Q-values are based on the potential function makes the same updates throughout learning as a learner receiving potential-based shaping rewards [DBLP:journals/corr/abs-1106-5267]. Consider two learners L and L'. L initializes its Q-values to Q_0(s, a) and updates its policy with the environmental reward plus the shaping reward F. The Q-values of L' are initialized to Q_0(s, a) + \Phi(s), and L' uses only the environmental reward. The Q-values of the two learners can be decomposed into their initial values, Q_0 and Q_0 + \Phi, and the increments accumulated through learning. If L and L' learn on the same sequence of experiences, the increment of L is always equal to that of L'. Moreover, the value-based policies of L and L' have an identical probability distribution over their next action. Our proposed method uses potential-based shaping rewards to keep the optimal policy of the original MDP.
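
A minimal sketch of how a shaping reward of the form in Equation (6) would be added to an environmental reward in Python; the potential function phi here is an arbitrary user-supplied mapping from states to real numbers, not yet the subgoal-based potential proposed in this paper.

def shaping_reward(phi, s, s_next, gamma=0.99):
    """Potential-based shaping reward of Equation (6): F(s, s') = gamma * phi(s') - phi(s)."""
    return gamma * phi(s_next) - phi(s)

# Inside a learning loop, the agent would be trained on r + F instead of r:
#   s_next, r, done, _ = env.step(a)
#   r_total = r + shaping_reward(phi, s, s_next, gamma)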

IV Subgoal-based Reward Shaping

We propose subgoal-based reward shaping (SRS), which incorporates human knowledge of subgoals into an algorithm via reward shaping. In [Ng+HR:1999], it was noted that learning speeds up when a learned value function is used as the potential function. Since a policy learned with potential-based reward shaping is equivalent to one learned with Q-values initialized by the potential function [DBLP:journals/corr/abs-1106-5267], using the learned value function as the potential is equivalent to initializing the value function with it. However, the learned values cannot be prepared before learning. We therefore consider approximating the optimal values by using subgoals, without a learned value function. To this end, in this section we first define subgoals, then extend potential-based reward shaping, and finally propose a subgoal-based potential.

IV-A Subgoal

We define a subgoal as a state that is the goal of one of the sub-tasks into which a task is decomposed. In the option framework, a subgoal is the goal of a sub-task and is expressed as a termination function [Sutton:1999:MSF:319103.319108]. Many studies on the option framework have developed automatic subgoal discovery [Bacon:2017:OA:3298483.3298491]. We aim to incorporate human subgoal knowledge into the reinforcement learning algorithm with little human effort. A subgoal is expected to lie on an optimal trajectory, because a human decomposes the task with achieving the goal in mind. In the experiments, we acquire a subgoal series and incorporate its subgoals into our method. The subgoal series is written formally as g_1, g_2, \ldots, g_n, where each g_i belongs to the set of subgoals, a subset of S. There are two types of subgoal series, totally ordered and partially ordered, as shown in Fig. 1.

(a) A totally ordered series.
(b) A partially ordered series.
Fig. 1: Ordered subgoals.

With totally ordered subgoals, the next subgoal is deterministically determined at any subgoal. In contrast, with partially ordered subgoals, several transitions to subsequent subgoals are possible from a given subgoal. We use only totally ordered subgoal series in this paper, but both types can be used with our proposed reward shaping. Since an agent needs to achieve a subgoal only once, the transition between subgoals is unidirectional.
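
To make the notion of a totally ordered subgoal series concrete, the following Python sketch tracks which subgoal comes next and counts achievements along a state history; the achievement predicate is a placeholder that a concrete domain would supply (for example, an exact state match in four-rooms or a circle test in pinball).

class SubgoalSeries:
    """A totally ordered subgoal series g_1, ..., g_n.

    Subgoals must be achieved in order, each one only once, so the
    transition between subgoals is unidirectional.
    """

    def __init__(self, subgoals, is_achieved):
        self.subgoals = list(subgoals)    # ordered subgoal descriptions
        self.is_achieved = is_achieved    # predicate: (state, subgoal) -> bool

    def count_achieved(self, history):
        """Number of subgoals achieved, in order, along a state history."""
        idx = 0
        for s in history:
            if idx < len(self.subgoals) and self.is_achieved(s, self.subgoals[idx]):
                idx += 1
        return idx

For a discrete domain such as four-rooms, the predicate could simply test state equality, e.g., SubgoalSeries([g1, g2], lambda s, g: s == g).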

IV-B Potential-based Reward Shaping with State History Guarantees Policy Invariance

We extend potential-based reward shaping to allow the history of the states an agent has visited as an input to the potential function. Informally, if the potential function receives the history of states in the difference equation, the guarantee of policy invariance remains. Formally,

F(h_t, s_t, s_{t+1}) = \gamma \Phi(h_{t+1}, s_{t+1}) - \Phi(h_t, s_t)    (7)

where h_t is the history of the states that the agent has visited up to state s_t, i.e., h_t = (s_0, s_1, \ldots, s_t). To prove policy invariance, we first define the discounted return of an arbitrary policy when visiting a state, without shaping. Formally,

U_t = \sum_{i=0}^{\infty} \gamma^i r_{t+i+1}    (8)

Note that t is a time step. The Q-function can be rewritten as follows:

Q^\pi(s_t, a_t) = \mathbb{E}_\pi \left[ U_t \mid s_t, a_t \right]    (9)

Now consider the same policy but with a reward function modified by adding a potential-based reward function of the form given in Equation (7). The return of the shaped policy experiencing the same sequence is,

U'_t = \sum_{i=0}^{\infty} \gamma^i \left[ r_{t+i+1} + F(h_{t+i}, s_{t+i}, s_{t+i+1}) \right] = U_t - \Phi(h_t, s_t)    (10)

where U'_t denotes the shaped return and the shaping terms telescope to -\Phi(h_t, s_t). The shaped action value function follows from Equation (9) and is expressed as follows.

Q'^\pi(s_t, a_t) = \mathbb{E}_\pi \left[ U'_t \mid s_t, a_t \right] = Q^\pi(s_t, a_t) - \Phi(h_t, s_t)    (11)

In Equation (11), the term \Phi(h_t, s_t) does not take the action as a parameter. Since it is independent of the action taken, the best action in any given state remains the same regardless of shaping. Therefore, we can conclude that the guarantee of policy invariance remains.

IV-C Subgoal-based Potentials

The subgoal-based potential is the main part of our method. We design potentials that utilize subgoals to approximate the optimal value function. Generally, in an environment with only a goal reward, the larger the optimal value of a state, the closer that state is to the goal. We regard subgoal achievement as approaching the goal, and we design a potential that grows with every subgoal achievement. Formally, we define the subgoal-based potential as

\Phi(h_t, s_t) = \eta \, n(h_t, s_t)    (12)

where \eta is a hyperparameter and n(h_t, s_t) is a function that counts the subgoals achieved, in order, along the visited state sequence h_t up to the current state s_t. An output sequence of the potential function is shown in Fig. 2.

Fig. 2: Potential function for subgoals.

Fig. 2 shows the sequence of an agent achieving three subgoals. When the agent achieves the first subgoal, the potential function outputs \eta. Upon the achievement of the second subgoal, the output becomes larger than \eta because n(h_t, s_t) increases.
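
Building on the SubgoalSeries sketch from Section IV-A, and reading Equation (12) as the hyperparameter \eta times the number of subgoals achieved so far, a minimal sketch of the subgoal-based potential could look as follows.

def subgoal_potential(eta, subgoal_series, history):
    """Subgoal-based potential: eta times the number of subgoals achieved
    so far, following the counting function n described in the text."""
    return eta * subgoal_series.count_achieved(history)

For the sequence in Fig. 2, the output would be 0 before the first subgoal, eta after the first, and 2 * eta after the second.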

IV-D Whole Algorithm

The pseudocode is shown in Algorithm 1:

1:  Initialize the value function and policy \pi
2:  Observe the initial state s_0; h_0 \leftarrow (s_0); t \leftarrow 0
3:  repeat
4:      Choose a_t according to \pi
5:      Take a_t in s_t, observe s_{t+1}, r_{t+1}
6:      h_{t+1} \leftarrow h_t
7:      Append s_{t+1} into h_{t+1}
8:      \Phi_t \leftarrow \eta \, n(h_t, s_t);  \Phi_{t+1} \leftarrow \eta \, n(h_{t+1}, s_{t+1})
9:      F_t \leftarrow \gamma \Phi_{t+1} - \Phi_t
10:     Update the value function and policy with r_{t+1} + F_t
11:     t \leftarrow t + 1
12: until termination.
Algorithm 1 Subgoal potential-based reward shaping.

Though we consider totally ordered subgoals in the experiments, extending the algorithm to a sequence of partially ordered subgoals is easy: only the counting function n(\cdot) needs to be modified. After the agent takes an action and receives a state from the environment, the state is appended to the history of states h. The calculation of the shaping reward F_t involves \Phi(h_{t+1}, s_{t+1}) and \Phi(h_t, s_t), and the function n(\cdot) is used for computing these potentials.
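
Putting the previous sketches together, the following is a runnable Python sketch of one episode of Algorithm 1; the agent object with select_action and update methods and the Gym-style env are assumptions, and the update step stands in for whatever base RL algorithm is used.

def train_with_srs(env, agent, subgoal_series, eta=0.01, gamma=0.99,
                   max_steps=1000):
    """One episode of learning with subgoal-based reward shaping (cf. Algorithm 1)."""
    s = env.reset()
    history = [s]                            # h_0 = (s_0)
    phi_prev = subgoal_potential(eta, subgoal_series, history)
    for _ in range(max_steps):
        a = agent.select_action(s)           # choose a_t according to pi
        s_next, r, done, _ = env.step(a)
        history.append(s_next)               # append s_{t+1} into h
        phi_next = subgoal_potential(eta, subgoal_series, history)
        f = gamma * phi_next - phi_prev      # Equation (7)
        agent.update(s, a, r + f, s_next, done)   # learn from the shaped reward
        s, phi_prev = s_next, phi_next
        if done:
            break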

V User Study: Human Subgoal Acquisition

In this section, we explain the user study conducted to acquire human knowledge of subgoals. We used navigation tasks in two domains, four-rooms and pinball, that express the state space spatially. Additionally, we used a pick and place task, a basic robotics task in which humans can have difficulty controlling the robot.

V-A Navigation Task

We conducted an online user study to acquire human subgoal knowledge using the web-based GUI shown in Fig. 3.

Fig. 3: UI for acquiring subgoals.

We recruited 10 participants, half of whom were graduate students in a computer science department and half of whom were not (6 males and 4 females, ages 23 to 60, average 36.4). We confirmed that they had no expertise regarding subgoals in the two domains. Participants were given the same instructions for the two domains and were then asked to designate two subgoals, first for the four-rooms domain and then for the pinball domain, in this fixed order. The number of subgoals was the same as the number of hallways on the optimal trajectory in the four-rooms domain. The instructions explained to the participants what subgoals are and how to set them through the GUI shown in Fig. 3, and specific explanations of the two task domains were also given. In this experiment, we acquired just two subgoals for learning, since two are intuitively easy to give on the basis of the structure of the problems, and we treated them as totally ordered.

V-B Pick and Place Task

The user study for the pick and place task was also done online. Since it was difficult to acquire human subgoal knowledge with a GUI, we used a descriptive answer-type form. We assumed that humans use subgoals when they teach behavior verbally: they state not how to move but what to achieve in the middle of the behavior. The results of this paper weakly support this assumption. We recruited five participants who were amateurs in the field of computer science (3 males and 2 females, ages 23 to 61, average 38.4). The participants read the instructions and then typed their answers in a web form. The instructions consisted of a description of the pick and place task, a movie of manipulator failures, a glossary, and a question on how a human could teach successful behavior. The question included the sentence “Please teach behavior like you would teach your child,” because in a preliminary experiment some participants answered that they did not know how to teach the robot. We imposed no limit on the number of subgoals.

VI Experiments for Evaluation

As mentioned above, we conducted experiments to evaluate our proposed method in three domains: four-rooms, pinball, and pick and place. The four-rooms and pinball domains are navigation tasks in which an agent travels to its goal in the fewest steps. The pick and place domain is a robotics task in which a robot aims to pick up an object and bring it to a target space. Fig. 4 shows images of the three domains.

(a) Four-rooms domain.
(b) Pinball domain.
(c) Pick and place domain.
Fig. 4: Three domains.

We compared human subgoal-based reward shaping (HSRS) with three other methods: a baseline RL algorithm, random subgoal-based reward shaping (RSRS), and a naive subgoal reward method (NRS). A different baseline RL algorithm was used in each domain to show that our method can be adapted to various RL algorithms. The reward shaping methods HSRS, RSRS, and NRS were implemented on top of the baseline algorithm. HSRS used the participants’ subgoals described in Section V.

Each domain had 20 random subgoals, the same as the number of the participants’ subgoals. For pick and place, we generated two random observations with ranges as subgoals. The subgoals in pinball and pick and place had ranges because both domains have continuous state spaces. In NRS, the potential function generates a scalar value only when the agent visits a subgoal state. It is written formally as follows.

\Phi(s_t) = \begin{cases} \eta & \text{if } s_t \text{ is a subgoal state} \\ 0 & \text{otherwise} \end{cases}    (13)

The difference from our method is that the positive potential is given only at subgoal states. We evaluated learning efficiency with the time to threshold and the asymptotic performance [Taylor:2009:TLR:1577069.1755839] in terms of the effect of human subgoal knowledge. The time to threshold is the number of episodes needed to get below a predefined number of threshold steps, and the asymptotic performance is the final performance of learning. All experiments were conducted on a PC (Ryzen 9 5950X, 16 cores at 3.4 GHz, 128 GB of memory).
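
For clarity, here is a small sketch of how the two learning-efficiency measures could be computed from a recorded learning curve of steps per episode; the windowed average used for the asymptotic performance is our illustrative assumption, since the paper only states that it is the final performance of learning.

import numpy as np

def time_to_threshold(steps_per_episode, threshold):
    """First episode whose step count falls below the threshold."""
    for episode, steps in enumerate(steps_per_episode):
        if steps < threshold:
            return episode
    return len(steps_per_episode)            # threshold never reached

def asymptotic_performance(steps_per_episode, window=10):
    """Average performance over the final `window` episodes of learning."""
    return float(np.mean(steps_per_episode[-window:]))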

VI-A Navigation in Four-rooms Domain

The four-rooms domain has four rooms connected by four hallways and is a common RL task. In this experiment, learning consisted of 1000 episodes. An episode ran until the agent reached the goal state or ended in failure after 1000 state-action steps. A state was expressed as a scalar index over all states. The agent could select an action from up, down, left, and right. An action failed with a probability of 0.33, in which case another random action was performed. A reward of +1 was generated when the agent reached the goal state. The start state and goal state were placed at fixed positions. Learning was repeated 100 times.

VI-A1 Algorithm Setup

We used SARSA, a basic RL algorithm, as the baseline. The value function was tabular, and the policy was soft-max. We set the learning rate to 0.01 and the discount rate to 0.99. The function n(\cdot) counted the subgoals achieved, in order, from the front of the state sequence h_t. We set \eta to 0.01 for HSRS, RSRS, and NRS after a grid search over 0.01, 0.1, 1, 10, and 100.

VI-A2 Subgoals

Fig. 5 shows the subgoal distribution acquired from the participants and the random subgoal distribution in the four-rooms domain.

(a) Participants’ subgoals.
(b) Random subgoals.
Fig. 5: Subgoal distributions in the four-rooms domain.

The numbers in the cells of Fig. 5 stand for the frequencies with which the participants selected those cells as subgoals; higher values are shown in yellow and lower values in orange. These subgoals are totally ordered. As shown in Fig. 5(a), participants tended to set their subgoals in the hallways. The start point, the goal, and the subgoals are colored red, blue, and green, respectively. HSRS and NRS used the subgoals shown in Fig. 5(a), and RSRS used those shown in Fig. 5(b).

VI-A3 Results

We show the results of the learning experiment. Fig. 6 shows the learning curves of our proposed method and the three other methods.

Fig. 6: Learning curves compared with three methods.

The HSRS curve is an average over 1000 learning runs across all participants, and the RSRS curve is likewise an average over 1000 runs. SARSA was averaged over 100 runs because it does not depend on subgoal patterns. NRS had almost the same conditions as HSRS. HSRS showed the fewest steps for almost all episodes. The performance of RSRS was close to that of HSRS but worse, and NRS had the worst performance, which shows the difficulty of transforming subgoals into an additional reward function. We also performed an ANOVA on the time to threshold among HSRS, RSRS, SARSA, and NRS, with the Holm-Bonferroni method used for the sub-effect tests. We set the thresholds to 500, 300, 100, and 50 steps. Table I shows the mean episodes and the standard deviations of the compared methods for each threshold.

Thres. HSRS RSRS SARSA NRS
500 2.17(1.54) 3.12(2.43) 3.2(2.61) 6.45(45.1)
300 4.27(3.07) 5.48(3.73) 5.69(3.67) 35.3(156)
100 18.8(9.42) 18.7(10.1) 27.7(11.9) 175(344)
50 35.9(11.7) 37.7(12.4) 53.9(17.8) 376(444)
TABLE I: Mean and standard deviation: mean (S.D.).

Table II shows the results of the ANOVA and the sub-effect tests for HSRS, RSRS, SARSA, and NRS.

Thres. HSRS<NRS RSRS<NRS SARSA<NRS otherwise
500 * * * n.s.
300 * * * n.s.
100 * * * n.s.
50 * * * n.s.
TABLE II: Summary of ANOVA and sub-effect tests for time (episodes) to threshold steps.

As shown in Table I and Table II, HSRS, RSRS, and SARSA were significantly lower than NRS but showed no significant differences among themselves. From Table I, HSRS shortened learning by up to 18 episodes compared with SARSA. Table III shows the asymptotic performances of all four methods.

Ind. HSRS RSRS SARSA NRS
Mean 19.1 19.1 24.9 552
S.D. 0.40 0.41 13.5 455
TABLE III: Asymptotic performances in four-rooms domain. Ind. is indicator, and S.D. is standard deviation.

There were no significant differences among the four methods in terms of asymptotic performance.

VI-B Navigation in Pinball Domain

The navigation task in the pinball domain involves moving a ball to a designated target by giving it velocity. The pinball domain is difficult for humans to control because it requires delicate actions, as is often the case in control domains, so it is easier to give subgoals than trajectories here.

The difference from the four-rooms domain is the continuous state space over the position and velocity of the ball on the x-y plane. The action space has five discrete actions: four directions of force (up, down, left, and right on the plane) and no force. A drag coefficient of 0.995 effectively stops the ball after a finite number of steps when the no-force action is chosen repeatedly, and collisions with obstacles are elastic. An episode terminates with a reward of +10,000 when the agent reaches the goal and is interrupted when it exceeds 10,000 steps. Learning was repeated 100 times from scratch for each method, and we used the results to evaluate learning efficiency.

VI-B1 Algorithm Setup

We used the actor-critic algorithm (AC) [NIPS1999_1786] as the baseline RL algorithm. AC is a basic RL algorithm that directly optimizes a parameterized policy using the value function: the parameterized policy is the actor, the value function is the critic, and both parts update their parameters. The critic used linear function approximation over a Fourier basis [Konidaris11a] of order 3, and the actor used a soft-max policy. A subgoal in this domain had only a center position and a radius, with the radius equal to that of the target; we assumed a subgoal was achieved when the ball entered the corresponding circle at any velocity. The learning rates were set to 0.01 for both the actor and the critic, and the discount factor was set to 0.99. The hyperparameter \eta for HSRS, RSRS, and NRS was set to 100 after a grid search over 10, 100, 1,000, and 10,000. The function n(\cdot) was almost the same as in the four-rooms domain except for the judgment of whether a state is a subgoal. We compared HSRS with the three other methods in terms of learning efficiency using the time to threshold, defined in this domain as the number of episodes required for the steps per episode to reach a threshold.
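
For reference, a minimal sketch of order-3 Fourier basis features for a linear critic, following Konidaris et al. [Konidaris11a]; the state is assumed to be normalized to [0, 1]^4 (ball position and velocity), and the TD(0) update shown is a generic illustration rather than the authors' implementation.

import itertools
import numpy as np

def fourier_basis(order, dim):
    """Features phi(x) = cos(pi * C x) for all multi-indices c in {0,...,order}^dim.
    For order 3 and dim 4 this yields (3 + 1)**4 = 256 features."""
    C = np.array(list(itertools.product(range(order + 1), repeat=dim)))
    def features(x):
        x = np.asarray(x, dtype=float)       # state normalized to [0, 1]^dim
        return np.cos(np.pi * C.dot(x))
    return features

phi = fourier_basis(order=3, dim=4)          # position and velocity of the ball
w = np.zeros((3 + 1) ** 4)                   # linear critic: V(x) ~ w . phi(x)

def critic_td_update(w, x, r_shaped, x_next, done, alpha=0.01, gamma=0.99):
    """TD(0) update of the linear critic on the shaped reward."""
    v = w.dot(phi(x))
    v_next = 0.0 if done else w.dot(phi(x_next))
    w += alpha * (r_shaped + gamma * v_next - v) * phi(x)
    return w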

VI-B2 Subgoals

Fig. 7 shows the subgoal distribution acquired from the participants and the random subgoal distribution in the pinball domain.

(a) Participants’ subgoals.
(b) Random subgoals.
Fig. 7: Subgoal distributions in the pinball domain.

Comparing Fig. 7(a) and Fig. 7(b), the participants concentrated their subgoals on four regions around branch points, in contrast to the random subgoals. HSRS and NRS used the subgoals shown in Fig. 7(a), and RSRS used those shown in Fig. 7(b).

VI-B3 Results

Fig. 8 shows the learning curves, smoothed with a 10-episode moving average.

Fig. 8: Learning curve in pinball domain.

HSRS mostly performed the best of the four methods. The performance of RSRS was close to that of HSRS but worse. We evaluated learning efficiency using the time to threshold. Each learning result was smoothed with a simple moving average over 10 episodes, and a Student’s t-test was performed to confirm the differences among the methods. Table IV shows the means and standard deviations, and Table V summarizes the results for the time to threshold, where Thres. is the number of steps.

Thres. HSRS RSRS AC NRS
3000 15.0 (32.7) 23.8 (45.7) 51.7 (67.1) 39.8 (50.2)
2000 28.0 (38.8) 33.2 (46.8) 65.4 (67.6) 59.0 (58.6)
1000 63.7 (60.0) 72.4 (72.8) 101 (71.3) 92.0 (69.2)
500 140 (67.0) 141 (14.5) 163 (54.8) 158 (61.0)
TABLE IV: Mean and standard deviation: mean (S.D.).
Thres. {HSRS, RSRS}<AC HSRS<NRS RSRS<NRS o.w.
3000 * * * n.s.
2000 * * * n.s.
1000 * * n.s. n.s.
500 n.s. n.s. n.s. n.s.
TABLE V: Summary of ANOVA and sub-effect tests for time (episodes) to threshold steps. Otherwise is abbreviated to o.w., and n.s. means not significant.

From Table V, HSRS and RSRS reached the 3000-, 2000-, and 1000-step thresholds in significantly fewer episodes than AC. There was no significant difference between HSRS and RSRS for any of the thresholds. Table VI shows the asymptotic performances of all four methods.

Ind. HSRS RSRS AC NRS
Mean 3008 3629 3625 3672
S.D. 2678 2925 3014 3042
TABLE VI: Asymptotic performances in pinball domain. Ind. is indicator, and S.D. is standard deviation.

There were no significant differences among the four methods in terms of asymptotic performance. Our method, with either random or human subgoals, led to more efficient learning than the basic RL algorithm in the pinball domain.

VI-C Pick and Place

We used a fetch environment based on the 7-DoF Fetch robotic arm in OpenAI Gym [1606.01540]. In pick and place, the robotic arm learns to grasp a box and move it to a target position [DBLP:journals/corr/abs-1802-09464]. We converted the original task into a single-goal reinforcement learning framework because potential-based reward shaping is not designed to deal with multiple goals [Ng+HR:1999]. The observation is higher-dimensional than in the previous navigation tasks, and the action space is continuous. An observation is 25-dimensional and includes the Cartesian positions of the gripper and the object as well as the object’s position relative to the gripper. The reward function generates a reward of -1 every step and 0 when the task is successful. The task is described in detail in [DBLP:journals/corr/abs-1802-09464].

VI-C1 Algorithm Setup

We compared HSRS with RSRS, NRS, and DDPG [journals/corr/LillicrapHPHETS15] in terms of learning efficiency with the time to threshold and the asymptotic performance. HSRS, RSRS, and NRS used DDPG as their base. We defined the time to threshold in this task as the number of epochs required to reach a designated success rate, and the asymptotic performance as the average success rate between 190 and 200 epochs. Ten workers stored episodes and calculated the gradient simultaneously in each epoch. HSRS and NRS used ordered subgoals provided by five participants, and RSRS used randomly generated subgoals. Learning for 200 epochs took several hours. We used the OpenAI Baselines [baselines] implementation of DDPG with default hyper-parameter settings. We built the hidden and output layers of the value network over abstract states with the same structure as the Q-value network, excluding the action from the input layer. The input of this network was only the observation at subgoal achievement, and the network learned from the discounted accumulated reward until subgoal achievement. A subgoal was defined from the information in the observation, and we set a margin to loosen the otherwise strict condition for achieving subgoals. \eta was set to 1 for HSRS, RSRS, and NRS. Learning was repeated 10 times, and the results were averaged over the 10 runs.
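
As an illustration of how subgoal achievement might be checked against a continuous observation with a margin, the following hypothetical helper compares selected observation dimensions with target values; the dimension indices, targets, and margin are placeholders, since the paper does not specify them.

import numpy as np

def within_margin(obs, subgoal_dims, subgoal_target, margin):
    """True if the selected observation dimensions are within `margin`
    of the subgoal's target values (indices, targets, and margin are placeholders)."""
    obs = np.asarray(obs)
    diff = np.abs(obs[subgoal_dims] - np.asarray(subgoal_target))
    return bool(np.all(diff <= margin))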

VI-C2 Subgoals

All five participants gave the same subgoal series: the first subgoal was reaching a location from which the object could be grasped, and the second was grasping the object. Fig. 9 shows an example of the subgoal series in the pick and place domain.

(a) First subgoal.
(b) Second subgoal.
Fig. 9: Example of subgoals from a participant in the pick and place domain.

VI-C3 Results

We used the subgoal series as the input to our method. Fig. 10 shows the learning curves of HSRS, RSRS, NRS, and DDPG.

Fig. 10: Learning curves in pick and place task.

As shown in Fig. 10, the results were averaged across five runs, and the shaded areas represent one standard error. The random seeds and the locations of the goal and object were varied for every run. HSRS worked more effectively than DDPG, especially after about the 75th epoch. NRS had the worst performance through almost all epochs.

Table VII shows the mean epochs and the standard deviations of the compared methods for each threshold success rate.

Thres. HSRS RSRS DDPG NRS
0.2 26.3 (22.3) 27.1 (6.77) 28.2 (26.4) 34.3 (18.7)
0.4 42.7 (27.2) 87.0 (49.6) 56.6 (37.5) 75.9 (43.5)
0.6 75.1 (56.7) 175 (46.4) 110 (63.7) 189 (26.2)
0.8 156 (57.6) 185 (35.7) 187 (22.9) 200 (0)
TABLE VII: Mean and standard deviation in pick and place: mean (S.D.).

Table VIII shows the asymptotic performances of all four methods.

Ind. HSRS RSRS DDPG NRS
Mean 0.73 0.53 0.63 0.51
S.D. 0.16 0.17 0.13 0.06
TABLE VIII: Asymptotic performances in pick and place. Ind. is indicator, and S.D. is standard deviation.

HSRS was significantly different from RSRS and NRS at the 0.6 threshold in terms of the time to threshold. Otherwise, there were no significant differences in either the time to threshold or the asymptotic performance, but HSRS performed best among the four methods.

In this section, we evaluated our method on four-rooms, pinball, and pick and place. In all domains, HSRS performed better than the other three methods in terms of the time to threshold and the asymptotic performance; in particular, there was a significant difference between HSRS and the baseline AC algorithm for pinball. The performance of RSRS was close to that of HSRS in the four-rooms and pinball domains, whereas RSRS performed second worst for pick and place. The results show that subgoal-based reward shaping with human knowledge of subgoals improved learning efficiency over the baseline RL algorithm, and that human subgoals were more useful for SRS than random ones. The naive reward shaping made learning less efficient than the baseline RL algorithm, so it is not easy for designers to incorporate subgoal knowledge directly through PBRS. This is why we proposed SRS, which improves learning efficiency given only a subgoal series.

VII Discussion

VII-A Analyzing Performances

As seen in Fig. 6 and Fig. 8, the difference between HSRS and RSRS was small in the four-rooms and pinball domains, whereas it was large in the pick and place domain. Approximately 65% of the randomly generated states lay on an optimal trajectory in the four-rooms domain, roughly 20% in the pinball task, and none in the pick and place task. The random subgoals therefore worked better in the four-rooms and pinball domains than in the pick and place task; we think the smaller difference from HSRS was caused by these domains having more of the randomly generated states on an optimal trajectory.

Potential-based reward shaping keeps the policy invariant under the transformation of the reward function. In the pinball domain, the asymptotic performance of HSRS was statistically significantly lower than that of RSRS, whereas there was no significant difference in the four-rooms domain. As shown in Fig. 6, performance was clearly asymptotic by the 121st of 1000 episodes, and learning in the pinball domain was clearly asymptotic by the 200th episode in Fig. 8. Since our proposed method is based on PBRS, RSRS converges to the same performance as HSRS if learning continues.

VII-B Hyperparameters

Fig. 11 shows the results of the grid searches. In the searches, we used only a single, randomly selected participant's subgoals, and the number of learning runs was half that used for the experimental results.

(a) Four-rooms domain.
(b) Pinball domain.
Fig. 11: Results of grid searches.

As the figure shows, our method was quite sensitive to the hyperparameter \eta. A wrong \eta in the four-rooms domain could deteriorate the performance of the method, which may incur a cost for tuning. To address this problem, \eta must be fitted to the scale of the optimal values. The environmental goal rewards were 1 and 10,000 for four-rooms and pinball, respectively, and the best \eta was 0.01 for four-rooms and 100 for pinball; both were one-hundredth of the environmental reward.

VII-C Teaching Knowledge of Subgoals

A limitation of subgoals is that there is no established method for providing them, and doing so may depend on individual intuition. For the four-rooms domain, as shown in Fig. 5, almost all the subgoals were located within the top-right and bottom-right rooms. From this, we think that many participants tended to regard the path through the right-hand rooms as the shortest one. This may mean that humans abstractly share a common methodology and preference for giving subgoals. Additionally, there was the interesting observation that half of the participants set a subgoal in a hallway.

For the pinball domain, as shown in Fig. 7, the subgoals were scattered at the gaps between obstacles and at branch points. They appear to lie at the points on an optimal trajectory where the action changes, so the repeated actions between them can be regarded as composing a macro-action. This is similar to the approach of [10.5555/3298483.3298547].

For the pick and place task, all five participants provided the same subgoals, even though the way of providing subgoals, a descriptive answer-type form, differed from the other tasks. These results suggest that people tend to provide similar subgoals.

We consider that people change their strategy for providing subgoals in response to task properties, especially the environment structure. Testing this hypothesis is future work.

There are other limitations and directions for future work. We implicitly assumed that a human can set subgoals more easily than they can provide optimal trajectories from a start state to a goal state, which is the conventional form of human knowledge used to improve the efficiency of reinforcement learning with IRL. We need to verify this assumption by conducting experiments with participants that measure the cognitive load of setting various subgoals and trajectories. We consider that the ease of providing subgoals depends significantly on the kind of subgoals, and we plan to conduct a large user study to clarify which tasks are best suited to providing subgoals.

VIII Conclusion

In reinforcement learning, learning a policy is time-consuming. We aimed to accelerate learning by transforming rewards on the basis of human subgoal knowledge. The difficulty of integrating subgoal knowledge into potentials has been an obstacle to improving learning efficiency. We proposed a method in which a human provides several characteristic states as subgoals, defining a subgoal as the goal state of one of the sub-tasks into which a human decomposes a task. The main part of our method is an approximation of the optimal value function using subgoals. We collected ordered subgoals from participants and used them for evaluation in navigation tasks in the four-rooms and pinball domains and in a pick and place task. The experimental results revealed that our method with human subgoals enabled faster learning than the baseline methods, and that human subgoal series were more helpful than random ones.

References