GRP Model for Sensorimotor Learning

03/01/2019 ∙ by Tianyu Li, et al. ∙ Carnegie Mellon University

Learning from complex demonstrations is challenging, especially when the demonstration consists of different strategies. A popular approach is to use a deep neural network to perform imitation learning. However, the structure of that deep neural network has to be "deep" enough to capture all possible scenarios. Beyond the machine learning issue, how humans learn at the physiological level has rarely been addressed, and relevant work on spinal cord learning is rarer still. In this work, we develop a novel modular learning architecture, the Generator and Responsibility Predictor (GRP) model, which automatically learns sub-task policies from an unsegmented controller demonstration and learns to switch between the policies. We also introduce a more physiologically grounded neural network architecture. We implemented the GRP model with our proposed neural network to form a model that transfers the swing leg control from the brain to the spinal cord. Our results suggest that with the GRP model the brain can successfully transfer the target swing leg control to the spinal cord, and the resulting model can switch between sub-control policies automatically.


I Introduction

Fig. 1: GRP model. The brain provides the reference control signal $u^*$ for the Generators; the difference between $u^*$ and the output $u_k$ of the Generator of the $k$th layer is used to generate the reference responsibility signal for the RP of the $k$th layer. The input in our setting is the sensory data. The output of this model is equivalent to the reference control signal $u^*$.

There are two distinct strategies for learning a complex task from demonstration. The first is to use a single deep neural network, which has been widely studied in the machine learning field and applied in computer graphics and robotics. With this approach, a single neural network can learn highly dynamic skills in simulation [1][2][3] and can act as the initial policy for further training that leads to deployment on real robots [4][5]. However, using a single monolithic policy to represent a structured activity or a cyclic phase structure can be challenging, since a single network does not make the sub-structure explicit and encapsulates all contexts [6][7]. Alternatively, instead of a monolithic controller that includes everything, a modular strategy uses multiple controllers, each responsible for only a small portion of the control. This approach has been introduced in the study of supervised learning from a mixture of demonstration data [8][9]. These works combine multiple expert networks with a classification network that splits the input space into regions in which individual experts specialize, according to the experts' outputs. A similar idea has been proposed that uses a directed graphical model and latent variables [6].

Although a variety of learning-from-demonstration works have been proposed, specific models that relate to human learning at the physiological level are relatively rare [10], and the majority of these works focus on the function of the brain. Gomi and Kawato [11] combined mixture-of-experts supervised learning with feedback-error learning [12]. Wolpert and Kawato, based on the idea that the brain contains multiple pairs of forward and inverse models, introduced the modular selection and identification for motor control (MOSAIC) model [13]. The RunBot study focused on learning at the spinal cord level, investigating the variation of neuron gains as the gait changes using differential Hebbian learning [14].

Though learning in the spinal cord has not been studied widely, the spinal cord plays an essential role in legged locomotion tasks [15][16]. Remarkable work on mesencephalic cats and paralyzed four-legged mammals [17][18] provides direct evidence that animals can generate adaptive leg behaviors in the absence of brain planning. These observations further suggest that the spinal cord generates part of the leg control in animal legged locomotion.

In this paper, we develop a model describing the transfer of a simple legged locomotion task, swing leg control, from the brain to the spinal cord. The GRP model is an online learning model inspired by previous work on modular architectures. We also propose a new form of neural network that is more aligned with biological findings on the interaction between two neurons. We then introduce our target swing leg controller and test the GRP model by learning the target swing leg control policy. The results show that our proposed model can learn complicated controllers and is also capable of learning to switch between the different learned controllers. Finally, we discuss the physiological implications of our model and its connection to robot learning.

II Models

Our learning model is inspired by feedback error learning [19][20]. In previous work, the brain uses the feedback error to learn the desired command in the cerebellum. We combine the idea of feedback error learning with a modular model to propose the GRP model, and we use the GRP model to represent the transfer of a swing leg control from the brain to the spinal cord. Moreover, we introduce a new form of neural network structure that uses multiplication as the interaction between two neurons. Finally, we present our physical swing leg model.

II-A Learning Model: GRP Model

To model the transfer of control between the brain and the spinal cord, we introduce an online learning model, the Generator and Responsibility Predictor (GRP) model, shown in Figure 1. The model is composed of multiple layers that are parallel to each other. Every layer contains a Generator and a Responsibility Predictor (RP). The Generators learn from the reference control signal $u^*$, which in our setting is provided by the brain. The RP estimates its corresponding Generator's 'responsibility' $\lambda_k$, where $k$ indicates the index of the layer. The responsibility can be interpreted as the weight of the Generator under the constraint

$$\sum_{k} \lambda_k = 1.$$

The RP in layer $k$ is trained with a reference responsibility signal $\lambda^*_k$ generated by a normalization function. This normalization function takes the difference between the reference control signal $u^*$ and the Generator output $u_k$,

$$\lambda^*_k = \frac{\exp\!\left(-(u^* - u_k)^2/\sigma\right)}{\sum_{j}\exp\!\left(-(u^* - u_j)^2/\sigma\right)}.$$

The intuition behind the normalization function is to output large values when $|u^* - u_k|$ is small and small values when it is large. $\sigma$ is the regularization term, a positive value that increases after every learning episode during training; its growth rate is a constant larger than one. The RP error $\lambda^*_k - \lambda_k$ is then sent backward for training the RP.
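A minimal sketch of this normalization as reconstructed here (the softmax-style form and the name `sigma` for the regularization term are our assumptions; only the qualitative behavior is taken from the text):

```python
import numpy as np

def reference_responsibility(u_ref, u_gen, sigma):
    """Reference responsibility signal lambda*_k for each layer (assumed form).

    u_ref : reference control signal provided by the brain
    u_gen : outputs u_k of the K Generators
    sigma : positive regularization term, grown by a constant factor (> 1)
            after every learning episode
    """
    err = (u_ref - np.asarray(u_gen)) ** 2
    scores = np.exp(-err / sigma)       # large when the Generator error is small
    return scores / scores.sum()        # responsibilities sum to one
```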

The total output of the online learning model is

$$u = \sum_{k}\lambda_k u_k + \Big(u^* - \sum_{k}\lambda_k u_k\Big) = u^*,$$

where the residual term in parentheses is supplied by the brain. This guarantees that while transferring control from the brain to the spinal cord, the human still performs locomotion in the target gait. The Generator and RP are both structured as neural networks, which are introduced in the following section. Their inputs in our setting are sensory data, which we discuss in Section IV.

II-B Neural Network Structure

Instead of using a classic neural network structure, we introduce an alternative neural network structure that enhances biological plausibility. While classic neural networks model the interaction between two neurons as a sum operation, the typical interaction between neurons known as presynaptic inhibition is better represented by multiplication. It is reasonable to assume that when one of the neurons does not sense anything, this neuron will not affect other neurons. Thus, we model the interaction between two neurons as the multiplication of the exponential of the input value. The network structure is shown in Figure 2. The input data $x$ is first sent through two channels, one of which inverts it to $-x$. These two signals then pass through a threshold function. This process models the positive and negative parts of the sensory data being sensed separately by two neurons, thus providing two inputs $x^+$ and $x^-$ (one of them is a positive value while the other must be 0) for the main part of the neural network.

Fig. 2: Neural network structure. The interaction between two neurons is represented as a multiplication. For example, the influence of neuron $j$ on neuron $i$ is the output of neuron $i$ multiplied by $e^{w_{ij}x_j}$, where $w$ is the weight matrix of the neural network. When $x_j = 0$, neuron $j$ does not affect neuron $i$.

The output of the $k$th Generator is

$$u_k = \sum_{i} x_i \prod_{j} e^{w^k_{ij}x_j},$$

where $w^k_{ij}$ is the weight that couples the $j$th input to the $i$th input of the $k$th Generator. For the RP's output $\lambda_k$, we add a Sigmoid activation function to ensure the predicted responsibility value is between [0, 1],

$$\lambda_k = \mathrm{Sigmoid}\!\left(\sum_{i} x_i \prod_{j} e^{v^k_{ij}x_j}\right),$$

where $v^k_{ij}$ is the weight that couples the $j$th input to the $i$th input in the $k$th layer of the RP.
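To make this structure concrete, below is a small sketch under our reading of the equations above; the pairwise multiplicative gating and the matrix-vector wiring are assumptions, and only the general idea of multiplicative interactions via exponentials of inputs is taken from the text:

```python
import numpy as np

def generator_output(x, W):
    """Generator output under our reading of the multiplicative structure.

    x : channel-split, non-negative inputs (shape (n,))
    W : pairwise interaction weights (shape (n, n)); W[i, j] couples
        source neuron j to target neuron i, presynaptic-inhibition style
    """
    # Each input neuron i is gated multiplicatively by exp(W[i, j] * x[j]);
    # an inactive neuron (x[j] = 0) contributes a neutral factor exp(0) = 1.
    gains = np.exp(W @ x)               # gains[i] = prod_j exp(W[i, j] * x[j])
    return float(np.sum(x * gains))

def rp_output(x, V):
    """RP output: the same structure squashed to [0, 1] with a sigmoid."""
    score = np.sum(x * np.exp(V @ x))
    return 1.0 / (1.0 + np.exp(-score))
```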

II-C Brain Control Transfer

In the proposed GRP model, the target control is provided by the brain. Refer to Section III for a detailed description of the target control. The error between the reference control $u^*$ and the Generator output $u_k$ is sent backward to the Generator for learning. Meanwhile, the RP error $\lambda^*_k - \lambda_k$ is sent back to the RP for the same purpose. The weights of the networks are learned via a gradient-based method using appropriate loss functions. $L^k_{G}$ and $L^k_{RP}$ are the loss functions for the Generator and the RP respectively,

$$L^k_{G} = \tfrac{1}{2}\big(u^* - u_k\big)^2 + c\,R_{G}, \qquad L^k_{RP} = \tfrac{1}{2}\big(\lambda^*_k - \lambda_k\big)^2 + c\,R_{RP},$$

where $R_{G}$ and $R_{RP}$ are regularization terms, and $c$ is a constant. The gradient for each weight can be computed analytically,

$$\frac{\partial L^k_{G}}{\partial w^k_{ij}} = -\big(u^* - u_k\big)\,x_i\,x_j\prod_{j'} e^{w^k_{ij'}x_{j'}} + c\,\frac{\partial R_{G}}{\partial w^k_{ij}},$$

where $x_i$ and $x_j$ are the input data for neurons $i$ and $j$.

During learning we use the reference responsibility signals to regulate the learning rate,

$$\eta_k = \lambda^*_k\,\eta,$$

where $\eta$ is the constant learning rate and $\eta_k$ is the learning rate for the $k$th layer of the Generator at the current time step. This ensures that a Generator does not learn when it takes no responsibility.
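Putting the pieces together, a hedged sketch of one online learning step is shown below. The Generator and RP objects, their `.forward`, `.grad`, and `.step` interfaces, and the variable names are ours; only the structure of the update (reference responsibilities gating the Generator learning rate, RPs trained toward the reference responsibilities) follows the text:

```python
import numpy as np

def grp_learning_step(x, u_ref, generators, rps, sigma, eta):
    """One online GRP update step (illustrative sketch only).

    generators, rps : lists of hypothetical network objects exposing
                      .forward(x), .grad(x, error) and .step(lr, grad)
    sigma           : regularization term of the responsibility normalization
    eta             : constant base learning rate
    """
    u = np.array([g.forward(x) for g in generators])      # Generator outputs u_k
    lam_ref = np.exp(-(u_ref - u) ** 2 / sigma)
    lam_ref /= lam_ref.sum()                               # reference responsibilities lambda*_k

    for k, (g, rp) in enumerate(zip(generators, rps)):
        eta_k = lam_ref[k] * eta                           # eta_k = lambda*_k * eta
        g.step(eta_k, g.grad(x, u_ref - u[k]))             # Generator learns the reference control
        rp.step(eta, rp.grad(x, lam_ref[k] - rp.forward(x)))  # RP learns the reference responsibility
```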

II-D Physical Model: Swing Leg Model

We used a classic double pendulum model as our physics model. The thigh and shank of this model are represented as rods of lengths $l_t$ and $l_s$. Point masses $m_t$ and $m_s$ are attached to the midpoint of each rod. The inertial properties are based on anthropomorphic data from a human body with height 180 cm and weight 80 kg [21]. The hip is connected to the origin of the world frame, and the joint angles $\varphi_h$ and $\varphi_k$ are measured as shown in Figure 3. The applied hip and knee torques $\tau_h$ and $\tau_k$ act on the hip and knee respectively. The leg angle $\alpha$ and the current leg length $l$ are computed from the joint angles and segment lengths. The model was simulated in Simulink.

III Target Swing Leg Controller

The target swing leg controller comprises three natural control tasks. Starting from ground level at the initial configuration (leg angle $\alpha_0$), the first task is to flex the leg to at least the clearance length $l_{clr}$. Second, the control focus shifts to advancing the swing leg to the target angle $\alpha_{tgt}$. The final task is to extend the leg until ground contact. Although a conventional state feedback controller could execute this sequence of control, this controller takes advantage of the passive dynamics of the swing leg to reduce the required torques. Moreover, this controller separates the control of the hip and the knee as much as possible. As a consequence, the controller is structured around functionally distinct hip and knee joint controllers. Overall, the swing leg control is composed of one hip control policy and three knee control policies.

Fig. 3: Swing leg model. Phase 1 (i): contract the leg to pass the clearance length $l_{clr}$ (P). Phase 2 (ii): swing the leg to the target leg angle $\alpha_{tgt}$ (Q) while holding the leg. Phase 3 (iii): extend the leg until it reaches the ground. The right figure shows the definition of the hip angle $\varphi_h$ and the knee angle $\varphi_k$.

III-A Hip Control

The primary task of the hip controller is to move the leg to the target leg angle $\alpha_{tgt}$. The hip torque is given as a proportional-derivative law on the leg angle,

$$\tau_h = k_p\,(\alpha_{tgt} - \alpha) - k_d\,\dot{\alpha}.$$

Besides the angle control, the hip controller receives an additional compensation term $\tau_{h,\mathrm{comp}}$ from the knee controller during the leg extension phase. The purpose of $\tau_{h,\mathrm{comp}}$ is discussed in the following section.
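As a small illustration, a sketch of this hip torque computation is given below; the PD form is our reconstruction of the lost equation, and the compensation term is simply passed in from the knee controller:

```python
def hip_torque(alpha, alpha_dot, alpha_tgt, kp, kd, tau_comp=0.0):
    """Hip torque: PD tracking of the target leg angle (reconstructed form),
    plus the compensation term supplied by the knee controller in phase 3."""
    return kp * (alpha_tgt - alpha) - kd * alpha_dot + tau_comp
```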

III-B Knee Control

The primary purpose of the knee controller is to regulate the leg length. As mentioned above, the controller separates knee control into three natural control tasks. Each task is assigned an individual control policy. A detailed analysis of this controller is presented in [22].

III-B1 Phase 1

The first control task is to flex the leg past a minimum clearance length $l_{clr}$. The dynamics show that while the Coriolis, centrifugal, and gravitational terms always tend to extend the knee, negative hip acceleration tends to flex the knee. If the negative hip acceleration passes a threshold, no torque is required to flex the knee; otherwise, we add an adaptive flexion control.

III-B2 Phase 2

Once the leg length has shortened past the clearance length $l_{clr}$, the knee controller is tasked with holding the knee, and the leg angle is controlled only by the aforementioned hip control. This hold is realized when the knee flexes and modulated when the knee extends.

III-B3 Phase 3

Once the leg passes the threshold angle $\alpha_{thr}$, the primary objective of the knee control switches to stopping the swing and extending the leg until it hits the ground. This is achieved using two functional components. The first component generates a stopping knee-flexion torque inspired by nonlinear contact models.

The stopping torque works well only if the coupling with the hip motion is canceled. Thus, we apply a compensation torque $\tau_{h,\mathrm{comp}}$ to the hip control.

The second functional component activates when the leg has slowed down sufficiently; a knee extension torque is then added to land the leg on the ground.

Note that the last component is not necessarily activated; the swing of the leg might terminate before it is triggered.
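The phase sequencing above can be summarized as a small state machine. The sketch below omits the torque laws inside each phase (their equations did not survive extraction), and the comparison directions reflect our reading of the phase descriptions:

```python
def knee_phase(phase, leg_len, leg_angle, l_clr, alpha_thr):
    """Advance the knee-control phase (torque laws themselves are omitted).

    Phase 1: flex the leg until it is shorter than the clearance length l_clr.
    Phase 2: hold the knee while the hip drives the leg toward the target angle.
    Phase 3: stop the swing and extend the leg once the leg angle passes alpha_thr.
    """
    if phase == 1 and leg_len < l_clr:
        return 2
    if phase == 2 and leg_angle >= alpha_thr:   # comparison direction is an assumption
        return 3
    return phase
```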

IV Experiments and Results

To examine the validity of our method for transferring swing leg control from the brain to the spinal cord, we used our model to learn from different trajectories generated by the controller of [22]. We tested different numbers of Generator/Responsibility Predictor pairs to verify that our structure is able to learn complex tasks. The inputs of the Generator and the Responsibility Predictor are all sensory data; in our setting, we choose 5 variables as the system's input. Since our network senses positive and negative values separately (see Figure 2), the inputs to the network are 8 variables (two of the five variables can never take negative values and thus have no negative channels).
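As an illustration of this channel splitting, a small sketch follows; which variables are treated as strictly non-negative is left as a parameter, since the original list of the five input variables did not survive extraction:

```python
import numpy as np

def split_channels(sensors, signed):
    """Split raw sensory variables into separate positive/negative channels.

    sensors : the 5 raw sensory variables
    signed  : booleans, True for variables that can become negative
              (only those receive an additional, inverted channel)
    """
    channels = []
    for s, is_signed in zip(sensors, signed):
        channels.append(max(s, 0.0))        # positive channel, thresholded at zero
        if is_signed:
            channels.append(max(-s, 0.0))   # negative channel, thresholded at zero
    return np.array(channels)

# Example: 3 signed and 2 non-negative variables yield the 8 network inputs.
```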

IV-A Target Control Parameters

We used 40 different trajectories generated by the target controller for learning. The initial leg configuration is set to be the same for all trajectories, while the target leg angle is varied over a range that includes typical human landing leg angles. The initial hip velocity is set between -4 and 0, while the initial knee velocity is set between -7 and -1. The clearance leg length is fixed. The control gains are manually tuned; all parameters are listed in TABLE I.

parameter value parameter value
110 23
8.5 4
250 10
200
TABLE I: Control Parameters Values

IV-B Transfer of Control in the Neural System

The total output of the neural network is defined as the model's output when the reference control signal and the reference responsibility are removed, and it can be calculated as

$$u = \sum_{k} \lambda_k u_k,$$

where $k$ is the index of the neural network. We modeled the transfer of hip control and knee control separately.
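A minimal sketch of this spinal-side output, assuming Generator and RP objects with a `forward` method (hypothetical interfaces, not from the paper):

```python
import numpy as np

def spinal_output(x, generators, rps):
    """Responsibility-weighted sum of the Generator outputs, with the brain's
    reference signals removed (hypothetical .forward interfaces)."""
    u = np.array([g.forward(x) for g in generators])
    lam = np.array([rp.forward(x) for rp in rps])
    return float(np.sum(lam * u))
```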

IV-B1 Hip Control

We used a single pair of networks to train the hip control initially. As shown in Figure 4, using a single network, the prediction (blue) overlaps with the target control output (red); we can conclude that the system learned the target control, which indicates that the control transferred to the spinal cord. The resulting RP output is equal to 1 throughout the swing because, in this setting, only one pair of networks is activated, so the corresponding Generator should always be in charge of the command, which means the responsibility should output 1 at all times. We also tested using three pairs of networks to model the hip control transfer. After training, the total output fits the target well, slightly better than in the previous setting. Although there are three parallel Generator/RP pairs, the resulting control activated only two of them in the test swing while one remained silent. We discuss this further in the knee control section.

Fig. 4: Result of learning the hip control using a single pair of networks (Generator and RP) and 3 pairs of networks. The left graph shows the total output. The right graph shows the predicted responsibility, each color represents a different Generator.

IV-B2 Knee Control

The target knee control consists of three distinct control objectives, each formed by an individual control strategy. Thus, the target knee control is harder to learn than the hip control. We started with a single pair of networks; from the result (Figure 5) we can see that, due to the complexity of the target control, the final result cannot match the target entirely. Then, we used multiple pairs of networks to learn the target control. As the number of networks increased, the prediction fit the target control output better. In the three-pair setting, the GRP model learned three controllers and learned to switch between them. When using five and seven pairs of networks (Figure 5), the GRP model in both settings found four primary controllers activated during the test. Even though these two settings have five and seven Generators respectively, they enabled only four of them. Moreover, these two settings learned to switch controllers at the same time steps. Combining the results of hip learning and knee learning, we can conclude that the GRP model can select the number of network pairs automatically instead of activating all the layers.

Fig. 5: Result of learning the knee control using 1, 3, 5, 7 pairs of networks (Generator and RP). The left graph shows the total output. The right graph shows the predicted responsibility, each color represents a different Generator.

IV-C Identified Controller

Looking into the learned weight distributions of the networks, we can find controllers that are explainable. Extracting the weights of the first responsible Generator of the knee control in the multiple-layer setting, we get the weight distribution in Figure 6. From the weight distribution, we can identify a 'passive' Generator, since its weights are close to 0, which means that for any input the output of this Generator stays close to 0.

Fig. 6: The indices 1-8 represent the eight network inputs. Different colors represent different source neurons. The x-axis represents the source neuron, the y-axis represents the target neuron, and the z-axis represents the weight value.

IV-D Resulting Spinal Cord Control Performance

We tested the overall performance of the resulting model by removing all the reference signals. We used the resulting single-pair network for the hip control and the 3-pair network for the knee control. The performance is defined by the average error and the maximum error, where the error is the absolute value of the difference between the target leg angle $\alpha_{tgt}$ and the actual final leg angle. We sampled 20 trajectories using the same range of initial conditions and target angles as the training data, and compared the resulting average and maximum errors with those of the target controller.
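For completeness, the error metric described here amounts to the following computation (a trivial sketch; the variable names are ours):

```python
import numpy as np

def leg_angle_errors(alpha_tgt, alpha_final):
    """Average and maximum absolute final leg-angle error over sampled swings."""
    err = np.abs(np.asarray(alpha_tgt) - np.asarray(alpha_final))
    return float(err.mean()), float(err.max())
```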

V Discussion

V-A Transfer of Control in the Neural System

Our result suggests a framework where the neural system could be used to transfer the control from brain to spinal cord level. However, some points need to be clearly stated.

Firstly, the model assumes that this learning occurs in humans, which requires the neurons to be able to compute the error and propagate it back to the reflex synapses for learning. While there is evidence for the existence of back-propagation within neurons, there is no proof that every neuron has such an ability. If the neurons connecting the brain and the spinal cord cannot perform back-propagation, our approach would be physiologically implausible.

The second point is that, although we enhance the physiological plausibility by introducing a new form of neural network structure, this structure still needs improvement. During sensing, neurons have thresholds that filter out weak sensory signals. This effect can be represented by adding a bias to our current structure. The bias can also be included in our learning process as a variable.

Moreover, although our network treats the relationship between two neurons as multiplication in order to model the presynaptic inhibition that is commonly found in neural systems, our swing leg model is a simple double pendulum, which could be extended to a musculoskeletal model [23][24]. Specifically, the knee and ankle torques would be generated by the related Hill-type muscles, e.g. the gastrocnemius. Each muscle produces a force as a function of the muscle's current stimulation, the muscle length, and the muscle velocity [25]. By investigating the network relationship between each muscle's stimulation and the positive force/length/velocity feedback from different muscles, we might be able to understand how this adaptation shapes the controller structure at the muscle level.

V-B Learning Framework

Besides the question of biological plausibility, the structure and its learning algorithm, which learned the target controller in our setting and learned to switch between sub-controllers, raise a few points worth addressing that might inspire new ideas in the robot learning community.

First of all, by increasing the number of pairs of networks, the capacity of the system increases correspondingly, which suggests the ability to learn more complex controls from demonstrations. Moreover, the network structure in our setting could be replaced by other forms, such as multi-layer perceptrons (MLPs). For a more complex task composed of a series of sub-tasks, our learning algorithm can distinguish the different sub-tasks. This shows the potential of this method in the AI reasoning field; in comparison, current state-of-the-art techniques that use deep neural networks to fit the target control suffer from low explainability. The identified controllers can be stored in a 'skill library'; those stored controllers can then be used separately to achieve complex tasks that consist of several learned sub-tasks. Besides, each controller can be optimized independently using individually designed cost functions.

Besides that, our method is based on the assumption that each Generator network is initialized with different weights: if two sets of weights are equal, then at each step, since the errors between the outputs and the target are identical, they receive the same update, which leads to the two Generators always having identical weights. In the more general case where two sets of weights are merely similar, learning becomes slow. In other words, the weight initialization is crucial to our algorithm. In our setting we use a simple gradient descent method to optimize the error between prediction and target; in the MOSAIC model, a similar issue has been alleviated using the EM algorithm and hidden Markov model (HMM) based learning [26]. Thus, we could try to ease this issue by replacing gradient descent with other optimization methods.

VI Conclusion

We proposed a novel neural network architecture that uses multiplication as the primary relationship between neurons to imitate the presynaptic inhibition effect in neural systems. To test whether multiple simple neural network structures can learn from an unsegmented complex control policy, we used a predefined multi-phase swing leg control as the demonstration. We introduced a modular model composed of several pairs of Generators and Responsibility Predictors. The Generator predicts the current output and the Responsibility Predictor generates the weight of its corresponding Generator. Using this model, we modeled the transfer of a swing leg control from the brain to the spinal cord level. We demonstrated that this model can learn when to switch a Generator on or off and is able to automatically select the number of Generators it uses. We discussed our work in the context of both physiology and robot learning. For physiology, this work can be further extended by using more detailed physical models, such as a muscle model, which might ignite new physiological findings. For robot learning, we showed the potential of this model for AI reasoning. We also discussed several improvements that could be made to our model.

References

  • [1] X. B. Peng, A. Kanazawa, J. Malik, P. Abbeel, and S. Levine, “Sfv: Reinforcement learning of physical skills from videos,” in SIGGRAPH Asia 2018 Technical Papers.   ACM, 2018, p. 178.
  • [2] X. B. Peng, P. Abbeel, S. Levine, and M. van de Panne, “Deepmimic: Example-guided deep reinforcement learning of physics-based character skills,” ACM Transactions on Graphics (TOG), vol. 37, no. 4, p. 143, 2018.
  • [3] J. Merel, Y. Tassa, S. Srinivasan, J. Lemmon, Z. Wang, G. Wayne, and N. Heess, “Learning human behaviors from motion capture by adversarial imitation,” arXiv preprint arXiv:1707.02201, 2017.
  • [4] T. Li, A. Rai, H. Geyer, and C. G. Atkeson, “Using deep reinforcement learning to learn high-level policies on the atrias biped,” arXiv preprint arXiv:1809.10811, 2018.
  • [5] H. Zhu, A. Gupta, A. Rajeswaran, S. Levine, and V. Kumar, “Dexterous manipulation with deep reinforcement learning: Efficient, general, and low-cost,” arXiv preprint arXiv:1810.06045, 2018.
  • [6] A. Sharma, M. Sharma, N. Rhinehart, and K. M. Kitani, “Directed-info gail: Learning hierarchical policies from unsegmented demonstrations using directed information,” arXiv preprint arXiv:1810.01266, 2018.
  • [7] D. Holden, T. Komura, and J. Saito, “Phase-functioned neural networks for character control,” ACM Transactions on Graphics (TOG), vol. 36, no. 4, p. 42, 2017.
  • [8] R. A. Jacobs, M. I. Jordan, S. J. Nowlan, G. E. Hinton et al., “Adaptive mixtures of local experts.” Neural computation, vol. 3, no. 1, pp. 79–87, 1991.
  • [9] M. I. Jordan and R. A. Jacobs, “Hierarchies of adaptive experts,” in Advances in neural information processing systems, 1992, pp. 985–992.
  • [10] A. M. Haith and J. W. Krakauer, “Theoretical models of motor control and motor learning,” Routledge handbook of motor control and motor learning, pp. 1–28, 2013.
  • [11] H. Gomi and M. Kawato, “Recognition of manipulated objects by motor learning with modular architecture networks,” Neural networks, vol. 6, no. 4, pp. 485–497, 1993.
  • [12] M. Kawato, K. Furukawa, and R. Suzuki, “A hierarchical neural-network model for control and learning of voluntary movement,” Biological cybernetics, vol. 57, no. 3, pp. 169–185, 1987.
  • [13] D. M. Wolpert and M. Kawato, “Multiple paired forward and inverse models for motor control,” Neural networks, vol. 11, no. 7-8, pp. 1317–1329, 1998.
  • [14] J. R. Wolpaw, “What can the spinal cord teach us about learning and memory?” The Neuroscientist, vol. 16, no. 5, pp. 532–549, 2010.
  • [15] G. Courtine, Y. Gerasimenko, R. Van Den Brand, A. Yew, P. Musienko, H. Zhong, B. Song, Y. Ao, R. M. Ichiyama, I. Lavrov et al., “Transformation of nonfunctional spinal circuits into functional states after the loss of brain input,” Nature neuroscience, vol. 12, no. 10, p. 1333, 2009.
  • [16] A. J. Ijspeert, A. Crespi, D. Ryczko, and J.-M. Cabelguen, “From swimming to walking with a salamander robot driven by a spinal cord model,” science, vol. 315, no. 5817, pp. 1416–1420, 2007.
  • [17] P. A. Guertin, “The mammalian central pattern generator for locomotion,” Brain research reviews, vol. 62, no. 1, pp. 45–56, 2009.
  • [18] A. J. Ijspeert, “Central pattern generators for locomotion control in animals and robots: a review,” Neural networks, vol. 21, no. 4, pp. 642–653, 2008.
  • [19] J. Nakanishi and S. Schaal, “Feedback error learning and nonlinear adaptive control,” Neural Networks, vol. 17, no. 10, pp. 1453–1465, 2004.
  • [20] M. Kawato, “Feedback-error-learning neural network for supervised motor learning,” in Advanced neural computers.   Elsevier, 1990, pp. 365–372.
  • [21] D. A. Winter, Biomechanics and motor control of human movement.   John Wiley & Sons, 2009.
  • [22] R. Desai and H. Geyer, “Robust swing leg placement under large disturbances,” in Robotics and Biomimetics (ROBIO), 2012 IEEE International Conference on.   IEEE, 2012, pp. 265–270.
  • [23] S. Song, R. Desai, and H. Geyer, “Integration of an adaptive swing control into a neuromuscular human walking model,” in 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC).   IEEE, 2013, pp. 4915–4918.
  • [24] R. Desai and H. Geyer, “Muscle-reflex control of robust swing leg placement,” in 2013 IEEE international conference on robotics and automation.   IEEE, 2013, pp. 2169–2174.
  • [25] H. Geyer, A. Seyfarth, and R. Blickhan, “Positive force feedback in bouncing gaits?” Proceedings of the Royal Society of London. Series B: Biological Sciences, vol. 270, no. 1529, pp. 2173–2183, 2003.
  • [26] M. Haruno, D. M. Wolpert, and M. Kawato, “Mosaic model for sensorimotor learning and control,” Neural computation, vol. 13, no. 10, pp. 2201–2220, 2001.