The ability to accurately localise sound sources is critical for humans and other animals. Newborn human infants orient to sounds on the left/right within hours after birth (Muir and Field, 1979; Litovsky, 2012), but this response is neither accurate nor reliable. A more reliable and precise auditory space map (Makous and Middlebrooks, 1990) can be learned through auditory experience during development, or in response to altered auditory cues such as a modified pinna (Hofman et al., 1998).
In this work, we consider the theoretical mechanism of learning accurate sound source localisation. Rather than relying on explicit supervision signals such as visual feedback, we show that it is possible to learn an accurate map from only unreliable and sparse supervision. Explicit feedback of the kind used in common supervised learning models may not be necessary: for example, early-blind human subjects can localise sound sources better (Lessard et al., 1998) than sighted subjects who do have visual feedback.
One important assumption is that infants can interact with the environment by orienting to the estimated sound source location. Another assumption is that although the auditory orienting responses of infants are inaccurate about precise locations, they are more reliable about the left/right difference: previous studies (Muir and Field, 1979; Field et al., 1980) show that newborns and 1-month-olds turn toward the sound source about 80% of the time.
In this study, on one hand, we emphasize the general learning mechanism by simplifying many details of localisation; for example, we discuss only the interaural level difference at a single sound frequency, ignoring other possible auditory cues. On the other hand, we constrain our approaches by biological plausibility; for example, we avoid requiring the storage of large amounts of exact learning history. Although such storage is common in machine learning algorithms designed for digital computers, it is unlikely to be implementable with biologically plausible neuron models.
This paper is organized as follows. In Section 2 we describe the background setting for a simplified learning problem. In Section 3.1 we present a Teacher model that generates unreliable feedback similar to the auditory orienting response of infants. We then use this Teacher model to facilitate the learning process of more complex models in subsequent sections. In Section 3.2, we show a robust learning model which can learn a continuous auditory space map with only left/right feedback from the Teacher. In Section 3.3, we combine environment reward with a Teacher model to learn a more accurate map. Discussions of experimental results, related work, and future work can be found at the end.
The head casts an acoustic shadow when we hear. Therefore, if a single sound source is located on one's right-hand side, the sound heard by the right ear will be louder than the sound heard by the left ear. This sound level difference between the two ears is called the interaural level difference (ILD) or interaural intensity difference (IID). Different sound source locations and frequencies produce different ILDs. Our brain is able to localise a sound source by mapping the ILD cue to the sound source direction. In this work, we study the learning process of this mapping in the following scenario (Figure 1).
On the azimuth plane, a human-like agent that has two ears on the left and right sides of its head sits at the origin of a polar coordinate system. The polar axis points in the direction the agent is facing. At each time step $t$, a single sound source located at direction $\theta$ produces a pure tone with a fixed frequency $f$, where $\theta$ is limited to $[-90^\circ, 90^\circ]$. A heuristic function based on measurements on human subjects (Van Opstal, 2016) is used to describe the value of the ILD:

$$\mathrm{ILD}(\theta, f) = 0.18 \sqrt{f}\, \sin\theta$$

where ILD is measured in dB, $f$ in Hz, and $\theta$ in degrees. The agent can localise the sound source based on the ILD cue and orient to the estimated direction $\hat{\theta}$. The time step finishes after this orientation action, and the polar coordinate system is reset to the new facing direction of the agent at the end.
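The cue model and coordinate reset above can be sketched as follows. The $0.18\sqrt{f}\sin\theta$ approximation attributed to Van Opstal (2016) and the function names are illustrative assumptions, not the paper's exact implementation:

```python
import math

def ild(theta_deg, freq_hz):
    """Heuristic ILD (dB) for source azimuth theta (degrees) and tone frequency f (Hz).

    Uses the approximation ILD ~ 0.18 * sqrt(f) * sin(theta); a positive ILD
    means the sound is louder at the right ear.
    """
    return 0.18 * math.sqrt(freq_hz) * math.sin(math.radians(theta_deg))

def orient(theta_deg, estimate_deg):
    """One time step: the agent turns by its estimate and the polar frame is
    reset to the new facing direction, so the source moves to the residual angle."""
    return theta_deg - estimate_deg
```

For a 2 kHz tone the maximum ILD under this approximation is about 8 dB at $\theta = \pm 90^\circ$, which is the scale of cue the agent must decode.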
3.1 An unreliable innate Teacher model
In order to simulate the auditory orienting response (AOR) of newborns and infants, we assume a simple innate neural circuit that can provide rough estimations of the sound source location based on ILD cues. Because lateral superior olive (LSO) neurons are known to be sensitive to ILD, we assume that the innate circuit consists of a single noisy LSO neuron and an inaccurate linear decoder. We refer to this model as a "Teacher" in the remainder of this paper, because it can be used as a source of supervision for training more capable models later on.

Physiological studies show that a single LSO neuron's response rate as a function of ILD can be described by a sigmoid curve: the neuron is more inhibited when the sound at the contralateral ear is more intense, and more excited when the sound at the ipsilateral ear is more intense. Regarding neural response variability, Tollin et al. (2008) showed that LSO neurons in the cat's brain are less variable than expected from a Poisson process.
Here we model an LSO neuron with a logistic mean response rate over the sound source angle and fixed Gaussian variability for all inputs:

$$\bar{r}(\theta) = \frac{r_{\max}}{1 + e^{-k(\theta - \theta_0)}} \quad (1)$$

$$r = \bar{r}(\theta) + \epsilon, \quad \epsilon \sim \mathcal{N}(0, \sigma^2) \quad (2)$$

where $\theta$ is the sound source direction, $\bar{r}(\theta)$ is the mean firing rate at $\theta$, $r_{\max}$ is the maximum mean firing rate, $k$ and $\theta_0$ set the slope and midpoint of the tuning curve, $r$ is the actual firing rate in the current sample, and $\epsilon$ is Gaussian noise describing the stochastic behaviour of the LSO neuron.

A simple linear decoder is then used to map the LSO neuron response to the sound source location:

$$\hat{\theta}_T = a r + b \quad (3)$$
Two examples of the LSO neuron and linear Teacher models are given in Figure 2. Notice that the tuning curves need not be symmetric about the midline (where $\theta = 0$), and that the Teacher model may use a very inaccurate linear approximation in decoding. A Teacher's estimation of the sound source direction can be very noisy (Figure 3). This decoding noise comes from the neural response variability and may be further amplified during the encoding-decoding process. The variance of the estimation result is larger at far right/left directions than near the midline, which is consistent with the Fisher information of a sigmoidal response function.
The primary purpose of this Teacher model is to generate unreliable estimations of the sound source location that simulate the auditory orienting response (AOR) in newborns, rather than to provide a biologically plausible explanation of the AOR. In fact, there are many possible explanations for the variability of the AOR; for example, the orienting error may also come from unreliable control of muscles. The Teacher model is used only as a representation of one possible source of unreliable feedback, which facilitates our discussion of robust learning mechanisms.
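A minimal sketch of such a Teacher, assuming a logistic tuning curve with additive Gaussian noise and a linear read-out; all parameter values here are illustrative, not the ones used in the paper's figures:

```python
import math
import random

class LSONeuron:
    """Sigmoidal angle tuning with fixed Gaussian response variability (Eq. 1-2)."""
    def __init__(self, r_max=100.0, slope=0.1, midpoint=0.0, noise_sd=5.0):
        self.r_max, self.slope, self.midpoint, self.noise_sd = r_max, slope, midpoint, noise_sd

    def mean_rate(self, theta):
        return self.r_max / (1.0 + math.exp(-self.slope * (theta - self.midpoint)))

    def fire(self, theta, rng=random):
        return self.mean_rate(theta) + rng.gauss(0.0, self.noise_sd)

class Teacher:
    """Noisy innate estimator: one LSO neuron plus an (inaccurate) linear decoder (Eq. 3)."""
    def __init__(self, neuron, a, b):
        self.neuron, self.a, self.b = neuron, a, b

    def estimate(self, theta, rng=random):
        return self.a * self.neuron.fire(theta, rng) + self.b

    def left_right(self, theta, current=0.0, rng=random):
        """Binary feedback: +1 if the source is judged to be to the right of `current`."""
        return 1 if self.estimate(theta, rng) > current else -1
```

A biased Teacher is obtained simply by picking a decoder $(a, b)$ that fits the tuning curve poorly, or by shifting the tuning-curve midpoint away from $0^\circ$.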
3.2 Robust learning model
Now we consider how a Student model can learn a more accurate auditory space map from an unreliable Teacher as described in Section 3.1. We first state assumptions about the learning process which allow simple interactions between the agent and the environment; then we show how to use only the Teacher's left/right feedback to approximate the gradient of a regression objective function; finally, we briefly discuss the convergence properties of this approximated learning method.
the agent starts learning with a blank Student model (with trainable parameters) and an unreliable Teacher (with fixed parameters);
the sound source does not change its location during the short localising episode described below.
At each time step $t$, after hearing the sound that comes from an unknown random location $\theta_t$, the agent orients toward its first guess $\hat{\theta}_t$ based on the Student. The new position of the sound source in the reset coordinate system is

$$\theta_{t+1} = \theta_t - \hat{\theta}_t \quad (4)$$

The Student then asks the Teacher for a single piece of feedback: the Teacher's estimation of whether the current position is to the left or to the right of the sound source.

Finally, the Student adjusts its parameters according to this left/right feedback, and the learning episode terminates.
These assumptions are reasonable in real life, since infants can orient to a sound source, and many sound sources, such as a speaking human or a singing bird, are relatively static or slow compared to the response time.
3.2.2 Approximated gradient
A common objective function for supervised regression is based on the squared Euclidean distance between target and prediction:

$$J(w) = \frac{1}{N} \sum_{i=1}^{N} \left( \theta^{(i)} - \hat{\theta}^{(i)} \right)^2$$

where $i$ is the index of the sampled learning episodes and $N$ is the size of the whole sample set. However, this kind of directly supervised regression cannot be applied in our scenario because the real target value $\theta$ is not available. Moreover, replacing $\theta$ with the unreliable Teacher's estimation as the regression target is also problematic, since the Student will copy the Teacher's biases (see Figure 6).
Instead, we motivate the use of the absolute error

$$J(w) = \frac{1}{N} \sum_{i=1}^{N} \left| \theta^{(i)} - \hat{\theta}^{(i)} \right| \quad (7)$$

as the objective function and, more importantly, the use of the approximated gradient

$$\nabla_w J \approx -\operatorname{sign}\!\left(\theta - \hat{\theta}\right) \nabla_w \hat{\theta} \quad (8)$$

The main motivation for this approximation is that it removes the dependence on the exact value of $\theta$ from the computation of the gradient. In other words, this approximated gradient allows us to learn the continuous map from only left/right feedback, without requiring real-valued feedback. This exploits the observation that the Teacher is more reliable at distinguishing left from right than at estimating the precise value of the sound source location $\theta$.
In robust statistics, M-estimators use an influence function $\psi$ designed to bound the extreme impact of outliers when minimizing the objective function. One popular choice of $\psi$, corresponding to the Huber loss objective function (Huber, 1964), can be written as

$$\psi(e) = \begin{cases} e, & |e| \le k \\ k \operatorname{sign}(e), & |e| > k \end{cases}$$

where $e = \theta - \hat{\theta}$ is the residual and $k$ is a robustness control parameter called the tuning constant. The gradient we use in Eq. 8 is essentially the same as that of the Huber loss when $k \to 0$ (up to a constant scale). It would also be possible to use $\psi$ itself as the gradient in our model, given the true position $\theta$. As a consequence, our model has robustness similar to that of the Huber loss even if the actual noise does not have zero expectation and equal variances.
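As a concrete sketch, the sign-based update of Eq. 8 can learn a linear map from left/right feedback alone. The toy target map, the 80% feedback reliability, and the step sizes below are illustrative assumptions, not the paper's setup:

```python
import random

def robust_train(true_map, feedback, steps=20000, lr=0.01, seed=0):
    """Learn theta_hat(x) = w*x + b from left/right feedback only (Eq. 8 sketch).

    `feedback(theta, theta_hat)` returns +1 if the target is judged to be to
    the right of the current estimate, else -1; it never reveals theta itself.
    """
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    for _ in range(steps):
        x = rng.uniform(-1.0, 1.0)       # auditory cue (e.g. a normalised ILD)
        theta = true_map(x)              # unknown true source direction
        theta_hat = w * x + b            # Student's current estimate
        s = feedback(theta, theta_hat)   # binary feedback: +1 or -1 only
        # Descend the approximate gradient of |theta - theta_hat|:
        # move the estimate a small step toward the side the feedback indicates.
        w += lr * s * x
        b += lr * s
    return w, b

def noisy_feedback(theta, theta_hat, rng=random.Random(1)):
    """A teacher that is right about left/right only 80% of the time."""
    s = 1 if theta > theta_hat else -1
    return s if rng.random() < 0.8 else -s
```

Despite 20% of the feedback being flipped, the learned $(w, b)$ settles near the true map, because the expected update still points toward the minimiser of the absolute-error objective.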
We then combine this approximated gradient (Eq. 8) with a Multi-Layer Perceptron (MLP) as the final model used to learn the non-linear map. In the remainder of this paper, we refer to this model as the Robust Learning model, in contrast with mean-squared-error based regression models, because (1) it requires only left/right feedback, and is therefore robust to possible errors or biases of the Teacher on the precise value of $\theta$; and (2) it shares robustness properties on noisy data with the M-estimators used in robust statistics.
3.2.3 Convergence issues
Stochastic gradient (SG) methods (Bottou et al., 2016) have been widely used to optimize large-scale artificial neural networks. Moreover, SG enables on-line learning of the neural network and has potential links with more biologically plausible learning mechanisms such as STDP (Bengio et al., 2015).
If trained with SG, the convergence of the Robust Learning model is the same as that of models using the exact gradient in Eq. 8, or equivalently models using Eq. 7 as their objective function. This is because in SG, a stochastic gradient based on a (mini-batch of) sampled input is always used to adjust the parameters. SG only requires the stochastic gradient to be an unbiased estimator of the true gradient. Therefore, if the approximated gradient in Eq. 8 is an unbiased estimator of the exact gradient, the convergence properties of SG hold. Although there is no general convergence guarantee of SG for non-convex functions, empirically the prediction will converge to a good enough estimate near the real map.
However, the innate Teacher may not always be unbiased; that is, its left/right feedback may not agree in expectation with the true sign of $\theta - \hat{\theta}$. It is reasonable to assume an innate Teacher with a linear decoder that is biased at the midline, and perhaps an asymmetric LSO response curve at the same time (see the red curves in Figure 2). In this case, if the Teacher's expected left/right feedback at a given input corresponds to the sign of $\theta' - \hat{\theta}$ for some shifted target $\theta' \neq \theta$, and this shift is consistent across inputs, then the learned prediction will converge to the biased result $\theta'$. With Eq. 8 we can see that this is equivalent to using $\frac{1}{N}\sum_i |\theta'^{(i)} - \hat{\theta}^{(i)}|$ as the objective function, as long as the Teacher's feedback is still used as the stochastic gradient in SG.
In other words, if the left/right supervision signal is biased, then the learned map of the Robust Learning model will also be biased. This result is not surprising, since the Student model does not have any other source of information with which to verify the Teacher's feedback. In order to address this problem, we introduce a reward signal from the environment in Section 3.3.
3.3 Robust reinforcement learning model
We assume the environment provides rewards to the agent for successful sound source localisation; for example, infants can get food or water more quickly from their parents by orienting in the correct direction. These rewards can be used to supervise the learning of the auditory space map. However, naive reinforcement learning models may fail to converge. In this section, we first introduce the reinforcement learning framework and rewards, then a basic policy-gradient model for sound localisation, and finally a model that incorporates the Teacher described in Section 3.1 to facilitate the reinforcement learning process.
3.3.1 Reinforcement learning framework
$S$ is the set of states. The sound source location $\theta_t$ is the state at time step $t$.

$A$ is the set of actions. The orienting movement angle equals the estimated localisation result $\hat{\theta}_t$ at time step $t$.

$R$ is a reward function over state-action pairs, defined as

$$R(\theta_t, \hat{\theta}_t) = \begin{cases} r_+, & |\theta_t - \hat{\theta}_t| \le \delta \\ r_-, & \text{otherwise} \end{cases}$$

In other words, if the orienting direction is within a small range (defined by $\delta$) around the real location $\theta_t$, the localisation is considered a success and a positive reward $r_+$ is given to the agent. Otherwise, a negative reward $r_-$ is given as a punishment for delay. In addition, we handle the boundary cases of extremely wrong estimations by increasing the punishment.

$P$ is usually a state transition probability matrix, but since we fix the sound source location within one episode, $P$ is actually deterministic here, as described by Eq. 4, with the boundary cases handled separately. It is straightforward to include environment or agent noise in $P$ in future work.

$\gamma$ is a discount factor for the accumulated reward.
Each interaction episode terminates when the localisation is successful (a positive reward is received) or a maximum number of time steps has been reached.

The agent's behaviour after receiving the ILD cue is defined by a policy $\mu$, which is the auditory space map the agent wants to learn.

The sum of discounted future rewards from a state is defined as its return $G_t = \sum_{k \ge 0} \gamma^k r_{t+k}$. An action-value function under a policy $\mu$ is defined as $Q^{\mu}(s, a) = \mathbb{E}\left[ G_t \mid s_t = s, a_t = a; \mu \right]$, the expected return after taking action $a$ in state $s$ and then following policy $\mu$. These definitions allow off-policy learning, where parameters of the current target policy can be adjusted with trajectories from other policies, as in Q-learning (Watkins and Dayan, 1992).
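The sparse reward described above can be sketched as a small function. The success window, reward magnitudes, and boundary handling below are illustrative assumptions rather than the paper's actual values:

```python
def reward(theta, theta_hat, success_range=5.0, step_cost=-1.0,
           bonus=10.0, boundary_cost=-10.0):
    """Sparse localisation reward (illustrative values).

    Success within a small window around the true direction earns a bonus;
    any other orientation costs a small delay punishment; an extremely wrong
    orientation past the valid angular range is punished more heavily.
    """
    if abs(theta - theta_hat) <= success_range:
        return bonus
    if abs(theta_hat) > 90.0:  # boundary case: turned past the valid range
        return boundary_cost
    return step_cost
```

Because most orientations earn only the delay punishment, the reward alone is a sparse learning signal, which is why naive reinforcement learning struggles here.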
3.3.2 A Deterministic Policy Gradient model for sound localisation
Now we consider a reinforcement learning algorithm for the agent trying to localise the sound source. Because the action set is continuous, we use policy gradient methods. Because it is currently unclear how a biological brain could implement the explicit memory required by batch learning algorithms, we focus on on-line algorithms. Because the auditory space map does not require stochastic behaviour, we consider deterministic policies. These considerations lead us to an actor-critic model using the Deterministic Policy Gradient (DPG) method (Silver et al., 2014).

The DPG algorithm extends Q-learning (Watkins and Dayan, 1992) to continuous action spaces and has been used together with deep neural networks in various continuous control tasks (Lillicrap et al., 2015). Here we also use two neural networks as function approximators for the policy function $\mu(s)$ and the action-value function $Q(s, a)$, called the Actor and the Critic. The Actor is equivalent to the target Student model in Section 3.2, which represents the auditory space map.
Parameters of the Critic network can be trained based on the Bellman equation by minimizing the squared temporal difference between the original expectation and the updated expectation after a one-step observation,

$$L = \left( r_t + \gamma Q(s_{t+1}, \mu(s_{t+1})) - Q(s_t, a_t) \right)^2$$

using normal back-propagation. Parameters of the Actor network can then be trained using the chain rule with the gradient approximation

$$\nabla_{w} J \approx \nabla_a Q(s, a) \big|_{a = \mu(s)} \, \nabla_{w} \mu(s)$$

The simplicity of DPG allows a simple network structure and efficient end-to-end training.
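The two updates above can be sketched with linear function approximators standing in for the paper's MLPs; the feature choice, learning rates, and parameterisation below are assumptions for illustration:

```python
import numpy as np

def critic_features(s, a):
    # Simple polynomial features so that dQ/da is informative.
    return np.array([s, a, s * a, a * a, 1.0])

def dpg_step(w_actor, v_critic, s, a, r, s_next, gamma=0.9,
             lr_actor=1e-3, lr_critic=1e-2):
    """One on-line DPG update with linear approximators.

    Actor:  mu(s) = w0 * s + w1
    Critic: Q(s, a) = v . phi(s, a)
    """
    mu = lambda w, state: w[0] * state + w[1]

    # Critic: one-step TD error  delta = r + gamma * Q(s', mu(s')) - Q(s, a)
    a_next = mu(w_actor, s_next)
    q_sa = v_critic @ critic_features(s, a)
    q_next = v_critic @ critic_features(s_next, a_next)
    delta = r + gamma * q_next - q_sa
    v_critic = v_critic + lr_critic * delta * critic_features(s, a)

    # Actor: chain rule  grad_w J ~ dQ/da |_{a=mu(s)} * dmu/dw
    a_mu = mu(w_actor, s)
    dq_da = v_critic @ np.array([0.0, 1.0, s, 2.0 * a_mu, 0.0])
    w_actor = w_actor + lr_actor * dq_da * np.array([s, 1.0])
    return w_actor, v_critic
```

Each call performs exactly one Critic TD step followed by one Actor chain-rule step, mirroring the on-line, buffer-free setting discussed next.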
3.3.3 Learn from both Teacher and rewards
However, a known problem of using non-linear neural networks as function approximators is that convergence is not guaranteed. In practice, applying the above actor-critic DPG model directly to many continuous control tasks, including our sound source localisation task, usually fails to converge.

To address this problem, Mnih et al. (2015) and Lillicrap et al. (2015) used an experience replay buffer and target networks. However, a replay buffer requires random access to exact long-term memories of the learning history, which is hard to implement with biological neural networks.
Here we introduce an algorithm (Algorithm 1, Figure 4) that employs a Teacher model to address the same problem without a replay buffer. The idea is that, in the beginning, the Teacher guides the Student (Actor) and Critic by giving examples of how to interact with the environment. Although the Teacher may be unreliable, it can still lead the Student and Critic to a stable zone of the parameter space early on. The Student can also interact with the environment by itself and therefore learn from its own trajectories. In this way, we can stabilize the learning process while eventually avoiding the bias introduced by the Teacher.
With a completely random initialisation of parameters, the Student's performance will be very poor in the beginning and gradually improve during learning, while the Teacher is always fixed. The agent then faces the question of how to select between the Student and the Teacher in different phases. This is essentially a non-stationary two-armed bandit problem if we consider only the reward history of the Student and Teacher, similar to the algorithm-selection problem in Féraud (2017); or a meta-reinforcement learning problem with {Student, Teacher} as its action set, if we take the context information into consideration. Here we use a simple Selector algorithm. The Selector maintains a smoothed performance history variable $h_T$ for the Teacher, updated according to

$$h_T \leftarrow (1 - \alpha) h_T + \alpha R$$

whenever the Teacher is selected to interact with the environment in the current episode, where $R$ is the episode reward and $\alpha$ is a smoothing parameter. A similar variable $h_S$ is maintained for the Student with its own smoothing parameter. At the beginning of each episode, the Selector chooses the Student to guide the orienting action if $h_S > h_T$, and the Teacher otherwise. To encourage the learning process of the Student in the beginning, we use a strategy similar to Greedy-$\epsilon$: after making the decision based on the history variables, the Selector switches the decision to choosing the Student with a fixed probability $\epsilon$.
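A minimal sketch of such a Selector; the smoothing constants and the exploration probability are illustrative assumptions:

```python
import random

class Selector:
    """Chooses Teacher or Student per episode from smoothed reward histories."""
    def __init__(self, alpha_teacher=0.05, alpha_student=0.05, epsilon=0.1, seed=0):
        self.h_teacher = 0.0
        self.h_student = 0.0
        self.a_t, self.a_s = alpha_teacher, alpha_student
        self.epsilon = epsilon
        self.rng = random.Random(seed)

    def choose(self):
        pick = "student" if self.h_student >= self.h_teacher else "teacher"
        # Greedy-epsilon: occasionally force the Student so it can improve early on.
        if pick == "teacher" and self.rng.random() < self.epsilon:
            pick = "student"
        return pick

    def update(self, who, episode_reward):
        # Exponential smoothing: h <- (1 - alpha) * h + alpha * reward.
        if who == "teacher":
            self.h_teacher += self.a_t * (episode_reward - self.h_teacher)
        else:
            self.h_student += self.a_s * (episode_reward - self.h_student)
```

Early on the fixed Teacher's history dominates, so the Teacher drives most episodes; once the Student's smoothed reward overtakes it, the Selector hands control to the Student.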
4 Experimental results
4.1 Teacher models
We present two Teacher models here.

In the first model (unbiased Teacher A), the parameters of the LSO neuron in Eq. 1 and Eq. 2 and of the linear decoder in Eq. 3 are set so that the tuning curve and linear decoder are the green lines in Figure 2. We sampled 50 estimations of this Teacher model for each sound source location on a regular grid over the tested range (Figure 3). This Teacher's estimations are noisy but unbiased about the left/right location of the sound source.

In the second model (biased Teacher B), the parameters are set so that the tuning curve and linear decoder are the red lines in Figure 2; the linear decoder is chosen to be very inaccurate for the LSO tuning curve. This Teacher's estimations are not only noisy but also biased.

These two Teachers capture the character of the auditory orienting response in newborns: relatively reliable about the left/right difference but not accurate about the exact sound source location. We use them as sources of unreliable supervision in the following experiments.
4.2 Robust learning results
We use a 4-layer fully-connected feed-forward artificial neural network to represent the target auditory space map in the Student model. The first layer has 1 input neuron for the ILD input with a ReLU activation function, and the 4th layer has 1 output neuron for the location prediction with a linear activation function. Each of the two hidden layers has 256 neurons with ReLU activations. All layer weights are regularized with an L2 penalty with a weight decay parameter of 0.1. The network is trained with the Adam optimizer (Kingma and Ba, 2014) with an initial learning rate of 0.001. We trained the network for 200k episodes in each experiment.

As a comparison, we also train the same network with a normal mean-squared-error objective function. All training configurations are the same, except that the gradient is calculated from the Teacher's real-valued estimation instead of a left/right feedback.

All estimators are tested after learning with sound source locations on a regular grid over the full range.

For the unbiased Teacher A in Section 4.1, the learning results are shown in Figure 5. Since the Teacher is an unbiased estimator of the left/right location, the Student model under robust learning converges to a very accurate auditory space map. In contrast, the normal MSE regression copied the bias of the Teacher's real-valued estimates and was therefore unable to converge to the real auditory space map.
4.3 Robust reinforcement learning results
For the environment, the reward magnitude, discount factor $\gamma$, and reward range parameter $\delta$ are fixed, and the maximum number of interaction steps in one episode is 2.

For the actor-critic model, we use the same 4-layer fully-connected feed-forward artificial neural network as in Section 4.2 for the Student (Actor). The Critic is almost the same as the Student, except that its first layer consists of two input neurons taking the state and action as input. Both networks are trained with the Adam optimizer with an initial learning rate of 0.001. The maximum number of episodes is 300k for each experiment.

For the Selector, the smoothing parameters and the Greedy-$\epsilon$ exploration probability are fixed across experiments.

We use the biased Teacher B from Section 4.1.

We also adopt the replay buffer used by Lillicrap et al. (2015) and Mnih et al. (2015) for comparison. The buffer size is reduced from 100k in their original work to 100 due to biological plausibility constraints. Similarly, the batch size used with the replay buffer is reduced from 64 to 8.
Results are shown in Figure 7. The naive DPG model (Silver et al., 2014) does not converge, as shown by the negative slope of its accumulated reward curve. Adding a small replay buffer to DPG does not help. However, with our robust reinforcement learning framework, the Teacher helps stabilize the learning process, and eventually the agent can successfully localise the sound source, as shown by the positive slope of the accumulated reward curve in the end. Furthermore, since the robust reinforcement learning algorithm allows off-policy learning, it is easy to combine the replay buffer with our algorithm; this combined approach shows the fastest convergence in our experiments.

Figure 8 shows the smoothed trend of selecting the Student instead of the Teacher along the learning process. In the beginning, the Student is selected via the Greedy-$\epsilon$ strategy; in the end, since the Student has become more reliable than the Teacher, the Selector tends to always select the Student, which is the correct choice.
Finally, the results of Robust Learning and Robust Reinforcement Learning are compared with the initial biased Teacher model in Figure 9.
5 Related work
Aytekin et al. (2008), Bernard et al. (2012), Chan et al. (2010), Wall et al. (2012), and Xiao and Weibei (2016) studied the learning process of sound source localisation. However, these algorithms assumed explicit supervision signals, such as the integration of motor movement or an accurate feedback signal when the agent faces the sound source directly. In our models, we relax these assumptions, relying only on an unreliable left/right feedback source or sparse reinforcement signals from the environment.

There are also studies on incorporating expert knowledge into the reinforcement learning process, which is similar to the use of the Teacher model in our work. Inverse reinforcement learning algorithms (Ng et al., 2000) assume that the Teacher, usually a set of desired trajectories from a human expert, is optimal, and learn the value function approximator by inferring the motivation of the Teacher. Learning-from-demonstration algorithms (Hester et al., 2017; Ross et al., 2011; Chernova and Thomaz, 2014) also assume an optimal expert as the source of supervision. In some cases (Hester et al., 2017; Mnih et al., 2015), direct supervised learning is used before allowing the agent to interact with the environment. Suay et al. (2016) also allow the use of suboptimal supervision, but do not take biological limitations into consideration. In our work, by contrast, only a very unreliable Teacher is required as a possible source of supervision.

In fact, our approach of introducing a Teacher model to help stabilize the reinforcement learning process is essentially similar to the experience replay buffer (Mnih et al., 2015; Lillicrap et al., 2015): both must be used with off-policy learning algorithms. The difference is that the trajectories generated by the Teacher model are always independent of the target policy, while the experience replay buffer maintains a set of historical trajectories from earlier versions of the same target policy approximator.
6 Discussions and future work
Hofman et al. (1998) showed that "learning new spectral cues did not interfere with the neural representation of the original cues" in experiments in which the spectral cues for sound source localisation were changed with modified pinnae. Our model implies that it is possible to switch among multiple neural circuits for sound source localisation based on similar auditory cues. The Selector mechanism may be useful not only for selecting between a Teacher and a Student model, but also for coordinating multiple neural sub-modules that have similar functions. Our results suggest that feedback from a similar sub-module may also facilitate the process of adapting to new spectral cues. This hypothesis could be studied by comparing the speed of adaptation to new pinnae with different degrees of modification relative to the original pinnae.
More generally, the neural mechanisms for a complex cognitive task, such as sound localisation, may be organized by combining "a bag of tricks": based on the context, a meta-algorithm selects among several similar neural sub-modules, the different tricks in the bag. This meta-algorithm can be trained with reinforcement reward and gradient-based methods (Williams, 1992; Schulman et al., 2015). From this point of view, for example, the different output actions in DQN (Mnih et al., 2015) for the Atari games are different tricks selected according to the visual input context. This suggests that we could design a better Selector which takes not only the historical performance of each sub-module but also the current context or higher-level feedback into consideration. Similar mechanisms can also be applied to decision making or selective attention (Mnih et al., 2014). Hierarchical networks for multiple complex tasks can be constructed from these modules and trained end-to-end with gradient-based methods. Our study implies that similar sub-modules can be used to facilitate each other's learning in such hierarchical networks. It would be interesting to further test our algorithm on more complex tasks, such as high-dimensional continuous control, where learning from the reinforcement reward alone is usually unstable and optimal supervised demonstrations are too expensive to collect.

In addition, this selection mechanism can be viewed as a neural multiplexer, in analogy with the multiplexer in digital circuits. Neural networks consisting of such sub-modules, including more general stochastic computation graphs (Schulman et al., 2015), are computationally challenging for conventional von Neumann hardware such as CPUs and GPUs. It would be interesting to explore customized architectures based on FPGAs to implement these models with lower computing time and energy budgets; for example, one could use higher numerical precision for the sub-modules responsible for accurate continuous control and lower precision for discrete selections.
It would also be interesting to further study the neural correlates of the learning algorithms used in this study, perhaps by implementing them with biologically plausible neuron models as a starting point. Some of the difficulties of achieving this goal have already been avoided, since we constrained our model design with biological concerns from the beginning. Rao and Sejnowski (2001) showed that temporal difference learning can be carried out with spike-timing-dependent plasticity. Seung (2003) showed the possibility of using the REINFORCE algorithm (Williams, 1992) to train an integrate-and-fire network with stochastic synaptic transmission. Rao (2010) used an actor-critic model similar to ours, for which similarities have been found between the time course of the model's reward prediction error and dopaminergic responses in the basal ganglia in a decision-making task.

There are also more straightforward improvements to the current model, such as taking other cues, including the interaural time difference and multiple frequency bands, into consideration.

To sum up, our model shows that it is possible to learn an accurate auditory space map from binaural cues by combining limited supervision from an unreliable (innate) neural response with sparse reinforcement reward, while neither of these two supervision sources alone is enough to yield a satisfying learning result. Our algorithms also have the potential to be generalized to hierarchical reinforcement learning scenarios and to be implemented with biologically plausible neuron models.
References

Aytekin, M., Moss, C. F., and Simon, J. Z. (2008). A sensorimotor approach to sound localization. Neural Computation 20(3), 603–635.

Bengio, Y., Lee, D.-H., Bornschein, J., and Lin, Z. (2015). Towards biologically plausible deep learning. arXiv preprint arXiv:1502.04156.

Bernard et al. (2012). Sensorimotor learning of sound localization from an auditory evoked behavior. In Robotics and Automation (ICRA), 2012 IEEE International Conference on, 91–96.

Bottou, L., Curtis, F. E., and Nocedal, J. (2016). Optimization methods for large-scale machine learning. arXiv preprint arXiv:1606.04838.

Chan et al. (2010). Adaptive sound localization with a silicon cochlea pair. Frontiers in Neuroscience 4.

Chernova, S. and Thomaz, A. L. (2014). Robot learning from human teachers. Synthesis Lectures on Artificial Intelligence and Machine Learning 8(3), 1–121.

Féraud, R. (2017). Algorithm selection for reinforcement learning. arXiv preprint.

Field et al. (1980). Infants' orientation to lateral sounds from birth to three months. Child Development, 295–298.

Grothe, B., Pecka, M., and McAlpine, D. (2010). Mechanisms of sound localization in mammals. Physiological Reviews 90(3), 983–1012.

Hester et al. (2017). Learning from demonstrations for real world reinforcement learning. arXiv preprint arXiv:1704.03732.

Hofman, P. M., Van Riswick, J. G. A., and Van Opstal, A. J. (1998). Relearning sound localization with new ears. Nature Neuroscience 1(5), 417–421.

Huber, P. J. (1964). Robust estimation of a location parameter. The Annals of Mathematical Statistics 35(1), 73–101.

Huber, P. J. (2011). Robust statistics. In International Encyclopedia of Statistical Science, 1248–1251.

Neural circuits underlying adaptation and learning in the perception of auditory space. Neuroscience & Biobehavioral Reviews 35(10), 2129–2139.

Kingma, D. P. and Ba, J. (2014). Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.

Lessard, N., Paré, M., Lepore, F., and Lassonde, M. (1998). Early-blind human subjects localize sound sources better than sighted subjects. Nature 395(6699), 278.

Lillicrap et al. (2015). Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971.

Litovsky, R. Y. (2012). Development of binaural and spatial hearing. In Human Auditory Development, 163–195.

Makous, J. C. and Middlebrooks, J. C. (1990). Two-dimensional sound localization by human listeners. The Journal of the Acoustical Society of America 87(5), 2188–2200.

Mnih, V., Heess, N., Graves, A., and Kavukcuoglu, K. (2014). Recurrent models of visual attention. In Advances in Neural Information Processing Systems, 2204–2212.

Mnih et al. (2015). Human-level control through deep reinforcement learning. Nature 518(7540), 529–533.

Muir, D. and Field, J. (1979). Newborn infants orient to sounds. Child Development, 431–436.

Ng, A. Y. and Russell, S. J. (2000). Algorithms for inverse reinforcement learning. In ICML, 663–670.

Interaural level difference processing in the lateral superior olive and the inferior colliculus. Journal of Neurophysiology 92(1), 289–301.

Rao, R. P. N. and Sejnowski, T. J. (2001). Spike-timing-dependent Hebbian plasticity as temporal difference learning. Neural Computation 13(10), 2221–2237.

Rao, R. P. N. (2010). Decision making under uncertainty: a neural model based on partially observable Markov decision processes. Frontiers in Computational Neuroscience 4, 146.

Ross, S., Gordon, G., and Bagnell, D. (2011). A reduction of imitation learning and structured prediction to no-regret online learning. In International Conference on Artificial Intelligence and Statistics, 627–635.

Schulman, J., Heess, N., Weber, T., and Abbeel, P. (2015). Gradient estimation using stochastic computation graphs. In Advances in Neural Information Processing Systems, 3528–3536.

Seung, H. S. (2003). Learning in spiking neural networks by reinforcement of stochastic synaptic transmission. Neuron 40(6), 1063–1073.

Silver, D., Lever, G., Heess, N., Degris, T., Wierstra, D., and Riedmiller, M. (2014). Deterministic policy gradient algorithms. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), 387–395.

Suay et al. (2016). Learning from demonstration for shaping through inverse reinforcement learning. In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems, 429–437.

Sutton, R. S. and Barto, A. G. (1998). Reinforcement learning: an introduction. Vol. 1, MIT Press, Cambridge.

Tollin et al. (2008). Interaural level difference discrimination thresholds for single neurons in the lateral superior olive. Journal of Neuroscience 28(19), 4848–4860.

Van Opstal, A. J. (2016). The auditory system and human sound-localization behavior. Academic Press.

Wall et al. (2012). Spiking neural network model of sound localization using the interaural intensity difference. IEEE Transactions on Neural Networks and Learning Systems 23(4), 574–586.

Watkins, C. J. C. H. and Dayan, P. (1992). Q-learning. Machine Learning 8(3), 279–292.

Williams, R. J. (1992). Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning 8(3-4), 229–256.

Xiao and Weibei (2016). A biologically plausible spiking model for interaural level difference processing auditory pathway in human brain. In Neural Networks (IJCNN), 2016 International Joint Conference on, 5029–5036.