Adapting Neural Models with Sequential Monte Carlo Dropout

10/27/2022
by Pamela Carreno-Medrano, et al.

The ability to adapt to changing environments and settings is essential for robots acting in dynamic and unstructured environments or working alongside humans with varied abilities or preferences. This work introduces an extremely simple and effective approach to adapting neural models in response to changing settings. We first train a standard network using dropout, which is analogous to learning an ensemble of predictive models or distribution over predictions. At run-time, we use a particle filter to maintain a distribution over dropout masks to adapt the neural model to changing settings in an online manner. Experimental results show improved performance in control problems requiring both online and look-ahead prediction, and showcase the interpretability of the inferred masks in a human behaviour modelling task for drone teleoperation.
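The core idea, maintaining a particle-filter posterior over dropout masks and reweighting it as new observations arrive, can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the toy network, the Bernoulli mask parameterization, the Gaussian likelihood, and the bit-flip rejuvenation step are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer regression network; dropout gates the hidden units.
# (Illustrative stand-in for a network trained with dropout.)
H = 8
W1 = rng.normal(size=(1, H))
b1 = np.zeros(H)
W2 = rng.normal(size=(H, 1))

def predict(x, mask):
    h = np.tanh(x @ W1 + b1) * mask  # apply a fixed dropout mask
    return h @ W2

# Particle filter: each particle is one binary dropout mask.
N = 100
p_keep = 0.5
particles = rng.random((N, H)) < p_keep
weights = np.ones(N) / N

def update(x, y, sigma=0.1, mutate=0.05):
    """Reweight masks by the likelihood of an observed (x, y) pair,
    then resample when the effective sample size collapses."""
    global particles, weights
    preds = np.array([predict(x, m.astype(float))[0, 0] for m in particles])
    loglik = -0.5 * ((y - preds) / sigma) ** 2          # assumed Gaussian noise
    w = weights * np.exp(loglik - loglik.max())
    weights = w / w.sum()
    if 1.0 / np.sum(weights ** 2) < N / 2:              # effective sample size
        idx = rng.choice(N, size=N, p=weights)
        particles = particles[idx]
        # Flip a few mask bits to keep particle diversity (rejuvenation).
        particles = particles ^ (rng.random((N, H)) < mutate)
        weights = np.ones(N) / N

def posterior_predict(x):
    """Weighted-average prediction over the current mask distribution."""
    preds = np.array([predict(x, m.astype(float))[0, 0] for m in particles])
    return float(np.sum(weights * preds))
```

Calling `update` on each incoming observation shifts probability mass toward masks (sub-networks of the ensemble) whose predictions match the current setting, and `posterior_predict` averages over that adapted distribution.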


