Learning by Doing: Controlling a Dynamical System using Causality, Control, and Reinforcement Learning

by Sebastian Weichwald et al.

Questions in causality, control, and reinforcement learning go beyond the classical machine learning task of prediction under i.i.d. observations. Instead, these fields consider the problem of learning how to actively perturb a system to achieve a certain effect on a response variable. Arguably, they have complementary views on the problem: In control, one usually aims to first identify the system via excitation strategies and then apply model-based design techniques to control it. In (non-model-based) reinforcement learning, one directly optimizes a reward. In causality, a key focus is the identifiability of causal structure. We believe that combining these different views can create synergies, and this competition is meant as a first step toward such synergies. The participants had access to observational and (offline) interventional data generated by dynamical systems. Track CHEM considers an open-loop problem in which a single impulse at the beginning of the dynamics can be set, while Track ROBO considers a closed-loop problem in which control variables can be set at each time step. The goal in both tracks is to infer controls that drive the system to a desired state. Code is open-sourced ( https://github.com/LearningByDoingCompetition/learningbydoing-comp ) to reproduce the winning solutions of the competition and to facilitate trying out new methods on the competition tasks.
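The open-loop vs. closed-loop distinction between the two tracks can be illustrated on a toy discrete-time linear system. This is a minimal sketch, not the competition's actual chemical or robotic systems: the matrices, the impulse magnitude, and the proportional feedback gain are all hypothetical choices made for illustration.

```python
import numpy as np

# Toy discrete-time linear system x_{t+1} = A x_t + B u_t (assumed for illustration)
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([0.0, 0.1])
target = np.array([1.0, 0.0])  # desired state

def simulate(controller, T=200):
    """Roll out the system under a control law u_t = controller(t, x_t)."""
    x = np.zeros(2)
    for t in range(T):
        u = controller(t, x)
        x = A @ x + B * u
    return x

# Track CHEM style (open-loop): a single impulse at t = 0, no feedback afterwards.
impulse = 10.0  # hypothetical magnitude, tuned so the toy system lands on the target

def open_loop(t, x):
    return impulse if t == 0 else 0.0

# Track ROBO style (closed-loop): a control chosen at every step from the current
# state, here simple proportional feedback on the first state coordinate.
gain = 2.0  # hypothetical feedback gain

def closed_loop(t, x):
    return gain * (target - x)[0]

x_open = simulate(open_loop)
x_closed = simulate(closed_loop)
print("open-loop final state:  ", x_open)
print("closed-loop final state:", x_closed)
```

In this toy setting both strategies reach the target, but the open-loop impulse had to be tuned in advance, whereas the closed-loop controller corrects itself online from the observed state at each step.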

