Adjustment formulas for learning causal steady-state models from closed-loop operational data

11/10/2022
by Kristian Løvland, et al.

Steady-state models learned from historical operational data may be unfit for model-based optimization unless the correlations that control introduces into the training data are accounted for. Using recent results on structural dynamical causal models, we derive a formula that adjusts for this control confounding, enabling the estimation of a causal steady-state model from closed-loop steady-state data. The formula assumes that the available data were gathered under some fixed control law. It works by estimating, and accounting for, the disturbance the controller is trying to counteract, and it enables learning from data gathered under both feedforward and feedback control.
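The confounding the abstract describes can be seen in a toy simulation. The sketch below is illustrative only: the plant, the feedforward control law, and all gains are made-up parameters, and the disturbance is observed directly here, whereas the paper's adjustment formula estimates it from the known control law. Because the controller makes the input correlate with the disturbance, a naive regression of the output on the input recovers a biased gain, while including the disturbance as a covariate recovers the causal gain.

```python
import numpy as np

# Toy closed-loop steady-state data (all parameters are assumptions
# for this sketch, not taken from the paper).
rng = np.random.default_rng(0)
n = 5000

d = rng.normal(size=n)                              # disturbance the controller counteracts
u = -2.0 * d + 0.1 * rng.normal(size=n)             # feedforward control law + small dither
y = 1.0 * u + 1.5 * d + 0.05 * rng.normal(size=n)   # steady-state plant; true causal gain is 1.0

# Naive regression of y on u: biased, because control makes u correlated with d.
A = np.column_stack([u, np.ones(n)])
naive_gain = np.linalg.lstsq(A, y, rcond=None)[0][0]

# Adjusted regression: condition on the disturbance as well.
B = np.column_stack([u, d, np.ones(n)])
adjusted_gain = np.linalg.lstsq(B, y, rcond=None)[0][0]

print(f"naive gain:    {naive_gain:.3f}")    # far from the causal gain 1.0
print(f"adjusted gain: {adjusted_gain:.3f}")  # close to 1.0
```

The naive estimate lands near 0.25 rather than 1.0 for these parameters, which is exactly the kind of bias that makes a model fitted on closed-loop data unfit for optimization.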


