A statistical learning strategy for closed-loop control of fluid flows

04/11/2016
by   Florimond Guéniat, et al.

This work discusses a closed-loop control strategy for complex systems utilizing scarce and streaming data. A discrete embedding space is first built using hash functions applied to the sensor measurements, from which a Markov process model is derived, approximating the complex system's dynamics. A control strategy is then learned using reinforcement learning, once rewards relevant to the control objective have been identified. The method is designed for experimental configurations, requiring neither expensive computations nor prior knowledge of the system, and enjoys intrinsic robustness. It is illustrated on two systems: the control of the transitions of a Lorenz 63 dynamical system, and the control of the drag of a cylinder flow. The method is shown to perform well in both cases.


1 Introduction

While the design and capability of aircraft, and more generally of complex systems, have significantly improved over the years, closed-loop control can bring further improvement in terms of performance and robustness to non-modeled perturbations. In the context of flow control, however, closed-loop control suffers from severe limitations preventing its use in many situations. As a paradigmatic example, a typical turbulent flow involves a large range of spatial scales and exhibits rich, fast dynamics. High-frequency phenomena hence require a control command fast enough to adjust to the current state of the quickly evolving flow system. Indeed, frequencies of interest routinely exceed 1 kHz, leaving a very short period of time for the controller to synthesize the command based on its knowledge of the state of the system.

While flow manipulation and open-loop control are common practice, far fewer successful closed-loop control efforts are reported in the literature. Further, many of them rely on unrealistic assumptions. For example, Model Predictive Control (MPC) approaches require very significant computational resources to solve the governing equations in real-time. If a Reduced-Order Model (ROM) is employed, as is common practice to alleviate the CPU burden, one often needs to observe the whole system to derive the ROM, for instance the velocity or pressure fields with Proper Orthogonal Decomposition (POD), see Gerhard2003 ; Bergmann2008 ; Ma2011 ; Joe2011 ; Mathelin2012 ; Cordier2013 . Hence, flow control with this class of approaches is restricted to numerical simulations or to experiments in a wind tunnel equipped with sophisticated visualization tools such as Particle Image Velocimetry (PIV).

This paper discusses a practical strategy for closed-loop control of complex flows that alleviates the limitations of current methods. The present work relies on a change of paradigm: we want to derive a general nonlinear closed-loop flow control methodology suitable for actual configurations and as realistic as possible. No a priori model, nor even a model structure, describing the dynamics of the system is required. The proposed approach is purely data-driven, with the sole information about the system given by scarce and spatially-constrained sensors; the method then exploits statistical learning techniques.

This is the framework one typically deals with in practical situations, where the amount of information on the system at hand is limited and usually comes from a few sensors located at the boundary of the fluid domain, e.g., on solid surfaces. The resulting information takes the form of short time-dependent vectors with as many entries as sensors.

Among the few earlier efforts relying on streaming measurements from a few sensors, a trained neural network using surface measurements is employed to reduce the drag of a turbulent channel flow with an opposition control strategy in Lee1997. In Kegerise2007, pressure sensors and an auto-regressive model (AutoRegression with eXogenous inputs, ARX) are used to reduce flow-induced cavity tones. An autoregressive approach is also followed in Huang_Kim_2008 and Herve2012, while a genetic programming technique is adopted in Gautier2015 to control a separated boundary layer. Interested readers may refer to Brunton2015 for a comprehensive review of the topic. The present approach aims at deriving an efficient, yet robust, nonlinear closed-loop control method compliant with actual situations. Among its distinctive features compared to other methods is the combination of performance and fast learning.

To facilitate learning the system dynamics from the time-dependent measurements, and the subsequent derivation of a control strategy, the problem must be reduced to a finite-dimensional one. Hence, one needs to discretize the continuous time series of the sensors' information. To this end, the streaming data are convolved with a kernel chosen so as to result in a discrete image space. Locality-Sensitive Hash (LSH) functions are used for that purpose, Slaney2008, which results in a low-dimensional discrete state space. Transitions from state to state in this discrete space describe the dynamics, Kaiser2014, and allow one to learn, and update in real-time, a Markov process model of the system. A suitable discretization of the dynamics then allows the derivation of a reinforcement learning-based control strategy for the identified Markov process model. The control of Markov processes is a mature field, Mandl1974, and reinforcement learning, Watkins1992 ; Gosavi2011, is a suitable class of methods for the control of Markov processes, see for instance Lin1999 ; Gadaleta1999 for the control of 1-D and 2-D chaotic maps. As will be seen in the application examples below, the resulting control strategy is purely data-driven, intrinsically robust against perturbations in the flow, and requires neither significant computational resources nor prior knowledge of the flow. The proposed approach is experiment-oriented and ongoing efforts are carried out to demonstrate it on an experimental open cavity flow in a turbulent regime. This will be the subject of a subsequent publication.

The paper is organized as follows. The framework and basics of how hash functions are used to generate a low-dimensional state space are discussed in Section 2. Section 3 is concerned with learning an efficient control strategy for the system modeled as a stochastic process living in a small dimensional space. The resulting control strategy is illustrated and discussed in the case of the control of a Lorenz 63 system and the drag reduction of a cylinder in a two-dimensional flow in Section 4. Concluding remarks close the paper in Section 5.

2 Hash functions for reduced order modeling

2.1 Preliminaries

Consider a dynamical system evolving on a manifold M:

dX/dt = F(X),

with X ∈ M the state of the system and F the flow operator. Let m be a sensor function. In the sequel, the number of sensors will be taken to be one, but generalization to more sensors is immediate. The observed data are defined as y(t) := m(X(t)).

Let 1/Δt be the sampling rate of the measurement system. Sampling has to be fast enough to capture the small time scales of the dynamics of y. The data coming from the sensor are embedded in a reconstructed phase space by means of time-delayed copies:

Y(t) := (y(t), y(t − Δt), …, y(t − (Nₑ − 1) Δt)) ∈ ℝ^Nₑ.

The correlation dimension of the underlying attractor is estimated from the time series, for instance using the Grassberger-Procaccia algorithm, Grassberger1983. It allows the definition of the embedding dimension Nₑ as, at least, twice the correlation dimension. Under mild assumptions, this embedding dimension ensures there is a diffeomorphism between the phase space and the reconstructed phase space, Takens1981, so that Y is an observable on the system.
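As an illustration, the delay embedding described above can be sketched in a few lines of Python (an illustrative sketch; the function name and arguments are ours, not the paper's):

```python
import numpy as np

def delay_embed(s, dim, tau):
    """Takens delay embedding of a scalar time series.

    Row i is the reconstructed phase-space point
    [s[i], s[i + tau], ..., s[i + (dim - 1) * tau]].
    """
    s = np.asarray(s)
    n = len(s) - (dim - 1) * tau  # number of complete delay vectors
    return np.column_stack([s[k * tau : k * tau + n] for k in range(dim)])
```

In practice, `dim` would be set to at least twice the estimated correlation dimension, and `tau` to the sampling interval of the sensor.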

2.2 Hash functions

A hash function is any function that can be used to map an entry Y to an integer key k. Since the key is an integer, hash functions effectively result in a discrete image space. Hash functions are often used to efficiently discriminate two different entries, so that slightly different input data should result in a large variation of the associated key, Carter1977. An important objective in choosing the hash function is then to avoid collisions, i.e., two different entries being associated with the same key.

Conversely, the need for identification of similar entries in large databases has led to the use of Locality-Sensitive Hash functions (LSH), Andoni2006 ; Slaney2008. In contrast with most hash functions, they are designed to promote collisions when two entries are close to each other. The idea is that, if two points are close in ℝ^Nₑ, they should be likely to remain close after projection on a lower-dimensional subspace. The Johnson-Lindenstrauss lemma (JLL), Johnson1984, provides useful results to reach this objective and motivates the use of LSHs. Specifically, the JLL provides probabilistic guarantees of the near-preservation of relative distances between objects in high dimension after projection onto random low-dimensional spaces.

Let ξ ∈ ℝ^Nₑ, ‖ξ‖₂ = 1, be a test vector. The function h is an LSH, Andoni2006:

(1)  h(Y) = ⌊(⟨Y, ξ⟩ + b) / ℓ⌋,

where ⌊·⌋ is the floor operator, ℓ a quantization length and b a random offset, with the test vector ξ drawn from the unit ball of ℝ^Nₑ in the sense of the ℓ₂-norm. As an illustration, following Johnson1984 ; Slaney2008, if two observables Y and Y′ are such that ‖Y − Y′‖₂ ≤ ε, with ε small compared with ℓ, they are associated with the same key with a probability larger than 1 − ε/ℓ. On the other hand, the probability that two distant points Y and Y′ appear close to each other in the sense of h depends on the angle between Y − Y′ and the test vector ξ, and remains small.

Let us illustrate the LSH with a simple example. Let Y₁, Y₂ and Y₃ be three vectors such that Y₁ and Y₂ lie close to each other while Y₃ lies further away. Let ξ be a unit-norm Gaussian random test vector and ℓ a quantization length. The different vectors are drawn in Fig. 1. Y₁ and Y₂ are closer to each other, in the sense of their ℓ₂-distance, than to Y₃, and indeed, upon processing with the LSH, vectors Y₁ and Y₂ are associated with the same key while Y₃ is associated with a different one.

Figure 1: Plot of the vectors Y₁ and Y₂ (solid lines) and Y₃ (dashed line). The random test vector ξ is plotted in red.

To discriminate false neighbors with higher probability, projections on several low-dimensional subspaces can be used. Consider the hash function H made of the concatenation of keys, H(Y) = (h₁(Y), …, h_q(Y)). Many choices can be made for the test vectors. For instance, they could be the principal axes of the manifold on which the observable evolves, or may be randomly selected from a Gaussian distribution. Keys (i.e., objects in the image space of H) generate a Voronoï paving of the observable space. Each key is associated with a cell, or state, and close observables are associated with the same key.

Coarseness of the paving depends on the quantization lengths and the number of test vectors. The quantization lengths quantify the minimal length of the cells (two vectors are associated with two different keys if their projections onto a test vector differ by more than the corresponding quantization length, hence the minimal length of the cells) and, as they increase, the cardinality of the image space of H decreases. On the other hand, increasing the number q of test vectors is equivalent to refining the description, i.e., increasing the cardinality of the image space.
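A minimal sketch of such a concatenated LSH key, under the assumption of floor-quantized projections as in Eq. (1) (names and numerical values are illustrative only):

```python
import numpy as np

def lsh_key(y, tests, widths, offsets):
    """Concatenated LSH key: one floor-quantized projection per test vector."""
    y = np.asarray(y, dtype=float)
    return tuple(int(np.floor((xi @ y + b) / w))
                 for xi, w, b in zip(tests, widths, offsets))

# A single unit-norm test vector, quantization length 1, zero offset:
xi = np.ones(3) / np.sqrt(3.0)
key_a = lsh_key([0.10, 0.10, 0.10], [xi], [1.0], [0.0])  # close to key_b
key_b = lsh_key([0.12, 0.10, 0.10], [xi], [1.0], [0.0])
key_c = lsh_key([2.00, 2.00, 2.00], [xi], [1.0], [0.0])  # far from both
```

Here the two nearby vectors collide (same key) while the distant one falls in a different cell; increasing the quantization length coarsens the paving, while adding test vectors refines it.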

3 State-driven control

Thanks to the hash function, the original infinite-dimensional system is approximated as a discrete stochastic process whose state space is spanned by the keys. The system is observable in real-time in this discrete space since evaluating the hash function on the streaming sensor data comes at negligible computational cost. Under a discrete control command to the actuators, hereafter termed the action, the dynamics of the system are modified, and the goal is to find the action which makes the physical system at hand satisfy the control objective, say, a target dynamics. The discretized description of the system with the hash function naturally lends itself to a Markovian framework, Novikov1989 ; Renner2001, which is adopted below.

3.1 Markov Decision Process

Let P(s′ | s, a) be the probability of transition from a state s to a state s′ under an action a ∈ A, A being a uniformly discretized space of possible control actions. Here again, the analysis is restricted to one actuator, but generalization to more actuators is immediate. Actions index the discrete commands available to the controller. To each transition-action (s → s′, a) is associated a transition reward (TR) R(s, a, s′); both the transition probabilities and the TRs can be stored as three-way tensors. The goal of the control strategy is hence to identify the optimal policy π : s ↦ a, which describes the best action to apply when in a given state s, so as to maximize the value V^π(s), defined as the sum of the future expected rewards of the policy when starting at state s:

(2)  V^π(s) = E_π [ Σ_{t ≥ 0} γ^t R_t | s₀ = s ],

where E_π is the expectation operator over all possible state sequences under the policy π. Here, γ is the discount rate, 0 ≤ γ < 1. By weakly accounting for TRs occurring far in the future, the discount rate effectively introduces a time horizon. The values express the expectation of the cumulative TRs of a given policy. Eq. (2) may be reformulated as, Mandl1974:

(3)  V^π(s) = R̄(s, π(s)) + γ Σ_{s′} P(s′ | s, π(s)) V^π(s′),

where R̄(s, π(s)) is the mean TR in state s under the policy π. This corresponds to the celebrated Bellman equation, Bellman1952, written in the discrete setting.
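For a fixed policy, the Bellman equation is linear in the values and can be solved directly when the transition probabilities are known; the following sketch illustrates this (names are ours, and the probabilities and mean rewards are assumed given as arrays):

```python
import numpy as np

def evaluate_policy(P, R, gamma):
    """Solve the Bellman system V = R + gamma * P @ V for a fixed policy.

    P[s, s2] : transition probability from s to s2 under the policy.
    R[s]     : mean transition reward in state s under the policy.
    """
    n = len(R)
    return np.linalg.solve(np.eye(n) - gamma * P, np.asarray(R, dtype=float))
```

This direct solve is only usable when P is reliably estimated; the reinforcement learning approach of Sec. 3.3 avoids estimating P altogether.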

3.2 Identification of the rewards

The transition rewards are unknown a priori and depend on the control objective. As perhaps more familiar to the reader, one could define the cost as the opposite of the reward. Consequently, to learn relevant rewards, consider the cost function associated with the control objective:

(4)  J(t) = j(t) + λ a(t)²,

with j the measure of performance, e.g., the drag coefficient in the example below, and a the control command. The contribution of the control intensity to the cost, with respect to the measure of performance, is weighted by λ.

Let the immediate reward represent, at any given time t, the negative of the contribution to the cost of the transition from the present state s to the state s′, under an action a:

(5)  r_t := −J(t).

r_t is then high when the performance associated with the controlled system is good, and low otherwise. Instead of the reward associated with a particular trajectory in the original, infinite-dimensional, phase space, the average, trajectory-independent, reward associated with all trajectories leading to a given transition should be determined. The ergodicity assumption postulates the equivalence of a temporal and an ensemble average, i.e., here, averaging the immediate rewards over time along a single trajectory is equivalent to averaging over all trajectories realizing the transition. Under this assumption, the mean transition rewards, in the sense of the probability distribution of the transitions, are finally determined during the learning stage via:

(6)  R(s, a, s′) ← R(s, a, s′) + α (r_t − R(s, a, s′)),

with α the learning rate. For the TRs to be associated with the average contribution to the cost function of the transition-action (s → s′, a), the learning rate is set to α = 1/n, where n corresponds to the number of times the transition-action has occurred so far during the learning process.
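The running-average reward update with rate 1/n can be sketched as follows (array shapes and names are ours, not the paper's):

```python
import numpy as np

def update_reward(R, counts, s, a, s2, r):
    """Update the mean TR of transition (s -> s2) under action a.

    With learning rate 1/n, R[s, a, s2] is exactly the running mean of
    the immediate rewards observed so far for this transition-action.
    """
    counts[s, a, s2] += 1
    alpha = 1.0 / counts[s, a, s2]  # learning rate 1/n
    R[s, a, s2] += alpha * (r - R[s, a, s2])
```

With this choice of rate, after n observations the stored TR equals the arithmetic mean of the n immediate rewards, consistent with the ensemble-average interpretation above.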

3.3 Reinforcement learning

To derive a control strategy, one needs to determine a policy which would give the best control action given the current state of the system, as known through the hash function. The rewards associated with an action when in a given state have been learned and this information is now used to derive a control policy to drive the system along transitions and actions associated with the largest rewards.

When the probabilities of transition from a state to another are known, the policy may be identified by means of a dynamic programming algorithm, Bellman1952 ; Mandl1974 . However, the distribution of transition probabilities and values are difficult to reliably evaluate since neither the control policy nor the transition probabilities are stationary during the learning stage. In this situation, Reinforcement Learning is a suitable class of methods for the control of Markov processes, Watkins1992 ; Powell2007 ; Lewis2009 .

Among these methods, the Q-learning approach relies on the estimation of the Q-factors, or action-values, which evaluate the expected reward of a state-action combination:

(7)  Q^π(s, a) = R̄(s, a) + γ Σ_{s′} P(s′ | s, a) V^π(s′),

where R̄(s, a) is the empirical mean TR associated with applying the action a when in the state s. From Eq. (3), the action-value is hence the expected average reward of applying a while in state s and subsequently following the policy π.

As stated previously, the transition probabilities cannot be accurately estimated. However, an iterative estimation of the Q-factors can be derived, Watkins1992 ; Lewis2009 ; Gosavi2011. Letting the initial Q-factors be given, the Q-factor associated with a state s and an action a can be updated at time t as follows:

(8)  Q(s, a) ← Q(s, a) + α_t ( r_t + γ max_{a′} Q(s′, a′) − Q(s, a) ),

where α_t is a learning factor. It can be shown that the estimates converge to the true Q-factors when t → ∞ if the following conditions hold, Watkins1992: i) the TRs are bounded, ii) 0 ≤ α_t < 1, iii) Σ_t α_t = ∞ and iv) Σ_t α_t² < ∞.

The action-value Q(s, a) increases when the reward associated with the 2-tuple (s, a) is good, i.e., when the observed reward exceeds the current estimate, and decreases otherwise. To learn a good policy, the system in different states is stimulated with different actions to estimate the Q-factors. The control policy then selects, for each state, the action associated with the largest Q-factor, Watkins1992 ; Lewis2009.
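One iteration of the tabular Q-factor update, together with the greedy read-out of the policy, can be sketched as (an illustrative sketch; names are ours):

```python
import numpy as np

def q_update(Q, s, a, r, s2, alpha, gamma):
    """One tabular Q-learning update: move Q[s, a] toward the
    bootstrapped target r + gamma * max_a' Q[s2, a']."""
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])

def greedy_policy(Q):
    """For each state, the action with the largest Q-factor."""
    return Q.argmax(axis=1)
```

During learning, actions would be chosen with some exploration (e.g., occasionally at random) so that all state-action pairs are visited, as required by the convergence conditions above.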

3.4 Robustness

A critical property of any realistic and practically useful control strategy is its resilience with respect to unpredictable events. These events encompass exogenous/external perturbations to the flow, sensor noise, actuator noise, etc. A control strategy robust to these perturbations is passive and brings the flow back to its nominal controlled state after the perturbation is gone. As will be demonstrated in the application example below, see section 4.2.3, the present strategy is robust thanks to two properties:

First, the state of the system, as estimated via the LSH, is discrete. The locality-sensitivity property of LSHs implies that a small perturbation of the measurement vector will likely result in the same key as the unperturbed measurement. More precisely, the state estimation is, with high probability, strictly robust to any perturbation of sufficiently small energy.

Second, the proposed method is also robust against perturbations of the flow dynamics. The learning of the control strategy relies on an ensemble average of rewards and Q-factors and hence results in an optimal control policy in the ensemble-mean sense, which confers robustness against noise.

4 Results

4.1 Dynamical system

To illustrate the methodology discussed above, we now consider the Lorenz 63 system, Lorenz1963, defined by the following equations:

(9)  dx/dt = σ (y − x) + u,
     dy/dt = x (ρ − z) − y,
     dz/dt = x y − β z,

with the common parameters σ = 10, ρ = 28 and β = 8/3. The dynamics are governed by a chaotic attractor, structured as two “wings” around two fixed points of the system. The state vector evolves on a wing, turning around a fixed point, before eventually jumping to the other wing. The term u is the action on the first component of the state vector. The chosen control objective is to remain on the “left” wing. The measure of performance is given as the distance between the state vector and the left fixed point of the system, X₋ = (−√(β(ρ − 1)), −√(β(ρ − 1)), ρ − 1).

The observable is constructed from the time series of a single measured component, sampled at regular time intervals. In this illustration, only the forcing on the first component is different from zero, so that the Lorenz system is controlled only through the time-derivative of the first component of its state vector. This mimics a realistic scenario, where sensors and actuators are distinct. Actuation values lie in a bounded symmetric interval, uniformly discretized, leading to a finite set of actuations to control the system. The cost function to minimize is given by Eq. (4) and the control command is penalized with the weight λ.

The embedding dimension is set as described in Sec. 2.1. The first three singular vectors of a Hankel matrix built on the measurements are used as the test vectors of the LSHs. The chosen quantization length leads to a set of 14 keys, see Fig. 4.
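For reference, the controlled Lorenz 63 dynamics can be integrated with a simple explicit scheme (a sketch with the standard parameters; the integrator and step size are our choices, not the paper's):

```python
import numpy as np

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def lorenz_step(state, u_x, dt=1e-3):
    """One explicit Euler step of the Lorenz 63 system, with the
    control u_x acting on the first component only."""
    x, y, z = state
    dxdt = SIGMA * (y - x) + u_x
    dydt = x * (RHO - z) - y
    dzdt = x * y - BETA * z
    return state + dt * np.array([dxdt, dydt, dzdt])

# Left fixed point: the target of the control objective.
X_MINUS = np.array([-np.sqrt(BETA * (RHO - 1.0)),
                    -np.sqrt(BETA * (RHO - 1.0)),
                    RHO - 1.0])
```

A closed-loop run would, at each sampling instant, hash the delay-embedded measurement into a state and feed back the greedily selected actuation as `u_x`.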

Figure 2: (a): Identified clusters (color-coded) for the Lorenz 63 system. (b): Mean state transition probabilities. The transition matrix has been iterated five times for readability. (c): Transition rewards associated with a state transition, for the NULL command. The TRs have been rescaled and the colormap saturated, in order to improve readability. (d): Q-factors.
Figure 3: Uncontrolled Lorenz system in dashed black. Controlled response in solid red. The identified strategy is applied from the time indicated onward. (a): Phase portrait. (b): Time series of the controlled component.

Once the test vectors are determined from the Hankel matrix, the states are to be identified via the LSHs, see Fig. 2. It is hence possible to infer both the dynamics of the system from the state transitions and the rewards associated with the objective. The mean transition probabilities from one state to another, after five time increments in order to improve visualization, are plotted in Fig. 2. As expected for the Lorenz system, the dynamics are seen to be driven by two main cycles, see Fig. 4. There are two statistically dominant sequences of transitions cycling around all states between 1 (resp. 5) and 4 (resp. 8), corresponding to the right (resp. left) wing of the Lorenz attractor. States 9 to 11 and 12 to 14 represent sub-transitions from a main cluster to another one.

Figure 4: Transitions between clusters for the Lorenz system. Only the first two most probable transitions are represented. The dark red arrows represent the most probable transitions from one cluster to another, while the pale red represents the second most probable transitions.

From the observations of the system, one learns the transition rewards matrix and the Q-factors matrix by repeatedly applying Eq. (6) and Eq. (8). The transition matrix is sparse, see Fig. 2, and many transitions hence hardly ever occur. Thus, the learned TRs and Q-factors matrices are also sparse, see Fig. 2. The difference between one wing and the other is clearly discernible in the rewards, the first block being associated with negative rewards while the second block is associated with positive rewards. By construction of the rewards, the use of a strong (expensive) command is also discouraged, as can be seen in the Q-factors, Fig. 2. The identified control strategy succeeds in staying on the left wing, see Fig. 3.

4.2 Two-dimensional numerical flow

To further illustrate the methodology discussed above, consider a 2-D laminar flow around a circular cylinder, in two situations: a fixed or a randomly time-varying angle of the incoming flow.

4.2.1 Configuration of the test case

The Reynolds number of the flow is based on the cylinder diameter and the upstream flow velocity. Details of the simulation can be found in Lemaitre2003. The observable is constructed from the time series of a single pressure sensor, sampled at regular time intervals. This sensor is located on the cylinder surface, at a fixed angle from the upstream stagnation point. In this configuration, the pressure signal oscillates with a period of about 9 time units, and the drag with half that period.

Actuation, i.e., control of the flow, is achieved via blowing or suction through the whole cylinder surface. Actuation values lie within a symmetric interval, uniformly discretized, ranging from suction to blowing, leading to a finite set of actuations to control the system. The cost function to minimize is given in Eq. (4): it corresponds to the drag (through the drag coefficient) induced by the cylinder, penalized with the intensity of the command. The penalty was chosen so that the resulting command remains within the operating range of the actuator. The embedding dimension is set as in Sec. 2.1. The first five singular vectors of a Hankel matrix built on the measurements are used as the test vectors of the LSHs. The chosen quantization length leads to a small set of keys, see Fig. 7.

Two situations are considered to illustrate the proposed control algorithm. First, the incident angle of the incoming flow is kept constant at its zero nominal value, Sec. 4.2.2. Alternatively, in Sec. 4.2.3, the angle is a random process, smooth in time, whose realizations range from -20 to 20 degrees around the nominal value with a uniform probability distribution, see Fig. 5.

Figure 5: (a): Transition rewards associated with a given state transition, as a function of the command. The TRs have been rescaled. The drawings illustrate the actuation (suction or blowing). (b): Time-evolution of the incident angle of the incoming flow, for the noiseless (red) and noisy (black) case.

4.2.2 Noiseless case: nominal incidence

Once the test vectors are determined from the Hankel matrix, and the states are identified via the LSHs, it is possible to infer the dynamics of the system and the associated rewards. The learned transition reward is plotted in Fig. 5. It clearly exhibits a maximum which corresponds to the best compromise between a sufficient decrease of the drag and a reasonable increase of the actuation cost. The estimated transition probabilities from one state to another are plotted in Fig. 6. In the present case, the dynamics are seen to be rather periodic, with a statistically dominant sequence of transitions cycling around all states between 1 and 8, see Fig. 7. Other states are “transient” states between two stages of the main cycle. For instance, a transition may leave the main cycle with a low probability, but the subsequent transitions quickly bring the system back to it. Further analysis of the transition matrix can give more insights into the dynamics and into the relevance of other sequences, e.g., via stability analysis, Kaiser2014, symbolic dynamics based on keys, Lusseyran2008, or the Kullback-Leibler entropy, Kaiser2014.

From the observations of the system, one learns the transition rewards matrix and the Q-factors matrix by repeatedly applying Eq. (6) and Eq. (8). As in the Lorenz system case (see Sec. 4.1), the transition matrix is sparse, as can be appreciated from Fig. 6, and many transitions hence hardly ever occur. Thus, the learned TRs and Q-factors matrices are also sparse, see Fig. 6.

Figure 6: (a): Mean state transition probabilities. (b): Transition rewards associated with a state transition, for the NULL command. The TRs have been rescaled and the colormap saturated, in order to improve readability. (c): Q-factors.

Figure 7: Transitions between clusters for the cylinder flow. Only the first two most probable transitions are represented. The dark red arrows represent the most probable transitions from a cluster to another one, while the pale red represents the second most probable transitions.

The performance of the control strategy is assessed in terms of a performance indicator η, defined as the difference between the time-averaged cost under the NULL strategy (zero command) and the time-averaged cost under the learned policy, scaled with the difference between the NULL cost and the cost of an optimal time-invariant control strategy (hereafter ORA):

(10)  η = (J̄_NULL − J̄_π) / (J̄_NULL − J̄_ORA).

The evolution of the performance indicator as a function of the learning effort, defined as the number of measurements during the learning phase, is plotted in Fig. 8. The performance indicator increases with the amount of learning for computing the TRs, but quickly reaches a plateau. The TR matrix being sparse, the information of the learning stage focuses onto a limited number of unknowns and the Q-factors quickly converge.
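The performance indicator admits a direct computation from the three time-averaged costs; the following sketch assumes the natural scaling where lower cost is better (argument names are ours):

```python
def performance_indicator(J_policy, J_null, J_ora):
    """Scaled performance indicator: 0 means no better than the NULL
    strategy, 1 means matching the optimal time-invariant (ORA)
    strategy, and values above 1 mean outperforming it."""
    return (J_null - J_policy) / (J_null - J_ora)
```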

As expected, the values computed with the learned policy are larger, on average, than the values computed with the ORA policy, see Tab. 1.

Policy NULL ORA Present strategy
Average value -15.8 9.5 13.2
Table 1: Average expected value associated with the NULL, ORA and present strategy policies. The learning effort for the rewards (resp. the Q-factors) was .
Figure 8: Illustration of the control policy. (a): Performance indicator with respect to the learning effort of the transition rewards matrix . Colors (from blue to red) encode different efforts in learning . (b): Actuation as a function of time, . (c): Time-evolution of the drag coefficient under three different control policies: zero command (NULL, black), ORA strategy (blue), and the optimal policy from the present approach (red).

The resulting control command is plotted in Fig. 8 and is seen to exhibit an oscillatory behavior. The associated drag coefficient of the cylinder flow is plotted in Fig. 8 for the present approach, as well as for the NULL and ORA strategies. The identified control is seen to perform well, and the resulting drag coefficient resembles the one given by the ORA control.

To illustrate the impact of the control on the system, the time-averaged pressure field around the cylinder is plotted in Fig. 9, both with and without control. When control is applied, the pressure difference between the upstream stagnation point and the rear cylinder vicinity is significantly reduced. Further insights about the control effect can be gained by examining Fig. 10, where the time-averaged streamlines are plotted. With control, i.e., with a negative normal velocity at the cylinder surface, small recirculation bubbles significantly weaken, or even vanish, and the separation of the boundary layer from the cylinder surface is postponed further downstream, reducing the effective width of the wake. The length of the recirculation bubble drops significantly between the NULL command (no control), see Fig. 9, and the command identified by the present control approach, see Fig. 9. Suction at the cylinder surface tends to slightly increase the viscous drag (thinner boundary layers with larger velocity gradients) but significantly decreases the width of the wake and the pressure defect at the back of the cylinder, producing a lower drag.

Figure 9: Mean pressure field. (a): for the NULL command. (b): for the present identified control strategy. Note that only a part of the computational domain is plotted.
Figure 10: Vorticity field and streamlines for the mean field. (a): for the NULL command. (b): for the present identified control strategy. Note that only a part of the computational domain is plotted. Colors encode the sign of the vorticity.

4.2.3 Measurements with noise

To investigate the robustness of our control strategy, the angle of the incident flow is made to vary randomly between −20 and +20 degrees around its nominal value, with a uniform probability distribution and a smooth time-evolution, see Fig. 5 for a typical realization. This mimics a typical class of perturbations to the system at hand and allows the assessment of the robustness of the control strategy.

As in the noiseless case considered in Sec. 4.2.2 above, the transition probability, rewards and Q-factors matrices are all sparse (not shown for the sake of brevity). The performance indicator of the derived policy is depicted in Fig. 11. In contrast with the noiseless case, it is here strongly dependent on the learning effort. At early stages of the learning process, the influence of the noise is prominent and the dynamics are not properly captured, which ultimately leads to poor control strategies.

As in Sec. 4.2.2, the expected values computed from the identified strategy are larger, on average, than those from the ORA policy, see Tab. 2. This result indicates that the present approach is able to uncover an efficient control strategy, here achieving a significantly lower cost than an optimal time-invariant control strategy, even in a noisy environment.

Policy          NULL    ORA    Present strategy
Average value   -6.8    3.0    8.0

Table 2: Average expected value associated with the NULL, ORA and present strategy policies. The learning effort for the rewards (resp. the Q-factors) was .
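The average expected values reported in Tab. 2 compare fixed policies. One way to estimate such a value, once a Markov model of the dynamics is available, is to average discounted rewards over Monte Carlo rollouts. The sketch below is illustrative only: the two-state chain, the transition probabilities and rewards, and all parameter values are toy assumptions, not taken from the paper.

```python
import random

def policy_value(step, policy, s0, gamma=0.95, n_rollouts=200, horizon=100, seed=0):
    """Monte Carlo estimate of the expected discounted value of `policy`.

    `step(s, a, rng) -> (s_next, reward)` samples the (learned) Markov model.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_rollouts):
        s, value, discount = s0, 0.0, 1.0
        for _ in range(horizon):
            s, r = step(s, policy(s), rng)
            value += discount * r
            discount *= gamma
        total += value
    return total / n_rollouts

# Toy two-state chain: action 1 pushes the system toward the rewarding state 1,
# action 0 (no actuation) leaves it where it is.
def step(s, a, rng):
    if a == 1:
        s_next = 1 if rng.random() < 0.9 else 0
    else:
        s_next = s
    return s_next, (1.0 if s_next == 1 else -1.0)

value_on = policy_value(step, lambda s: 1, s0=0)   # always actuate
value_off = policy_value(step, lambda s: 0, s0=0)  # NULL-like policy
```

On this toy chain the actuated policy collects mostly positive rewards while the NULL-like policy stays in the penalized state, so the gap between the two estimated values mirrors the kind of comparison made in Tab. 2.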

The resulting control command is plotted in Fig. 11 and is seen to exhibit an oscillatory behavior. The associated drag coefficient of the cylinder flow is plotted in Fig. 11 for the present approach as well as for the NULL and ORA strategies. Again, the identified control is seen to perform well and resembles that given by ORA.

Figure 11: Illustration of the control policy. (a): Performance indicator with respect to the learning effort of the transition rewards matrix. Colors (from blue to red) encode different efforts in learning. (b): Actuation as a function of time. (c): Time-evolution of the drag coefficient under three different control policies: zero command (NULL, black), the ORA strategy (blue), and the optimal policy from the present approach (red).

5 Concluding remarks

This work has presented an experiment-oriented control strategy that requires neither prior knowledge of the physical system to be controlled nor significant computational resources. The strategy allows a control policy to be learned from scarce, point sensors providing very limited information on the system at hand. From the sensors’ streaming data, a phase space is built using hash functions, which act as a kernel with which the measurements are convolved in real time. This yields a discrete, low-dimensional space in which the dynamics of the system are approximated. Ensemble-averaged rewards associated with transitions from one discrete state to another are estimated during an online learning sequence. These rewards are directly related to the control objective and tend to rank state transitions by their impact on the control cost function. A reinforcement learning algorithm is then used to derive the optimal control policy, promoting transitions associated with good rewards.
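The learning loop summarized above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the quantization-based hash, the learning rate `alpha`, the discount `gamma` and the epsilon-greedy exploration are all illustrative assumptions standing in for the authors' specific choices.

```python
import numpy as np

def hash_state(measurements, resolution=0.5):
    """Aggregate a continuous sensor vector into a discrete state key
    by quantizing each component (a simple hash-function kernel)."""
    return tuple(int(np.floor(m / resolution)) for m in measurements)

class QLearner:
    """Tabular Q-learning over (state, action) pairs, as in Watkins & Dayan [16]."""

    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.q = {}  # Q-factors, filled lazily as states are visited (sparse)
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, s, rng):
        # Epsilon-greedy: mostly exploit the current Q-factors, sometimes explore.
        if rng.random() < self.epsilon:
            return self.actions[rng.integers(len(self.actions))]
        return max(self.actions, key=lambda a: self.q.get((s, a), 0.0))

    def update(self, s, a, reward, s_next):
        # Q(s,a) <- (1 - alpha) Q(s,a) + alpha (r + gamma max_a' Q(s',a'))
        best_next = max(self.q.get((s_next, a2), 0.0) for a2 in self.actions)
        old = self.q.get((s, a), 0.0)
        self.q[(s, a)] = (1 - self.alpha) * old \
            + self.alpha * (reward + self.gamma * best_next)
```

In a streaming setting, each new measurement is hashed to a state key, a command is chosen, and `update` is called once the resulting transition and its reward are observed, so the Q-factors table stays sparse and grows only with the visited states.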

The resulting method is compliant with actual experimental configurations where instrumentation is limited and spatially constrained. Owing to the use of kernel hash functions and effective state aggregation in a discrete phase space, the method runs in real time and allows closed-loop control. Its discrete and ensemble-averaged nature also brings intrinsic robustness against perturbations of the flow.

This approach has been illustrated on two test cases. Control of a Lorenz system is achieved by measuring only one of its components; to mimic a realistic scenario, the actuation is applied to a different component. The method is also illustrated on the two-dimensional flow around a circular cylinder, where measurements are provided by a single wall-mounted pressure sensor and actuation is achieved by blowing or suction of fluid at the cylinder surface. The drag coefficient is significantly reduced, reaching essentially the same performance as the control policy given by the ORA strategy.

More generally, the method presented in this work is readily applicable to physical systems exhibiting a causal link between the actuators and the cost functional as evaluated from the sensors. It does not rely on a prior model and instead learns directly from observing the system under stimulation by the actuators, making it suitable for practical configurations. An identified limitation of the proposed approach is the number of keys one needs to consider if the relevant dynamics of the system are very rich and/or a very fine control law is required. In this situation, the number of entries of the associated matrices grows, possibly requiring more data for learning. Approximation techniques have, however, been used to alleviate this limitation [33].

Current efforts concern the experimental control of the turbulent flow over an open cavity using the present approach and will be the subject of a subsequent publication. Further developments focus on the convolution kernels, an improved evaluation of suitable rewards and milder assumptions for the reinforcement learning of the control policy.

References

  • [1] J. Gerhard, M. Pastoor, R. King, B.R. Noack, A. Dillmann, M. Morzynski, and G. Tadmor. Model-based control of vortex shedding using low-dimensional Galerkin models. AIAA J., 4262(2003):115–173, 2003.
  • [2] M. Bergmann and L. Cordier. Optimal control of the cylinder wake in the laminar regime by trust-region methods and POD reduced-order models. J. Comp. Phys., 227(16):7813–7840, 2008.
  • [3] Z. Ma, S. Ahuja, and C.W. Rowley. Reduced-order models for control of fluids using the eigensystem realization algorithm. Theo. Comp. Fluid Dyn., 25(1-4):233–247, 2011.
  • [4] W.T. Joe, T. Colonius, and D.G. MacMynowski. Feedback control of vortex shedding from an inclined flat plate. Theo. Comp. Fluid Dyn., 25(1-4):221–232, 2011.
  • [5] L. Mathelin, L. Pastur, and O. Le Maître. A compressed-sensing approach for closed-loop optimal control of nonlinear systems. Theo. Comp. Fluid Dyn., 26(1-4):319–337, 2012.
  • [6] L. Cordier, B.R. Noack, G. Tissot, G. Lehnasch, J. Delville, M. Balajewicz, G. Daviller, and R.K. Niven. Identification strategies for model-based control. Exp. Fluids, 54(8):1–21, 2013.
  • [7] C. Lee, J. Kim, D. Babcock, and R. Goodman. Application of neural networks to turbulence control for drag reduction. Phys. Fluids, 9(6):1740–1747, 1997.
  • [8] M.A. Kegerise, R.H. Cambell, and L.N. Cattafesta. Real time feedback control of flow-induced cavity tones - part 2: Adaptive control. J. Sound Vib., 307:924–940, 2007.
  • [9] S.-C. Huang and J. Kim. Control and system identification of a separated flow. Phys. Fluids, 20(10):101509, 2008.
  • [10] A. Hervé, D. Sipp, P.J. Schmid, and M Samuelides. A physics-based approach to flow control using system identification. J. Fluid Mech., 702:26–58, 2012.
  • [11] N. Gautier, J.-L. Aider, T. Duriez, B.R. Noack, M. Segond, and M.W. Abel. Closed-loop separation control using machine learning. J. Fluid Mech., 770:442–457, 2015.
  • [12] S. Brunton and B. Noack. Closed-loop turbulence control: Progress and challenges. App. Mech. Rev., 67(5):050801, 2015.
  • [13] M. Slaney and M. Casey. Locality-sensitive hashing for finding nearest neighbors [lecture notes]. IEEE Signal Process. Mag., 25(2):128–131, 2008.
  • [14] E. Kaiser, B.R. Noack, L. Cordier, A. Spohn, M. Segond, M. Abel, G. Daviller, J. Östh, S. Krajnović, and R.K. Niven. Cluster-based reduced-order modelling of a mixing layer. J. Fluid Mech., 754:365–414, 9 2014.
  • [15] P. Mandl. Estimation and control in Markov chains. Adv. App. Probab., pages 40–60, 1974.
  • [16] C. Watkins and P. Dayan. Q-learning. Mach. Learn., 8(3-4):279–292, 1992.
  • [17] A. Gosavi. Target-sensitive control of Markov and semi-Markov processes. Int. J. Control Autom., 9(5):941–951, 2011.
  • [18] C.T. Lin and C.P. Jou. Controlling chaos by GA-based reinforcement learning neural network. IEEE T. Neural Networ., 10(4):846–859, 1999.
  • [19] S. Gadaleta and G. Dangelmayr. Optimal chaos control through reinforcement learning. Chaos, 9(3):775–788, 1999.
  • [20] P. Grassberger and I. Procaccia. Measuring the strangeness of strange attractors. Physica D, 9:189–208, 1983.
  • [21] F. Takens, D.A. Rand, and L.S. Young. Dynamical systems and turbulence. Lect. Notes Math., 898(9):366, 1981.
  • [22] J.L. Carter and M.N. Wegman. Universal classes of hash functions. In Proc. 9th Ann. ACM Theor. Comp., pages 106–112. ACM, 1977.
  • [23] A. Andoni and P. Indyk. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. In Proc. 47th Ann. IEEE Found. Comp. Sci., pages 459–468. IEEE, 2006.
  • [24] W. Johnson and J. Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. Contemp. Math., 26:189–206, 1984.
  • [25] E.A. Novikov. Two-particle description of turbulence, Markov property, and intermittency. Phys. Fluids, 1(2):326–330, 1989.
  • [26] C. Renner, J. Peinke, and R. Friedrich. Experimental indications for Markov properties of small-scale turbulence. J. Fluid Mech., 433:383–409, 2001.
  • [27] R. Bellman. On the theory of dynamic programming. P. Natl. Acad Sci. USA, 38(8):716, 1952.
  • [28] W. Powell. Approximate Dynamic Programming: Solving the Curses of Dimensionality, volume 703. John Wiley & Sons, 2007.
  • [29] F. Lewis and D. Vrabie. Reinforcement learning and adaptive dynamic programming for feedback control. Circuits Syst. Mag., IEEE, 9(3):32–50, 2009.
  • [30] E.N. Lorenz. Deterministic nonperiodic flow. J. Atmos. Sci., 20(2):130–141, 1963.
  • [31] O.P. Le Maître, R.H. Scanlan, and O.M. Knio. Estimation of the flutter derivatives of an NACA airfoil by means of Navier–Stokes simulation. J. Fluids Struct., 17(1):1–28, 2003.
  • [32] F. Lusseyran, L.R. Pastur, and C. Letellier. Dynamical analysis of an intermittency in an open cavity flow. Phys. Fluids, 20(11):114101, 2008.
  • [33] A.A. Gorodetsky, S. Karaman, and Y.M. Marzouk. Efficient high-dimensional stochastic optimal motion control using tensor-train decomposition. In Robotics: Science and Systems XI, Sapienza University of Rome, Italy, July 13-17, 2015.