1 Introduction
While the design and capability of aircraft, and more generally of complex systems, have significantly improved over the years, closed-loop control can bring further improvement in terms of performance and robustness to unmodeled perturbations. In the context of flow control, however, closed-loop control suffers from severe limitations preventing its use in many situations. As a paradigmatic example, a typical turbulent flow involves a large range of spatial scales and exhibits rich and fast dynamics. High-frequency phenomena hence require a control command fast enough to adjust to the current state of the quickly evolving flow. Indeed, frequencies of interest routinely lie above 1 kHz, leaving a very short period of time for the controller to synthesize the command based on its knowledge of the state of the system.
While flow manipulation and open-loop control are common practice, much fewer successful closed-loop control efforts are reported in the literature. Further, many of them rely on unrealistic assumptions. For example, Model Predictive Control (MPC) approaches require very significant computational resources to solve the governing equations in real-time. If a Reduced-Order Model (ROM) is employed, as is common practice to alleviate the CPU burden, one often needs to observe the whole system for deriving the ROM as, for instance, the velocity or pressure fields with Proper Orthogonal Decomposition (POD), see Gerhard2003 ; Bergmann2008 ; Ma2011 ; Joe2011 ; Mathelin2012 ; Cordier2013 . Hence, flow control with this class of approaches is restricted to numerical simulations or experiments in a wind tunnel equipped with sophisticated visualization tools such as Particle Image Velocimetry (PIV).
This paper discusses a practical strategy for closed-loop control of complex flows which alleviates the limitations of current methods. The present work relies on a change of paradigm: we want to derive a general nonlinear closed-loop flow control methodology suitable for actual configurations and as realistic as possible. No a priori model, nor even a model structure, describing the dynamics of the system is required to be available. The approach proposed is data-driven only, with the sole information about the system given by scarce and spatially-constrained sensors, and exploits statistical learning methods.
This is the framework one typically deals with in practical situations, where the amount of information on the system at hand is limited and usually comes from a few sensors located at the boundary of the fluid domain, e.g., on solid surfaces. The resulting information takes the form of short time-dependent vectors with as many entries as sensors.
Among the few earlier efforts relying on streaming measurements from a few sensors, a trained neural network using surface measurements is employed to reduce the drag of a turbulent channel flow with an opposition control strategy in Lee1997 . In Kegerise2007 , pressure sensors and an autoregressive model (AutoRegression with eXogenous inputs, ARX) are used to reduce flow-induced cavity tones. An autoregressive approach is also followed in Huang_Kim_2008 and Herve2012 , while a genetic programming technique is adopted in Gautier2015 to control a separated boundary layer. Interested readers may refer to Brunton2015 for a comprehensive review of the topic. The present approach aims at deriving an efficient, yet robust, nonlinear closed-loop control method compliant with actual situations. Among its distinctive features compared to other methods is a combination of both performance and fast learning.

To facilitate learning about the system dynamics from the time-dependent measurements, and the subsequent derivation of a control strategy, the problem must be amenable to a finite dimension. Hence, one needs to discretize the infinite-dimensional time series of the sensors' information. To this end, the streaming data are convolved with a kernel which should result in a discrete image space. Locality-Sensitive Hash (LSH) functions are used for that purpose, Slaney2008 , which results in a low-dimensional discrete state space. Transitions from state to state in this discrete space describe the dynamics, Kaiser2014 , and allow the analysis to learn, and update in real-time, a Markov process model of the system. A suitable discretization of the dynamics allows the derivation of a reinforcement learning-based control strategy of the identified Markov process model of the system. The control of Markov processes is a mature field, Mandl1974 , and reinforcement learning, Watkins1992 ; Gosavi2011 , is a suitable class of methods for the control of Markov processes, see for instance Lin1999 ; Gadaleta1999 for the control of 1D and 2D chaotic maps. As will be seen in the application examples below, the resulting control strategy is data-driven only, intrinsically robust against perturbations in the flow and does not require significant computational resources nor prior knowledge of the flow. The proposed approach is experiment-oriented and ongoing efforts are carried out to demonstrate it on an experimental open cavity flow in a turbulent regime. This will be the subject of a subsequent publication.
The paper is organized as follows. The framework and basics of how hash functions are used to generate a low-dimensional state space are discussed in Section 2. Section 3 is concerned with learning an efficient control strategy for the system modeled as a stochastic process living in a low-dimensional space. The resulting control strategy is illustrated and discussed in the case of the control of a Lorenz 63 system and the drag reduction of a cylinder in a two-dimensional flow in Section 4. Concluding remarks close the paper in Section 5.
2 Hash functions for reduced-order modeling
2.1 Preliminaries
Consider a dynamical system whose state $s(t)$ evolves on a manifold $\mathcal{M}$:

$\dot{s} = F(s),$

with $s \in \mathcal{M}$ the state of the system and $F$ the flow operator. Let $m: \mathcal{M} \to \mathbb{R}$ be a sensor function. In the sequel, the number of sensors will be taken to be one but the generalization to more sensors is immediate. The observed data are defined as $y(t) = m(s(t))$.
Let $\Delta t$ be the sampling period of the measurement system. Sampling has to be fast enough to capture the small time scales of the dynamics of $s$. The data coming from the sensor are embedded in a reconstructed phase space:

$\mathbf{y}(t) = \left( y(t), y(t - \Delta t), \ldots, y(t - (q-1)\,\Delta t) \right) \in \mathbb{R}^q.$
The correlation dimension of the attractor is estimated from the time series, for instance using the Grassberger–Procaccia algorithm, Grassberger1983 . It allows the definition of the embedding dimension $q$ as, at least, twice the correlation dimension. Under mild assumptions, this embedding dimension ensures there is a diffeomorphism between the phase space and the reconstructed phase space, Takens1981 , so that $\mathbf{y}$ is an observable on the system.

2.2 Hash functions
A hash function $h$ is any function that can be used to map an entry $\mathbf{y}$ to a key $k = h(\mathbf{y})$. Since the key is an integer, hash functions effectively result in a discrete image space. Hash functions are often used to efficiently discriminate two different entries, so that slightly different input data should result in a large variation of the associated key, Carter1977 . An important objective in choosing the hash function is then to avoid collisions, i.e., two different entries being associated with the same key.
Conversely, the need for identifying similar entries in large databases has led to the use of Locality-Sensitive Hash functions (LSH), Andoni2006 ; Slaney2008 . In contrast with most hash functions, they are designed to promote collisions when two entries are close to each other. The idea is that, if two points are close in the original space, they should be likely to remain close after a projection onto a lower-dimensional subspace. The Johnson–Lindenstrauss lemma (JLL), Johnson1984 , provides useful results to reach this objective and motivates the use of LSHs. Specifically, the JLL provides probabilistic guarantees of the near preservation of relative distances between high-dimensional objects after projection onto random low-dimensional spaces.
Let $\mathbf{x} \in \mathbb{R}^q$, $\|\mathbf{x}\|_2 = 1$, be a test vector. The following function $h$ is an LSH, Andoni2006 :

(1) $h(\mathbf{y}) = \left\lfloor \left( \mathbf{x}^\mathsf{T} \mathbf{y} + b \right) / \delta \right\rfloor,$

where $\lfloor \cdot \rfloor$ is the floor operator, $\delta$ a quantization length and $b$ a random offset drawn uniformly in $[0, \delta]$. As an illustration, following Johnson1984 ; Slaney2008 , two observables $\mathbf{y}$ and $\mathbf{y}'$ that are close in the sense of the Euclidean distance, relative to the quantization length $\delta$, are associated with the same key with high probability. On the other hand, the probability that two distant points $\mathbf{y}$ and $\mathbf{y}''$ appear close to each other after projection is a function of the angle between $\mathbf{y} - \mathbf{y}''$ and $\mathbf{x}$ and can be bounded.

Let us illustrate the LSH with a simple example. Let $\mathbf{y}_1$, $\mathbf{y}_2$ and $\mathbf{y}_3$ be three vectors of $\mathbb{R}^2$ such that $\mathbf{y}_1$ and $\mathbf{y}_2$ are close to each other while $\mathbf{y}_3$ lies further away.
Let $\mathbf{x}$ be a unit-norm Gaussian vector of $\mathbb{R}^2$ and $\delta$ a given quantization length. The different vectors are drawn in Fig. 1. Upon processing with the LSH, $\mathbf{y}_1$ and $\mathbf{y}_2$, which are closer to each other in the sense of their Euclidean distance than to $\mathbf{y}_3$, are indeed associated with the same key, while $\mathbf{y}_3$ is associated with a different one.
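A minimal sketch of such a single-projection LSH, with the test vector fixed for reproducibility (in practice it would be drawn as a unit-norm Gaussian vector) and hypothetical offset and quantization length:

```python
import numpy as np

def lsh_key(y, x, b=0.0, delta=1.0):
    """Single-projection LSH key: floor((x . y + b) / delta)."""
    return int(np.floor((x @ y + b) / delta))

# Unit-norm test vector, fixed here for reproducibility.
x = np.array([0.6, 0.8])

y1 = np.array([1.0, 1.0])    # projection 1.40 -> key 1
y2 = np.array([1.1, 0.9])    # projection 1.38 -> key 1 (collides with y1)
y3 = np.array([3.0, -2.0])   # projection 0.20 -> key 0
keys = [lsh_key(y, x) for y in (y1, y2, y3)]
```

The two nearby vectors collide on the same key, while the distant one receives a different key, which is precisely the locality-sensitivity property exploited here.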
To discriminate false neighbors with higher probability, projections on several low-dimensional subspaces can be used. Consider the hash function $H$ made of the concatenation of several keys, $H(\mathbf{y}) = (h_1(\mathbf{y}), \ldots, h_{n_x}(\mathbf{y}))$. Many choices can be made for the test vectors. For instance, they could be the principal axes of the manifold on which the observable evolves, or may be randomly selected from a Gaussian distribution. Keys (i.e., objects in the image space of $H$) generate a Voronoï paving of the observable space. Each key is associated with a cell, or state, and close observables are associated with the same key.

Coarseness of the paving depends on the quantization lengths and the number of test vectors. The quantization lengths set the minimal length of the cells (two vectors are associated with two different keys as soon as their projections onto a test vector differ by more than the corresponding quantization length) and, as they increase, the cardinality of the image space of $H$ decreases. On the other hand, increasing the number of test vectors is equivalent to refining the description, i.e., increasing the cardinality of the set of keys.
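A concatenation of several such projections can be sketched as follows (the number of test vectors, the dimension and the quantization length are hypothetical):

```python
import numpy as np

def multi_lsh_key(y, X, b, delta):
    """Concatenated LSH key: one floor-projection per test vector (rows of X)."""
    return tuple(np.floor((X @ y + b) / delta).astype(int))

rng = np.random.default_rng(1)
X = rng.standard_normal((3, 4))                 # three test vectors in R^4
X /= np.linalg.norm(X, axis=1, keepdims=True)   # normalize each projection axis
key = multi_lsh_key(rng.standard_normal(4), X, b=0.0, delta=0.5)
```

Each observable is thus mapped to a short tuple of integers, the "state" identifying its Voronoï cell; two observables collide only if they agree on every projection.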
3 State-driven control
Thanks to the hash function, the original infinite-dimensional system is approximated as a discrete stochastic process whose state space is spanned by the keys. The system is observable in real-time in this discrete space since evaluating the hash function on the streaming sensor data comes at a negligible computational cost. Under a discrete control command to the actuators, hereafter termed the action, the dynamics of the system are modified, and the goal is to find the action which makes the physical system at hand satisfy the control objective, say, a target dynamics. The discretized description of the system with the hash function naturally lends itself to a Markovian framework, Novikov1989 ; Renner2001 , which is adopted below.
3.1 Markov Decision Process
Let $P(s' \mid s, a)$ be the probability of transition from a state $s$ to $s'$ under an action $a \in \mathcal{A}$, with $\mathcal{A}$ a uniformly discretized space of possible control actions. Here again, the analysis is restricted to one actuator but the generalization to more actuators is immediate. Actions index the discrete commands available to the controller. To each transition-action $(s \to s', a)$ is associated a transition reward (TR) $r(s, s', a)$; both the transition probabilities $P$ and the rewards $r$ can be viewed as three-way tensors, indexed by $s$, $s'$ and $a$. The goal of the control strategy is hence to identify the optimal policy $\pi^\star$, which describes the best action to apply when in a given state so as to maximize the value $V^\pi(s)$, defined as the sum of the future expected rewards of the policy $\pi$ when starting at state $s$:

(2) $V^\pi(s) = \mathbb{E}_\pi\!\left[ \sum_{t=0}^{\infty} \gamma^{t}\, r_t \,\middle|\, s_0 = s \right],$
where $\mathbb{E}_\pi$ is the expectation operator over all possible sequences of states under the policy $\pi$, and $\gamma \in [0, 1)$ is the discount rate. By weakly weighting TRs occurring far in the future, the discount rate effectively introduces a time horizon. The values express the expectation of the cumulative TRs of a given policy. Eq. (2) may be reformulated as, Mandl1974 :
(3) $V^\pi(s) = \bar{r}(s, \pi(s)) + \gamma \sum_{s'} P(s' \mid s, \pi(s))\, V^\pi(s'),$

where $\bar{r}(s, \pi(s))$ is the mean TR in state $s$ under the policy $\pi$. This corresponds to the celebrated Bellman equation, Bellman1952 , written in the discrete setting.
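As a toy illustration of Eq. (3), the value of a fixed policy on a hypothetical two-state chain solves a small linear system (transition probabilities, mean rewards and discount rate are arbitrary):

```python
import numpy as np

# Hypothetical two-state chain under a fixed policy:
# P[s, s'] transition probabilities, rbar[s] mean TR, gamma discount rate.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
rbar = np.array([1.0, 0.0])
gamma = 0.9

# Bellman equation V = rbar + gamma * P @ V  =>  (I - gamma * P) V = rbar
V = np.linalg.solve(np.eye(2) - gamma * P, rbar)
```

Solving the linear system this way requires knowing $P$ and $\bar{r}$ explicitly, which is precisely what the learning procedure below avoids.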
3.2 Identification of the rewards
The transition rewards are unknown a priori and depend on the control objective. In terms perhaps more familiar to the reader, one could equivalently define a cost as the negative of the reward. To learn relevant rewards, consider the cost function associated with the control objective:
(4) $J = \sum_t \left( g_t + \lambda\, a_t^2 \right),$

with $g_t$ the measure of performance, e.g., the drag coefficient in the example below, and $a_t$ the action. The contribution of the control intensity to the cost, with respect to the measure of performance, is weighted by $\lambda$.
Let the immediate reward $r_t$ represent, at any given time $t$, the negative of the contribution to the cost of the transition from the present state $s$ to state $s'$, under an action $a$:

(5) $r_t = -\left( g_t + \lambda\, a_t^2 \right).$
The immediate reward is then high when the performance associated with the controlled system is good, and low otherwise. Instead of the reward associated with a particular trajectory in the original, infinite-dimensional, phase space, the average, trajectory-independent, reward associated with all trajectories leading to a given transition should be determined. The ergodicity assumption postulates the equivalence of temporal and ensemble averages. Under this assumption, the mean transition rewards, in the sense of the probability distribution of the transitions, are finally determined during the learning stage via:

(6) $r(s, s', a) \longleftarrow (1 - \alpha)\, r(s, s', a) + \alpha\, r_t,$
with $\alpha$ the learning rate. For the TRs to be associated with the average contribution to the cost function of the transition-action $(s \to s', a)$, the learning rate is set to $\alpha = 1 / n_{s s' a}$, where $n_{s s' a}$ corresponds to the number of times the transition-action has occurred so far during the learning process.
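Under this $1/n$ learning-rate schedule, the recursion above reduces to an empirical mean of the immediate rewards. A minimal sketch (state and action labels are arbitrary):

```python
from collections import defaultdict

# Running-average estimate of the transition rewards r(s, s', a):
# with learning rate 1/n, the estimate equals the empirical mean.
counts = defaultdict(int)
rewards = defaultdict(float)

def update_reward(s, s_next, a, r_immediate):
    key = (s, s_next, a)
    counts[key] += 1
    alpha = 1.0 / counts[key]
    rewards[key] += alpha * (r_immediate - rewards[key])

for r in [1.0, 3.0, 2.0]:
    update_reward(0, 1, 0, r)
# rewards[(0, 1, 0)] is now the empirical mean of the three samples, 2.0
```

Only transition-actions that actually occur get entries, which mirrors the sparsity of the reward tensor observed in the examples below.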
3.3 Reinforcement learning
To derive a control strategy, one needs to determine a policy which would give the best control action given the current state of the system, as known through the hash function. The rewards associated with an action when in a given state have been learned and this information is now used to derive a control policy to drive the system along transitions and actions associated with the largest rewards.
When the probabilities of transition from a state to another are known, the policy may be identified by means of a dynamic programming algorithm, Bellman1952 ; Mandl1974 . However, the distribution of transition probabilities and values are difficult to reliably evaluate since neither the control policy nor the transition probabilities are stationary during the learning stage. In this situation, Reinforcement Learning is a suitable class of methods for the control of Markov processes, Watkins1992 ; Powell2007 ; Lewis2009 .
Among these methods, the Q-learning approach relies on the estimation of the Q-factors, or action-values, which evaluate the expected reward of a state-action combination:

(7) $Q^\pi(s, a) = \bar{r}(s, a) + \gamma \sum_{s'} P(s' \mid s, a)\, V^\pi(s'),$

where $\bar{r}(s, a)$ is the empirical mean TR associated with applying the action $a$ when in the state $s$. From Eq. (3), the action-value is hence the expected average reward of applying $a$ while in state $s$ and subsequently following the policy $\pi$.
As stated previously, the transition probabilities cannot be accurately estimated. However, an iterative estimation of the Q-factors can be derived, Watkins1992 ; Lewis2009 ; Gosavi2011 . Letting the initial Q-factors be given, the Q-factor associated with a state $s$ and an action $a$ can be updated at time $t$ as follows:

(8) $Q_{t+1}(s, a) = (1 - \alpha_t)\, Q_t(s, a) + \alpha_t \left[ r_t + \gamma\, \max_{a'} Q_t(s', a') \right],$

where $\alpha_t$ is a learning factor. It can be shown that $Q_t$ converges to the true Q-factors when $t \to \infty$ if the following conditions hold, Watkins1992 : i) the TRs are bounded, ii) $0 \le \alpha_t < 1$, iii) $\sum_t \alpha_t = \infty$ and iv) $\sum_t \alpha_t^2 < \infty$.
The action-value $Q(s, a)$ increases when the reward associated with the pair $(s, a)$ is good, i.e., associated with a high TR, and decreases otherwise. To learn a good policy, the system is stimulated, in different states, with different actions so as to estimate the Q-factors. The control policy is then, for each state, the action associated with the largest Q-factor, Watkins1992 ; Lewis2009 .
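A self-contained sketch of tabular Q-learning with an $\epsilon$-greedy exploration on a toy two-state chain (the dynamics and all parameters are hypothetical, unrelated to the flow examples below; action 1 is built to be the better choice in both states):

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 2, 2, 0.9
Q = np.zeros((n_states, n_actions))

def step(s, a):
    """Toy dynamics: action 1 tends to move toward the rewarding state 1."""
    s_next = 1 if (a == 1 and rng.random() < 0.9) else int(rng.integers(n_states))
    reward = 1.0 if s_next == 1 else 0.0
    return s_next, reward

s = 0
for t in range(5000):
    # epsilon-greedy stimulation: explore 20% of the time, else act greedily.
    a = int(rng.integers(n_actions)) if rng.random() < 0.2 else int(np.argmax(Q[s]))
    s_next, r = step(s, a)
    alpha = 1.0 / (1.0 + 0.01 * t)          # decaying learning factor
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

policy = Q.argmax(axis=1)   # greedy policy: best action in each state
```

The learned policy selects action 1 in both states, i.e., the action with the larger expected reward, without ever estimating the transition probabilities explicitly.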
3.4 Robustness
A critical property of any realistic and practically useful control strategy is its resilience with respect to unpredictable events. These events encompass exogenous/external perturbations to the flow, sensor noise, actuator noise, etc. A control strategy robust to these perturbations is passive and brings the flow back to its nominal controlled state after the perturbation is gone. As will be demonstrated in the application example below, see section 4.2.3, the present strategy is robust thanks to two properties:
First, the state of the system, as estimated via the LSH, is discrete. The locality sensitivity property of LSHs implies that a small perturbation of the measurement vector will likely result in the same key as the unperturbed measurement. More precisely, the state estimation is strictly robust, with high probability, to any perturbation of sufficiently small energy relative to the quantization length.
Second, the proposed method is also robust against perturbations of the flow dynamics. The learning of the control strategy relies on an ensemble average of rewards and Q-factors, and thus results in an optimal control policy in the ensemble-mean sense, hence the robustness against noise.
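As a small numerical illustration of the first property, a measurement perturbation small compared with the quantization length typically leaves the LSH key, and hence the estimated state, unchanged (test vector and numerical values are hypothetical):

```python
import numpy as np

# Discreteness of the LSH state: a perturbation small with respect to the
# quantization length usually does not change the key.
x = np.array([0.6, 0.8])          # unit-norm test vector
delta = 1.0                       # quantization length

def key(y):
    return int(np.floor((x @ y) / delta))

y = np.array([1.0, 0.55])                     # projection 1.04 -> key 1
y_perturbed = y + np.array([0.02, -0.03])     # projection 1.028 -> still key 1
```

The estimated state only changes when the perturbation pushes the projection across a bin boundary, an event whose probability shrinks with the perturbation energy.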
4 Results
4.1 Dynamical system
To illustrate the methodology discussed above, we now consider the Lorenz 63 system, Lorenz1963 , defined by the following equations:

(9) $\dot{x} = \sigma (y - x) + u, \qquad \dot{y} = x (\rho - z) - y, \qquad \dot{z} = x y - \beta z,$

with the common parameters $\sigma = 10$, $\rho = 28$ and $\beta = 8/3$. The dynamics define a chaotic attractor, structured as two "wings" around two of the fixed points of the system. The state vector evolves on a wing, turning around a fixed point, before eventually jumping to the other wing. The control $u$ acts on the first component of the state vector. The chosen control objective is to remain on the "left" wing. The measure of performance is given by the distance between the state vector and the left fixed point of the system.

The observable is constructed from the time series of a single component of the state vector, regularly sampled in time. In this illustration, only the first equation is actuated, so that the Lorenz system is controlled only through the time derivative of the first component of its state vector. This mimics a realistic scenario, where sensors and actuators are distinct. Actuation values lie within a bounded symmetric range, uniformly discretized, leading to a finite number of different actuations to control the system. The cost function to minimize is given by Eq. (4) and the control command is penalized through the weight $\lambda$.
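A minimal forward-Euler sketch of the controlled system (9), with the standard parameter values and a hypothetical time step:

```python
import numpy as np

# Lorenz-63 integrator with an additive control u on the first equation.
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0

def lorenz_step(state, u, dt=0.01):
    x, y, z = state
    dx = sigma * (y - x) + u      # the action only enters this equation
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

state = np.array([1.0, 1.0, 1.0])
for _ in range(1000):
    state = lorenz_step(state, u=0.0)   # uncontrolled trajectory
```

In the actual control loop, `u` would be supplied at each step by the learned policy evaluated on the LSH state of the sampled observable.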
The embedding dimension is set according to the estimated correlation dimension. The first three singular vectors of a Hankel matrix, built on the measurements, are used as the test vectors of the LSHs. The quantization length is then chosen so as to lead to a small cardinality of the set of keys.
Once the test vectors are determined from the Hankel matrix, the states are identified via the LSHs, see Fig. 2. It is hence possible to infer both the dynamics of the system from the state transitions and the rewards associated with the objective. The mean transition probabilities from one state to another, after five time increments in order to improve visualization, are plotted in Fig. 2. As expected for the Lorenz system, the dynamics are seen to be driven by two main cycles, see Fig. 4. There are two statistically dominant sequences of transitions, cycling around all states between 1 (resp. 5) and 4 (resp. 8), corresponding to the right (resp. left) wing of the Lorenz attractor. States 9 to 11 and 12 to 14 represent sub-transitions from one main cluster to the other.
From the observations of the system, one learns the transition rewards matrix and the Q-factors matrix by repeatedly applying Eq. (6) and Eq. (8). The transition matrix is sparse, see Fig. 2, and many transitions hence hardly ever occur. Thus, the learned TRs and Q-factors matrices are also sparse, see Fig. 2 and Fig. 2. The difference between one wing and the other is clearly discernible in the rewards, the first block being associated with negative rewards while the second block is associated with positive rewards. By construction of the rewards, the use of a strong (expensive) command is also discouraged, as can be seen in the Q-factors, Fig. 2. The identified control strategy succeeds in staying on the left wing, see Fig. 3.
4.2 Twodimensional numerical flow
To further illustrate the methodology discussed above, consider a 2D laminar flow around a circular cylinder, in two situations: a fixed angle of the incoming flow, or one varying randomly in time.
4.2.1 Configuration of the test case
The Reynolds number of the flow is based on the cylinder diameter and the upstream flow velocity. Details of the simulation can be found in Lemaitre2003 . The observable is constructed from the time series of a single pressure sensor, regularly sampled in time. This sensor is located on the cylinder surface, at a fixed angle from the upstream stagnation point when the incidence is fixed. In this case, the pressure signal oscillates with a period of about 9 time units, and the drag with half that period.
Actuation, i.e., control of the flow, is achieved via blowing or suction through the whole cylinder surface. Actuation values lie within a bounded range, uniformly discretized from suction to blowing, leading to a finite number of different actuations to control the system. The cost function to minimize is given in Eq. (4): it corresponds to the drag induced by the cylinder (the measure of performance is the drag coefficient), penalized with the intensity of the command. The penalty weight was chosen so that the resulting command remains within the operating range of the actuator. The embedding dimension is set according to the estimated correlation dimension. The first five singular vectors of a Hankel matrix, built on the measurements, are used as the test vectors of the LSHs. The quantization length is then chosen so as to lead to a moderate cardinality of the set of keys.
Two situations are considered to illustrate the proposed control algorithm. First, the incident angle of the incoming flow is kept constant at its zero nominal value, Sec. 4.2.2. Alternatively, in Sec. 4.2.3, the angle is a random process, smooth in time, whose realizations range from -20 to +20 degrees around the nominal value with a uniform probability distribution, see Fig. 5.
4.2.2 Noiseless case: nominal incidence
Once the test vectors are determined from the Hankel matrix and the states are identified via the hash function, it is possible to infer the dynamics of the system and the associated rewards. The learned transition reward is plotted in Fig. 5. It clearly exhibits a maximum, which corresponds to the best compromise between a sufficient decrease of the drag and a reasonable increase of the command intensity. The estimated transition probabilities from one state to another are plotted in Fig. 6. In the present case, the dynamics are seen to be rather periodic, with a statistically dominant sequence of transitions cycling around all states between 1 and 8, see Fig. 7. Other states are "transient" states between two stages of the main cycle. For instance, a transition out of the main cycle can occur with a low probability, but the next transitions quickly bring the system back to it. Further analysis of the transition matrix can give more insights into the dynamics and the relevance of other sequences, e.g., via stability analysis, Kaiser2014 , symbolic dynamics based on keys, Lusseyran2008 , or the Kullback–Leibler entropy, Kaiser2014 .
From the observations of the system, one learns the transition rewards matrix and the Q-factors matrix by repeatedly applying Eq. (6) and Eq. (8). As in the Lorenz system case (see above in Sec. 4.1), the transition matrix is sparse, as can be appreciated from Fig. 6, and many transitions hence hardly ever occur. Thus, the learned TRs and Q-factors matrices are also sparse, see Fig. 6 and Fig. 6.
The performance of the control strategy is assessed in terms of a performance indicator defined as the difference between the time-averaged cost achieved by the learned policy and the time-averaged cost obtained with a NULL strategy (no actuation), scaled with the difference between the NULL cost and the cost of an optimal time-invariant control strategy, hereafter referred to as the ORA strategy:

(10) $\eta = \dfrac{\bar{J}_{\mathrm{NULL}} - \bar{J}}{\bar{J}_{\mathrm{NULL}} - \bar{J}_{\mathrm{ORA}}}.$
The evolution of the performance indicator as a function of the learning effort, defined as the number of measurements during the learning phase, is plotted in Fig. 8. The performance indicator increases with the amount of learning used for computing the TRs, but quickly reaches a plateau. The TR matrix being sparse, the information gathered during the learning stage focuses onto a limited number of unknowns and the Q-factors quickly converge.
As expected, the values computed with the learned policy are, on average, larger than the values computed with the ORA policy, see Tab. 1.
Policy         NULL   ORA   Present strategy
Average value  15.8   9.5   13.2
The resulting control command is plotted in Fig. 8 and is seen to exhibit an oscillatory behavior. The associated drag coefficient of the cylinder flow is plotted in Fig. 8 for the present approach, as well as for the NULL and ORA strategies. The identified control is seen to perform well: the drag coefficient for the identified control resembles the one given by the ORA control.
To illustrate the impact of the control on the system, the time-averaged pressure field around the cylinder is plotted in Fig. 9, both with and without control. When control is applied, the pressure difference between the upstream stagnation point and the rear cylinder vicinity is significantly reduced. Further insights about the control effect can be gained by examining Fig. 10, where the time-averaged streamlines are plotted. With control, i.e., with a negative normal velocity at the cylinder surface, small recirculation bubbles significantly weaken, or even vanish, and the separation of the boundary layer from the cylinder surface is postponed further downstream, reducing the effective width of the wake. The length of the recirculation bubble significantly drops from its value under the NULL command (no control), see Fig. 9, when the command identified by the present control approach is applied, see Fig. 9. Suction at the cylinder surface tends to slightly increase the viscous drag (thinner boundary layers with larger velocity gradients) but significantly decreases the width of the wake and the pressure defect at the back of the cylinder, producing a lower drag.
4.2.3 Measurements with noise
To investigate the robustness of our control strategy, the angle of the incident flow is made to vary randomly between -20 and +20 degrees around its nominal value, with a uniform probability distribution and a smooth time evolution, see Fig. 5 for a typical realization. This mimics a typical class of perturbations to the system at hand and allows the assessment of the robustness of the control strategy.
As in the noiseless case considered in Sec. 4.2.2 above, the transition probability, rewards and Q-factors matrices are all sparse (not shown for the sake of brevity). The performance indicator of the derived policy is depicted in Fig. 11. In contrast with the noiseless case, it is here strongly dependent on the learning effort. At early stages of the learning process, the influence of the noise is prominent and the dynamics are not properly captured, which ultimately leads to poor control strategies.
Similarly to Sec. 4.2.2, the values computed from the identified strategy are larger, on average, than the values from the ORA policy, see Tab. 2. This result indicates that the present approach is able to uncover an efficient control strategy, here achieving a significantly lower cost than an optimal time-invariant control strategy, even in a noisy environment.
Policy         NULL   ORA   Present strategy
Average value  6.8    3.0   8.0
The resulting control command is plotted in Fig. 11 and is seen to exhibit an oscillatory behavior. The associated drag coefficient of the cylinder flow is plotted in Fig. 11 for the present approach, as well as for the NULL and ORA strategies. Again, the identified control is seen to perform well and resembles that given by the ORA strategy.
5 Concluding remarks
This work has presented an experiment-oriented control strategy which requires neither prior knowledge of the physical system to be controlled nor significant computational resources. This strategy allows the learning of a control policy from scarce, pointwise sensors carrying very limited information on the system at hand. From the sensors' streaming data, a phase space is built and the measurements are convolved in real-time with a kernel based on hash functions. This yields a discrete, low-dimensional space in which the dynamics of the system are approximated. Ensemble-averaged rewards associated with transitions from one discrete state to another are estimated during an online learning sequence. They are directly related to the control objective and tend to sort state transitions based on their impact on the control cost function. A reinforcement learning algorithm is then used to derive the optimal control policy, promoting transitions associated with good rewards.
The resulting method is compliant with actual configurations where instrumentation is limited and spatially constrained. Owing to the use of kernel hash functions and effective state aggregation in a discrete phase space, the method runs in real-time and allows closed-loop control. Its discrete and ensemble-averaged nature also brings intrinsic robustness against perturbations in the flow.
This approach has been illustrated on two test cases. The control of a Lorenz system is achieved by measuring only one component of the state; to mimic a realistic scenario, the actuation is applied to a different component. The method is also illustrated on the two-dimensional flow around a circular cylinder. Measurements were provided by a single wall-mounted pressure sensor and actuation was achieved by blowing or suction of fluid at the cylinder surface. The drag coefficient was significantly reduced, reaching essentially the same performance as the control policy given by the ORA strategy.
More generally, the method presented in this work is readily applicable to physical systems with a causal link between the actuators and the cost functional as evaluated from the sensors. It does not rely on a prior model and instead learns directly from observing the system under stimulation by the actuators, hence being suitable for practical configurations. An identified limitation of the proposed approach is the number of keys one needs to consider when the relevant dynamics of the system are very rich (large number of states) and/or a very fine control law is required (large number of actions). In this situation, the number of entries of the Q-factors matrix grows and hence possibly requires more data for learning. Approximation techniques have however been used to alleviate this limitation, Alex_etal_MIT .
Current efforts concern the experimental control of the turbulent flow over an open cavity using the present approach and will be the subject of a subsequent publication. Further developments focus on the convolution kernels, an improved evaluation of suitable rewards and milder assumptions for the reinforcement learning of the control policy.
References
 [1] J. Gerhard, M. Pastoor, R. King, B.R. Noack, A. Dillmann, M. Morzynski, and G. Tadmor. Model-based control of vortex shedding using low-dimensional Galerkin models. AIAA Paper 2003-4262, 2003.
 [2] M. Bergmann and L. Cordier. Optimal control of the cylinder wake in the laminar regime by trust-region methods and POD reduced-order models. J. Comp. Phys., 227(16):7813–7840, 2008.
 [3] Z. Ma, S. Ahuja, and C.W. Rowley. Reduced-order models for control of fluids using the eigensystem realization algorithm. Theo. Comp. Fluid Dyn., 25(1-4):233–247, 2011.
 [4] W.T. Joe, T. Colonius, and D.G. MacMynowski. Feedback control of vortex shedding from an inclined flat plate. Theo. Comp. Fluid Dyn., 25(1-4):221–232, 2011.
 [5] L. Mathelin, L. Pastur, and O. Le Maître. A compressed-sensing approach for closed-loop optimal control of nonlinear systems. Theo. Comp. Fluid Dyn., 26(1-4):319–337, 2012.
 [6] L. Cordier, B.R. Noack, G. Tissot, G. Lehnasch, J. Delville, M. Balajewicz, G. Daviller, and R.K. Niven. Identification strategies for model-based control. Exp. Fluids, 54(8):1–21, 2013.
 [7] C. Lee, J. Kim, D. Babcock, and R. Goodman. Application of neural networks to turbulence control for drag reduction. Phys. Fluids, 9(6):1740–1747, 1997.
 [8] M.A. Kegerise, R.H. Cambell, and L.N. Cattafesta. Real-time feedback control of flow-induced cavity tones. Part 2: Adaptive control. J. Sound Vib., 307:924–940, 2007.
 [9] S.C. Huang and J. Kim. Control and system identification of a separated flow. Phys. Fluids, 20(10):101509, 2008.
 [10] A. Hervé, D. Sipp, P.J. Schmid, and M. Samuelides. A physics-based approach to flow control using system identification. J. Fluid Mech., 702:26–58, 2012.
 [11] N. Gautier, J.L. Aider, T. Duriez, B.R. Noack, M. Segond, and M.W. Abel. Closed-loop separation control using machine learning. J. Fluid Mech., 770:442–457, 2015.
 [12] S. Brunton and B. Noack. Closed-loop turbulence control: Progress and challenges. App. Mech. Rev., 67(5):050801, 2015.
 [13] M. Slaney and M. Casey. Locality-sensitive hashing for finding nearest neighbors [lecture notes]. IEEE Signal Process. Mag., 25(2):128–131, 2008.
 [14] E. Kaiser, B.R. Noack, L. Cordier, A. Spohn, M. Segond, M. Abel, G. Daviller, J. Östh, S. Krajnović, and R.K. Niven. Cluster-based reduced-order modelling of a mixing layer. J. Fluid Mech., 754:365–414, 2014.

 [15] P. Mandl. Estimation and control in Markov chains. Adv. App. Probab., pages 40–60, 1974.
 [16] C. Watkins and P. Dayan. Q-learning. Mach. Learn., 8(3-4):279–292, 1992.
 [17] A. Gosavi. Target-sensitive control of Markov and semi-Markov processes. Int. J. Control Autom., 9(5):941–951, 2011.
 [18] C.T. Lin and C.P. Jou. Controlling chaos by GA-based reinforcement learning neural network. IEEE Trans. Neural Netw., 10(4):846–859, 1999.
 [19] S. Gadaleta and G. Dangelmayr. Optimal chaos control through reinforcement learning. Chaos, 9(3):775–788, 1999.
 [20] P. Grassberger and I. Procaccia. Measuring the strangeness of strange attractors. Physica D, 9:189–208, 1983.
 [21] F. Takens. Detecting strange attractors in turbulence. In D.A. Rand and L.S. Young, editors, Dynamical Systems and Turbulence, volume 898 of Lect. Notes Math., pages 366–381, 1981.
 [22] J.L. Carter and M.N. Wegman. Universal classes of hash functions. In Proc. 9th Ann. ACM Theor. Comp., pages 106–112. ACM, 1977.
 [23] A. Andoni and P. Indyk. Nearoptimal hashing algorithms for approximate nearest neighbor in high dimensions. In Proc. 47th Ann. IEEE Found. Comp. Sci., pages 459–468. IEEE, 2006.
 [24] W. Johnson and J. Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. Contemp. Math., 26:189–206, 1984.
 [25] E.A. Novikov. Two-particle description of turbulence, Markov property, and intermittency. Phys. Fluids, 1(2):326–330, 1989.
 [26] C. Renner, J. Peinke, and R. Friedrich. Experimental indications for Markov properties of small-scale turbulence. J. Fluid Mech., 433:383–409, 2001.
 [27] R. Bellman. On the theory of dynamic programming. Proc. Natl. Acad. Sci. USA, 38(8):716, 1952.

 [28] W. Powell. Approximate Dynamic Programming: Solving the Curses of Dimensionality, volume 703. John Wiley & Sons, 2007.
 [29] F. Lewis and D. Vrabie. Reinforcement learning and adaptive dynamic programming for feedback control. IEEE Circuits Syst. Mag., 9(3):32–50, 2009.
 [30] E.N. Lorenz. Deterministic nonperiodic flow. J. Atmos. Sci., 20(2):130–141, 1963.
 [31] O.P. Le Maître, R.H. Scanlan, and O.M. Knio. Estimation of the flutter derivatives of an NACA airfoil by means of Navier–Stokes simulation. J. Fluids Struct., 17(1):1–28, 2003.
 [32] F. Lusseyran, L.R. Pastur, and C. Letellier. Dynamical analysis of an intermittency in an open cavity flow. Phys. Fluids, 20(11):114101, 2008.
 [33] A.A. Gorodetsky, S. Karaman, and Y.M. Marzouk. Efficient high-dimensional stochastic optimal motion control using tensor-train decomposition. In Robotics: Science and Systems XI, Sapienza University of Rome, Italy, July 13–17, 2015.