A stochastic game theory approach for the prediction of interfacial parameters in two-phase flow systems

08/06/2019 ∙ by Zhuoran Dang, et al. ∙ Purdue University

Predicting interfacial area properties in two-phase flow systems is challenging. In this paper, a conceptual idea of using single-agent reinforcement learning to model the behavior of two-phase flows and of the interfacial area concentration (IAC) is proposed. The basic assumption behind this application is that the development of a two-phase flow can be treated as a stochastic process with the Markov property. The design of simple Markov games is described and approaches to solving the games are adapted. The experiment shows that both the steam fraction and the IAC prediction processes converge. The model predictions are compared with experimental results and follow the same tendency, although some oscillations exist. The performance and the prediction results can be improved by elaborating the game environment setup.


Code Repositories

Stochastic-Game-Env

Paper: https://arxiv.org/abs/1908.02750



I Introduction

The prediction of the characteristics of two-phase flow is essential for the safety of two-phase flow systems such as the reactor pressure vessel in a nuclear power plant. Many codes have been developed based on fundamental two-phase flow theories and models. Take the TRACE code [11] as an example, which is considered one of the most elaborate codes to date. TRACE was developed based on the two-fluid model [5] and the interfacial area transport equation (IATE) [7] and is capable of predicting the characteristics of boiling two-phase flows [2]. The two-fluid model uses two sets of partial differential equations and a series of constitutive relations to describe the two phases. The IATE is derived from the idea of the Boltzmann transport equation and can dynamically predict the transitions of two-phase flows. These two models are considered the most accurate available, but they are complicated to solve because they contain many nonlinear PDEs. Over the years, additional correlations and models have been developed to further elaborate them, making them even more complex to compute.

Nowadays computational power has increased significantly and its cost keeps falling, so some problems in thermodynamics and two-phase flow have been solved with model-free, machine learning approaches. The advantage of these approaches is that they are easy to set up, as long as the problem meets the fundamental prerequisites. For example, studies on two-phase flow regime classification have used the self-organizing map (SOM) [8], the support vector machine (SVM) [13], and artificial neural networks (ANN). These studies use different machine learning techniques, but the common idea is to determine the key parameters that describe the flow characteristics and then to use these parameters to classify the flow regimes. There are also model-based machine learning approaches, which can be both accurate and stable because they are theory based and able to handle complex problems; such examples are readily found in the literature and are not discussed further here.

In this paper, the author proposes a new solution concept that uses a stochastic game theory approach for the prediction of interfacial parameters. This approach is considered applicable because the changes and transitions of a two-phase flow satisfy the Markov property: the next state of the two-phase flow is determined only by the current state. This paper describes a simple, basic stochastic game design and a Q-learning test.

II Reinforcement learning and stochastic games

Reinforcement learning is about learning from interaction how to behave in order to achieve a goal [10]. The reinforcement learning problem is formulated as a Markov decision process (MDP). An MDP is a discrete-time, stochastic control process that provides a mathematical framework for decision making [1]. The key elements of an MDP can be represented as a tuple (S, A, P, R), where S, A, P, and R are the state space, the action space, the transition probability function, and the reward function of the player, respectively. The player interacts with the environment by observing the state, taking actions, and receiving rewards. Through this process a policy is formed, that is, a stochastic rule by which the player selects actions as a function of the state. The objective is to maximize the amount of reward received over time [10]. A stochastic game is a generalized concept that combines repeated games and Markov decision processes [4]; an MDP is a one-player stochastic game [4]. Detailed theory of reinforcement learning and stochastic games is not discussed in this paper.
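As a concrete illustration of the (S, A, P, R) tuple and of a stochastic policy, the toy example below (purely hypothetical, not the game defined later in this paper) stores the transition probabilities and rewards of a two-state MDP as Python dictionaries and samples one transition.

import random

# Hypothetical two-state, two-action MDP for illustration only.
S = ["s0", "s1"]   # state space
A = ["a0", "a1"]   # action space

# P[(s, a)] lists (next_state, probability) pairs; probabilities sum to 1.
P = {
    ("s0", "a0"): [("s0", 0.7), ("s1", 0.3)],
    ("s0", "a1"): [("s1", 1.0)],
    ("s1", "a0"): [("s0", 0.5), ("s1", 0.5)],
    ("s1", "a1"): [("s1", 1.0)],
}

# R[(s, a)] is the expected immediate reward for taking action a in state s.
R = {("s0", "a0"): 0.0, ("s0", "a1"): 1.0, ("s1", "a0"): -1.0, ("s1", "a1"): 0.5}

# A policy is a stochastic rule mapping each state to a distribution over actions.
policy = {"s0": {"a0": 0.5, "a1": 0.5}, "s1": {"a0": 0.1, "a1": 0.9}}

def sample_step(s):
    """Pick an action from the policy, then sample the next state from P."""
    a = random.choices(list(policy[s]), weights=list(policy[s].values()))[0]
    next_states, probs = zip(*P[(s, a)])
    s_next = random.choices(next_states, weights=probs)[0]
    return a, R[(s, a)], s_next

print(sample_step("s0"))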

This paper aims to propose a one-player stochastic game environment design in which the player's goal is to predict the integral quantities of the interfacial parameters, i.e., the total steam fraction and the total interfacial area concentration. However, a multi-player game can also be set up using this method. Two approaches can be considered in designing a multi-player game: 1) each player represents one group of bubbles (group-1 and group-2 bubbles as defined by [9]); 2) each player represents one phase, either gas/steam or water.

III Two-phase parameters prediction game

III-A Game setup and models

The game design is inspired by the two-phase flow experiment setup provided by Zivi [14]. Consider a steady-state, steam-water two-phase flow in an annulus duct of finite length. The inner part of the annulus duct is a heater rod that provides a constant heat flux, and the outer part of the duct is adiabatic. Suppose that the water entering the duct is at the thermal saturation state. Due to the heat addition, steam emerges at a certain location and develops along the duct as the quality of the two-phase mixture x(z) changes, where z is the axial location along the duct. The flow enters and exits the annulus duct with very low velocities, so the effects of kinetic energy dissipation and frictional pressure drop are negligible.

Based on this game design, the steam fraction is estimated using the correlation provided by Zivi [14],

(1)

where x, ρ_g, and ρ_f are the steam quality, the steam density, and the water density, respectively, and D denotes the fraction of the water entrained in the steam in the form of droplets. This parameter can be used to quantify the flow regime, where D = 0 is pure annular flow and D = 1 can be considered bubbly flow. From Eq. (1), the local averaged void fraction ⟨α⟩ is estimated from the above four parameters, and it also depends on the axial location. Thus, the state of the steam fraction game is simplified to the following structure,

S_α = (x, ρ_g, ρ_f, D, z).     (2)
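As a rough numerical illustration of the kind of estimate behind Eq. (1), the snippet below evaluates Zivi's void-fraction correlation for the simplified case without droplet entrainment (D = 0), where the slip ratio reduces to (ρ_f/ρ_g)^(1/3); the entrainment-corrected form used in the game is not reproduced here, and the property values are placeholders.

def zivi_void_fraction(x, rho_g, rho_f):
    """Zivi (1964) void fraction for the no-entrainment case (D = 0):
    alpha = 1 / (1 + ((1 - x) / x) * (rho_g / rho_f) ** (2 / 3))."""
    if x <= 0.0:
        return 0.0
    return 1.0 / (1.0 + ((1.0 - x) / x) * (rho_g / rho_f) ** (2.0 / 3.0))

# Placeholder saturated steam-water properties near atmospheric pressure.
rho_g, rho_f = 0.6, 958.0  # kg/m^3
for x in (0.01, 0.05, 0.10):
    print(f"x = {x:.2f} -> alpha = {zivi_void_fraction(x, rho_g, rho_f):.3f}")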

The calculation of the interfacial area concentration (IAC) uses the correlation developed by Kocamustafaogullari et al. [6],

(3)

where ⟨α⟩, ⟨j_f⟩, ⟨j_g⟩, dp/dz, ρ_f, σ, and D_h are the local averaged void fraction, the area-averaged superficial water velocity, the area-averaged superficial air velocity, the pressure drop rate along the duct, the water density, the surface tension, and the hydraulic diameter, respectively. In this case, ⟨α⟩ can be taken from the estimate of Eq. (1), and the parameters shared with Eq. (2) take the same values. The parameters that are nearly fixed are not included in the IAC state. Thus, the state expression for the IAC is simplified as follows,

(4)

The game is a finite, episodic task because the two-phase flow travels a finite length. In each step, the action can change one parameter in the state by choosing one of three options: move to a larger value, move to a smaller value, or stay the same. The reward at each step is based on the difference between the value calculated from the state and the true value,

R_α = 1 - |⟨α⟩_calc - ⟨α⟩_true|,     (5)
R_a = C - |a_i,calc - a_i,true|,     (6)

where C is a positive constant. There is a trick in the setup of the rewards. The initial reward for each state-action pair is set to 0. If the rewards did not include the +1 offset (i.e., R_α = -|⟨α⟩_calc - ⟨α⟩_true|), they would always be negative. Then, once a state-action pair had been visited, its value would likely be updated to a negative number (e.g., in the Q-learning algorithm), and when that state was visited again the agent would always prefer an action that had not yet been updated. This is a flawed game design, and the result is non-convergent in most cases. The constant C in the IAC reward serves the same purpose as the +1 in the steam fraction reward: it compensates the reward and avoids the scenario in which visited state-action pairs become negative. It should be noted that this constant affects the performance and the prediction result by influencing the choice of actions, so it should be set and tuned properly during the game setup.
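A minimal sketch of how the per-parameter actions (increase, decrease, stay) and the offset reward of Eq. (5) could be wired together is given below; the initial values, the change ratio, and the class and method names are assumptions for illustration, not details taken from the paper.

ACTIONS = ("increase", "decrease", "stay")

class SteamFractionGame:
    """Sketch of the steam fraction game environment (assumed structure)."""

    def __init__(self, true_alpha, change_ratio=0.05):
        self.true_alpha = true_alpha      # experimental (true) steam fraction
        self.change_ratio = change_ratio  # assumed relative step per action
        self.reset()

    def reset(self):
        # Assumed initial parameter values: quality, densities, entrainment fraction.
        self.params = {"x": 0.05, "rho_g": 0.6, "rho_f": 958.0, "D": 0.5}
        return dict(self.params)

    def apply(self, name, action):
        """Change one state parameter according to the chosen action."""
        if action == "increase":
            self.params[name] *= 1.0 + self.change_ratio
        elif action == "decrease":
            self.params[name] *= 1.0 - self.change_ratio
        # "stay" leaves the parameter unchanged.

    def reward(self, predicted_alpha):
        # Eq. (5): the +1 offset keeps the reward of visited pairs non-negative.
        return 1.0 - abs(predicted_alpha - self.true_alpha)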

The agent explores the environment and establishes a policy for selecting actions that change the state. The policy is a stochastic rule by which the agent selects actions. The agent's goal in the game is to predict the steam fraction or the IAC by changing and optimizing the values of the state that are used to estimate the steam fraction and the IAC at the next state.

IV Experiments

In this section, the validity and robustness of the game design are tested with the Q-learning algorithm [12] (ε-greedy, with ε = 0.001, a step size of 0.001, and a discount factor of 1.0). The procedure of the experiment, following [10], is given in Algorithm 1.

Initialize Q(S, A) = 0 for all state-action pairs.
loop for each episode
     Initialize S.
     loop for each step of the episode
          loop for each parameter in the state
               if a random number < ε then
                    A ← a random action.
               else
                    A ← argmax_a Q(S, a).
               Take action A, update the parameter.
               Observe R, S'.
               Q(S, A) ← Q(S, A) + α[R + γ max_a Q(S', a) - Q(S, A)].
               S ← S'.
Algorithm 1 Apply Q-learning to the game
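For reference, a tabular Q-learning loop corresponding to Algorithm 1 might look like the following sketch; only the hyperparameter values (ε = 0.001, step size 0.001, γ = 1.0) are taken from the text, while the environment interface (reset, step, actions) and the state encoding are assumptions.

import random
from collections import defaultdict

def q_learning(env, n_episodes, epsilon=0.001, step_size=0.001, gamma=1.0):
    """Tabular Q-learning for an episodic game exposing reset()/step()/actions()."""
    Q = defaultdict(float)  # Q[(state, action)], initialized to 0

    for _ in range(n_episodes):
        state = env.reset()
        done = False
        while not done:
            actions = env.actions(state)
            # epsilon-greedy action selection
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            # off-policy temporal-difference update
            best_next = 0.0 if done else max(Q[(next_state, a)] for a in env.actions(next_state))
            Q[(state, action)] += step_size * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q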

Fig. 1 and Fig. 2 show that the off-policy Q-learning converges in the games for both the steam fraction and the IAC prediction. In both games, each element of the state is updated separately at each step, possibly with different actions chosen. The steam fraction/IAC at the state is calculated after all elements of the state have been updated. From the two figures, the convergence of the steam fraction prediction is faster than that of the IAC because there are fewer elements in the state of the steam fraction game.

Fig. 3 shows the steam fraction prediction result, and Fig. 4 gives the changes of the key parameters during the prediction. It should be noted that the ratio by which the state elements are changed can affect the final prediction results. Using a large ratio may leave too few reasonable states, although it can reduce the time and space complexity. From the figures, the predictions show a good tendency with some oscillations. These oscillations are caused by two factors in the game setup: 1) the parameters used for the steam fraction and IAC calculations are discrete; 2) the game setup is not elaborate enough because it includes only a limited set of models. From the convergence of the policy training and the agreement in tendency between the predictions and the experimental results, it can be concluded that this approach works. It is expected that with more elaborate designs the predictions can become more accurate and the application range can be extended. Fig. 5 and Fig. 6 give the IAC prediction results and the changes of the key parameters, respectively. In this experiment, the IAC prediction also shows large oscillations at some positions.

Fig. 1: Q-learning Train scores on steam fraction predictions.
Fig. 2: Q-learning Train scores on IAC predictions.
Fig. 3: Steam fraction prediction results.
Fig. 4: Parameter changes in steam fraction prediction.
Fig. 5: IAC prediction results.
Fig. 6: Parameter changes in IAC prediction.

V Conclusion

In this paper, a conceptual idea of using single-agent reinforcement learning to predict the behavior of two-phase flows and the IAC is proposed. The idea is realized by developing a stochastic game based on the empirical correlations for the steam fraction and the IAC. In the game, the parameters of the correlations are treated as the elements of the state, and they are updated (increased, decreased, or kept the same) according to the chosen actions in each step. The game is tested using Q-learning, and the results show good agreement with experimental results. This approach can be further developed by elaborating the game environment setup.

VI Acknowledgements

The author is currently a PhD student in the thermal-hydraulics and reactor safety laboratory (TRSL) at Purdue University, under the supervision of Dr. Mamoru Ishii. The author would like to deeply thank him for his support and guidance in the theory of thermo-fluid dynamics and two-phase flow.

References

  • [1] R. Bellman (1957) A Markovian decision process. Journal of Mathematics and Mechanics, pp. 679–684.
  • [2] M. S. Bernard (2014) Implementation of the interfacial area transport equation in TRACE for boiling two-phase flows. Ph.D. Thesis, Pennsylvania State University.
  • [3] J. Du, Z. Dang, Y. Zhao, and M. Ishii (under review) Experimental study of local interfacial parameters under vibration in subcooled boiling flow. International Journal of Heat and Mass Transfer.
  • [4] A. Greenwald, K. Hall, and R. Serrano (2003) Correlated Q-learning. In ICML, Vol. 3, pp. 242–249.
  • [5] M. Ishii and K. Mishima (1984) Two-fluid model and hydrodynamic constitutive relations. Nuclear Engineering and Design 82 (2–3), pp. 107–126.
  • [6] G. Kocamustafaogullari, W. Huang, and J. Razi (1994) Measurement and modeling of average void fraction, bubble size and interfacial area. Nuclear Engineering and Design 148 (2–3), pp. 437–453.
  • [7] G. Kocamustafaogullari and M. Ishii (1995) Foundation of the interfacial area transport equation and its closure relations. International Journal of Heat and Mass Transfer 38 (3), pp. 481–493.
  • [8] Y. Mi, M. Ishii, and L. Tsoukalas (1998) Vertical two-phase flow identification using advanced instrumentation and neural networks. Nuclear Engineering and Design 184 (2–3), pp. 409–420.
  • [9] X. Sun (2001) Two-group interfacial area transport equation for a confined test section.
  • [10] R. S. Sutton, A. G. Barto, et al. (1998) Introduction to Reinforcement Learning. Vol. 2, MIT Press, Cambridge.
  • [11] TRACE V5.0 (2007) Theory manual: field equations, solution methods and physical models. U.S. Nuclear Regulatory Commission.
  • [12] C. J. C. H. Watkins and P. Dayan (1992) Q-learning. Machine Learning, pp. 279–292.
  • [13] Y. Zhou, F. Chen, and B. Sun (2008) Identification method of gas-liquid two-phase flow regime based on image multi-feature fusion and support vector machine. Chinese Journal of Chemical Engineering 16 (6), pp. 832–840.
  • [14] S. Zivi (1964) Estimation of steady-state steam void-fraction by means of the principle of minimum entropy production. Journal of Heat Transfer 86 (2), pp. 247–251.