Particle Swarm Optimization for Generating Interpretable Fuzzy Reinforcement Learning Policies

10/19/2016, by Daniel Hein et al., Technische Universität München

Fuzzy controllers are efficient and interpretable system controllers for continuous state and action spaces. To date, such controllers have been constructed manually or trained automatically, either using expert-generated problem-specific cost functions or by incorporating detailed knowledge about the optimal control strategy. Neither requirement is typically met in real-world reinforcement learning (RL) problems. In such applications, online learning is often prohibited for safety reasons because it requires exploration of the problem's dynamics during policy training. We introduce a fuzzy particle swarm reinforcement learning (FPSRL) approach that can construct fuzzy RL policies solely by training parameters on world models that simulate real system dynamics. These world models are created by employing an autonomous machine learning technique that uses previously generated transition samples of a real system. To the best of our knowledge, this approach is the first to relate self-organizing fuzzy controllers to model-based batch RL. FPSRL is therefore intended for domains where online learning is prohibited, where system dynamics are relatively easy to model from previously generated default policy transition samples, and where a relatively easily interpretable control policy is expected to exist. The efficiency of the proposed approach on problems from such domains is demonstrated using three standard RL benchmarks, i.e., mountain car, cart-pole balancing, and cart-pole swing-up. Our experimental results demonstrate high-performing, interpretable fuzzy policies.


1 Introduction

This work is motivated by typical industrial application scenarios. Complex industrial plants, like wind or gas turbines, have already been operated in the field for years. For these plants, low-level control is realized by dedicated expert-designed controllers, which guarantee safety and stability. Such low-level controllers are constructed with respect to the plant's subsystem dependencies, which can be modeled using expert knowledge and complex mathematical abstractions, such as first-principle models and finite element methods. Examples of low-level controllers include self-organizing fuzzy controllers, which have been regarded in control theory as efficient and interpretable (Casillas et al., 2003) system controllers for decades (Procyk and Mamdani, 1979; Scharf and Mandve, 1985; Shao, 1988).

However, we observed that high-level control is usually implemented by default control strategies, provided by best-practice approaches or by domain experts who maintain the system based on personal experience and knowledge of the system's dynamics. One reason for the lack of autonomously generated real-world controllers is that modeling system dependencies for high-level control by dedicated mathematical representations is a complicated and often infeasible approach. Further, modeling such representations by closed-form differentiable equations, as required by classical controller design, is even more complicated. Since in many real-world applications such equations cannot be found, training high-level controllers has to be performed on reward samples from the plant. Reinforcement learning (RL) (Sutton and Barto, 1998) is capable of yielding high-level controllers based solely on available system data.

Generally, RL is concerned with the optimization of a policy for a system that can be modeled as a Markov decision process. This policy maps system states to actions in the system. Repeatedly applying an RL policy generates a trajectory in the state-action space (Section 2). Learning such RL controllers in a way that produces interpretable high-level controllers is the scope of this paper and the proposed approach. Especially for real-world industry problems this is of high interest, since interpretable RL policies are expected to yield higher acceptance from domain experts than black-box solutions (Maes et al., 2012).

A fundamental difference between classical control theory and machine learning approaches such as RL lies in how these techniques address stability and reward function design. In classical control theory, stability is the central property of a closed-loop controller. For example, Lyapunov stability theory analyzes the stability of a solution near a point of equilibrium and is widely used to design controllers for nonlinear systems (Lam and Zhou, 2007). Moreover, fault detection and robustness are of interest for fuzzy systems (Yang et al., 2013, 2014a, 2014b). The problems addressed by classical fuzzy control theory, i.e., stability, fault detection, and robustness, make such controllers well suited to serve as low-level system controllers. For such controllers, reward functions specifically designed for the purpose of parameter training are essential.

In contrast, the second view on defining reward functions, which is typically applied in high-level system control, is to sample from a system's latent underlying reward dynamics and subsequently use this data for machine learning. Herein, we adopt this second view, because RL is capable of utilizing sampled reward data for controller training. Note that the goal of RL is to find a policy that maximizes the trajectory's expected accumulated reward, referred to as the return value, without explicitly considering stability.

Several approaches for autonomous training of fuzzy controllers have proven to produce remarkable results on a wide range of problems. Jang (1993) introduced ANFIS, a fuzzy inference system implemented using an adaptive network framework. This approach has been frequently applied to develop fuzzy controllers. For example, ANFIS has been successfully applied to the cart-pole (CP) balancing problem (Saifizul et al., 2006; Hanafy, 2011; Kharola and Gupta, 2014). During the ANFIS training process, training data must represent the desired controller behavior, which makes this process a supervised machine learning approach. However, the optimal controller trajectories are unknown in many industry applications.

Feng (2005a, b) applied particle swarm optimization (PSO) to generate fuzzy systems that balance the CP system and approximate a nonlinear function. Debnath et al. (2013) optimized Gaussian membership function parameters for nonlinear problems and showed that parameter tuning is much easier with PSO than with conventional methods because knowledge about derivatives and complex mathematical equations is not required. Kothandaraman and Ponnusamy (2012) applied PSO to tune adaptive neuro-fuzzy controllers for a vehicle suspension system. However, similar to ANFIS, the PSO fitness functions in all these contributions have been dedicated expert formulas or mean squared error functions that depend on correctly classified samples.

To the best of our knowledge, self-organizing fuzzy rules have never been combined with a model-based batch RL approach. In the proposed fuzzy particle swarm reinforcement learning (FPSRL) approach, different fuzzy policy parameterizations are evaluated by testing the resulting policy on a world model using a Monte Carlo method (Sutton and Barto, 1998). The combined return value of a number of action sequences is the fitness value that is maximized iteratively by the optimizer.

In batch RL, we consider applications where online learning approaches, such as classical temporal-difference learning (Sutton, 1988), are prohibited for safety reasons, since these approaches require exploration of the system dynamics. In contrast, batch RL algorithms generate a policy based on existing data and deploy this policy to the system after training. In this setting, either the value function or the system dynamics is trained using historic operational data comprising a set of four-tuples of the form (observation, action, reward, next observation), which is referred to as a data batch. Research from the past two decades (Gordon, 1995; Ormoneit and Sen, 2002; Lagoudakis and Parr, 2003; Ernst et al., 2005) suggests that such batch RL algorithms satisfy real-world system requirements, particularly when involving neural networks (NNs) modeling either the state-action value function (Riedmiller, 2005a, b; Schneegass et al., 2007a, b; Riedmiller et al., 2009) or the system dynamics (Bakker, 2004; Schäfer, 2008; Depeweg et al., 2016). Moreover, batch RL algorithms are data-efficient (Riedmiller, 2005a; Schäfer et al., 2007) because the batch data is utilized repeatedly during the training phase.

FPSRL is a model-based approach, i.e., training is conducted on an environment approximation referred to as world model. Generating a world model from real system data in advance and training a fuzzy policy offline using this model has several advantages. (1) In many real-world scenarios, data describing system dynamics is available in advance or is easily collected. (2) Policies are not evaluated on the real system, thereby avoiding the detrimental effects of executing a bad policy. (3) Expert-driven reward function engineering yielding a closed-form differentiable equation utilized during policy training is not required, i.e., it is sufficient to sample from the system’s reward function and model the underlying dependencies using supervised machine learning.

The remainder of this paper is organized as follows. The methods employed in our framework are reviewed in Sections 2-4. Specifically, the problem of finding policies via RL is formalized as an optimization task. In addition, we review Gaussian-shaped membership functions and describe the proposed parameterization approach. Finally, PSO, the optimization heuristic we use to search for optimal policy parameters, and its different extensions are presented. An overview of how the proposed FPSRL approach is derived from these methods is given in Section 5.

Experiments using three benchmark problems, i.e., the mountain car (MC) problem, the CP balancing (CPB) task, and the more complex CP swing-up (CPSU) challenge, are described in Section 6. In this section, we also explain the setup process of the world models and introduce the applied fuzzy policies.

Experimental results are discussed in Section 7. The results demonstrate that the proposed FPSRL approach can solve the benchmark problems while producing human-readable and understandable policies. To benchmark FPSRL, we compare the obtained results to those of neural fitted Q iteration (NFQ) (Riedmiller, 2005a, b), an established RL technique. Note that this technique was chosen to describe the advantages and limitations of the proposed method compared to a well-known, widely available standard algorithm.

2 Model-based Reinforcement Learning

In biological learning, an animal interacts with its environment and attempts to find action strategies that maximize its perceived accumulated reward. This notion is formalized in RL, an area of machine learning where the acting agent is not explicitly told which actions to implement. Instead, the agent must learn the best action strategy from the observed responses of the environment to the agent's actions. For the most common (and most challenging) RL problems, an action affects both the next reward and all subsequent rewards (Sutton and Barto, 1998). Examples of such delayed effects are the nonlinear change in position when a force is applied to a body with mass or the delayed heating in a combustion engine.

In the RL formalism, the agent interacts with the target system in discrete time steps t = 0, 1, 2, .... At each time step, the agent observes the system's state s_t ∈ S and applies an action a_t ∈ A, where S is the state space and A is the action space. Depending on s_t and a_t, the system transitions to a new state s_{t+1} and the agent receives a real-valued reward r_{t+1}. Herein, we focus on deterministic systems where the state transition and reward can be expressed as functions g with s_{t+1} = g(s_t, a_t) and r with r_{t+1} = r(s_t, a_t, s_{t+1}), respectively. The desired solution to an RL problem is an action strategy, i.e., a policy, that maximizes the expected cumulative reward, i.e., the return.

In our proposed setup, the goal is to find the best policy among a set of policies that is spanned by a parameter vector x ∈ X. Herein, a policy corresponding to a particular parameter value x is denoted by π(·; x). For state s_t, the policy outputs action a_t = π(s_t; x). The policy's performance when starting from s_t is measured by the return R, i.e., the accumulated future rewards obtained by executing the policy. To account for increasing uncertainties when accumulating future rewards, the reward for future time steps is weighted by γ^k, where 0 < γ < 1. Furthermore, adopting a common approach, we include only a finite number T of future rewards in the return (Sutton and Barto, 1998), which is expressed as follows:

R(s_t; x) = Σ_{k=0}^{T-1} γ^k · r(s_{t+k}, π(s_{t+k}; x), s_{t+k+1}).   (1)

The discount factor γ is selected such that, at the end of the time horizon T, the last reward accounted for still carries a predefined non-negligible weight γ^{T-1}. The overall state-independent policy performance F(x) is obtained by averaging over all starting states s ∈ S, using their respective probabilities p(s) as weight factors. Thus, optimal solutions to the RL problem are parameter vectors x* with

x* ∈ arg max_{x ∈ X} F(x),   where F(x) = Σ_{s ∈ S} p(s) · R(s; x).   (2)

In optimization terminology, the policy performance function F is referred to as a fitness function.

For many real-world problems, the cost of executing a potentially bad policy is prohibitive. This is why, e.g., pilots learn to fly in a flight simulator instead of a real aircraft. Similarly, in model-based RL (Busoniu et al., 2010), the real-world state transition function is approximated using a model, which can be a first-principle model or created from previously gathered data. By substituting this model for the real-world state transition function in Eq. (1), we obtain a model-based approximation of the true fitness function in Eq. (2). In this study, we employ models based on NNs. However, the proposed method can be extended to other models, such as Bayesian NNs (Depeweg et al., 2016) and Gaussian process models (Rasmussen and Williams, 2006).
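The model-based fitness evaluation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; all function and variable names are ours, and the policy, world model, and reward model are passed in as arbitrary callables.

```python
def model_based_fitness(policy, model, reward_fn, start_states, T, gamma):
    """Approximate a policy's fitness by rolling out the learned world
    model from a set of start states and averaging the discounted
    returns (model-based analogue of Eqs. (1) and (2))."""
    returns = []
    for s in start_states:
        ret, discount = 0.0, 1.0
        for _ in range(T):
            a = policy(s)            # action from the (fuzzy) policy
            s_next = model(s, a)     # world-model state transition
            ret += discount * reward_fn(s, a, s_next)
            discount *= gamma
            s = s_next
        returns.append(ret)
    return sum(returns) / len(returns)
```

In FPSRL, `policy` would be the parameterized fuzzy policy, and `model` and `reward_fn` would be the NN approximations of the system dynamics and reward function.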

3 Fuzzy Rules

Fuzzy set theory was introduced by Zadeh (1965). Based on this theory, Mamdani and Assilian (1975) introduced a so-called fuzzy controller specified by a set of linguistic if-then rules whose membership functions can be activated independently and produce a combined output computed by a suitable defuzzification function.

In a system with n inputs, a single output, and C rules, the i-th fuzzy rule can be expressed as follows:

R_i:  IF s is m_i THEN o_i,   for i = 1, ..., C,   (3)

where s denotes the input vector (the environment state in our setting), m_i is the membership function of a fuzzy set of the input vector in the premise part, and o_i is a real number in the consequent part.

In this paper, we apply Gaussian membership functions (Wang and Mendel, 1992). This very popular type of membership function yields smooth outputs, is local but never produces zero activation, and forms a multivariate Gaussian function by applying the product over all membership dimensions. We define the membership function of each rule as follows:

m_i(s) = Π_{j=1}^{n} exp( −(s_j − c_{i,j})² / (2σ_{i,j}²) ),   (4)

where each factor is a parameterized Gaussian with its center at c_{i,j} and width σ_{i,j}.

The parameter vector x ∈ X, where X is the set of valid Gaussian fuzzy parameterizations, is of size C(2n + 1) + 1 and contains

x = (c_{1,1}, σ_{1,1}, ..., c_{1,n}, σ_{1,n}, o_1, ..., c_{C,1}, σ_{C,1}, ..., c_{C,n}, σ_{C,n}, o_C, ζ).   (5)

The output is determined using the following formula:

π(s; x) = tanh( ζ · Σ_{i=1}^{C} m_i(s) o_i / Σ_{i=1}^{C} m_i(s) ),   (6)

where the hyperbolic tangent limits the output to between −1 and 1, and the parameter ζ can be used to change the slope of the function.
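This inference step can be sketched compactly. The snippet below is an illustrative implementation of the Gaussian membership (Eq. (4)) and the defuzzified output (Eq. (6)); the `(centers, widths, output)` rule representation and all names are our assumptions, not the paper's code.

```python
import math

def gaussian_membership(s, centers, widths):
    """Rule activation m_i(s): product of per-dimension Gaussians, Eq. (4)."""
    return math.prod(
        math.exp(-((x - c) ** 2) / (2.0 * w ** 2))
        for x, c, w in zip(s, centers, widths)
    )

def fuzzy_policy(s, rules, slope=1.0):
    """Defuzzified policy output, Eq. (6): activation-weighted average of the
    rule outputs o_i, squashed to [-1, 1] by tanh with adjustable slope."""
    acts = [gaussian_membership(s, c, w) for (c, w, _) in rules]
    outs = [o for (_, _, o) in rules]
    weighted = sum(a * o for a, o in zip(acts, outs)) / sum(acts)
    return math.tanh(slope * weighted)
```

Because the Gaussians never produce exactly zero activation, the normalizing denominator is always positive and the output is defined everywhere in the state space.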

4 Particle Swarm Optimization

The PSO algorithm is a population-based, non-convex, stochastic optimization heuristic. Generally, PSO can operate on any search space that is a bounded sub-space of a finite-dimensional vector space (Kennedy and Eberhart, 1995).

The position of each particle in the swarm represents a potential solution of the given problem. The particles fly iteratively through the multidimensional search space, which is referred to as the fitness landscape. After each movement, each particle receives a fitness value for its new position. This fitness value is used to update a particle’s own velocity vector and the velocity vectors of all particles in a certain neighborhood.

At each iteration t, particle i remembers the best local position y_i(t) it has visited so far (including its current position). Furthermore, particle i knows the neighborhood's best position

ŷ_i(t) = arg max_{y_j(t), j ∈ N_i} f(y_j(t)),   (7)

found so far by any one particle in its neighborhood N_i (including itself). The neighborhood relations between particles are determined by the swarm's population topology and are generally fixed, irrespective of the particles' positions. Note that a ring topology (Eberhart et al., 1996) is used in the experiments described in Section 6.

Let x_i(t) denote the position of particle i at iteration t. The position of a particle is changed in each iteration by adding the velocity vector v_i(t + 1) to the particle's position vector:

x_i(t + 1) = x_i(t) + v_i(t + 1),   (8)

where the initial position x_i(0) is distributed uniformly over the search space.

The velocity vector contains both a cognitive component and a social component that represent the attraction to the given particle's best position and the neighborhood's best position, respectively. The velocity vector is calculated as follows:

v_{ij}(t + 1) = w · v_{ij}(t) + c_1 r_{1j}(t) [y_{ij}(t) − x_{ij}(t)] + c_2 r_{2j}(t) [ŷ_{ij}(t) − x_{ij}(t)],   (9)

where w is the inertia weight factor, v_{ij}(t) and x_{ij}(t) are the velocity and position of particle i in dimension j, and c_1 and c_2 are positive acceleration constants used to scale the contribution of the cognitive component [y_{ij}(t) − x_{ij}(t)] and the social component [ŷ_{ij}(t) − x_{ij}(t)], respectively. The factors r_{1j}(t) and r_{2j}(t) are random values sampled from a uniform distribution to introduce a stochastic element into the algorithm.

The best position of a particle for a maximization problem at iteration t is calculated as follows:

y_i(t + 1) = x_i(t + 1) if f(x_i(t + 1)) > f(y_i(t)), and y_i(t) otherwise,   (10)

where f in our framework is the fitness function given in Eq. (2) and the particle positions represent the policy's parameters from Eq. (5).

Pseudocode for the PSO algorithm applied in our experiments (Section 6) is provided in Appendix A.
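Since the appendix pseudocode is not reproduced here, the core update loop of Eqs. (8)-(10) can be sketched as follows. For brevity this sketch uses the simpler global-best (star) topology rather than the ring topology used in the experiments, and all parameter defaults and names are illustrative assumptions.

```python
import random

def pso_maximize(fitness, dim, bounds, n_particles=20, iters=100,
                 w=0.7, c1=1.4, c2=1.4, seed=0):
    """Minimal PSO sketch for a maximization problem."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                       # personal bests y_i
    pf = [fitness(x) for x in X]
    g = max(range(n_particles), key=lambda i: pf[i])
    G, gf = P[g][:], pf[g]                      # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (G[d] - X[i][d]))       # Eq. (9)
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))  # Eq. (8), clamped
            f = fitness(X[i])
            if f > pf[i]:                                      # Eq. (10)
                P[i], pf[i] = X[i][:], f
                if f > gf:
                    G, gf = X[i][:], f
    return G, gf
```

In FPSRL, `fitness` would be the model-based policy evaluation and each particle position a fuzzy policy parameterization.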

5 Fuzzy Particle Swarm Reinforcement Learning

The basis for the proposed FPSRL approach is a data set D that contains state transition samples gathered from a real system. These samples are represented by tuples (s, a, r, s′), meaning that in state s, action a was applied, resulting in a transition to state s′ and yielding reward r. D can be generated using any (even a random) policy prior to policy training.

Then, we generate world models with inputs (s, a) to predict s′, using data set D. It is advantageous to learn the differences of the state variables and to train a single model per state variable separately to yield better approximative quality: each model g̃_k approximates the difference s′_k − s_k of the k-th state variable. Then, the resulting state is calculated according to s′ ≈ s + (g̃_1(s, a), ..., g̃_m(s, a)). The reward is also given in D; thus, the reward function can be approximated using a model r̃.
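The delta-state modeling scheme can be sketched as follows. The paper trains one NN per state variable; here the trained models are abstracted as arbitrary callables, and the helper names are ours.

```python
def delta_targets(batch, k):
    """Build (input, target) pairs for the k-th state variable's delta
    model from a batch of (s, a, r, s_next) transition tuples.  Learning
    the difference s'_k - s_k instead of s'_k directly tends to improve
    approximation quality, as noted above."""
    inputs = [list(s) + list(a) for (s, a, _, s_next) in batch]
    targets = [s_next[k] - s[k] for (s, _, _, s_next) in batch]
    return inputs, targets

def predict_next_state(s, a, delta_models):
    """Compose the per-variable delta models into s' = s + delta(s)."""
    return [s_k + g_k(s, a) for s_k, g_k in zip(s, delta_models)]
```

Any regression learner can be fit to the `(inputs, targets)` pairs; the composition step is independent of the model class.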

For the next FPSRL step, an assumption about the number of rules per policy is required. In our experiments, we started with a minimal rule set for each benchmark and calculated the respective performances. Then, we increased the number of rules and compared the performance with those of the policies with fewer rules. This process was repeated until performance with respect to the dynamic models was satisfactory. An intuitive representation of the maximal achievable policy performance given a certain discount factor with respect to a particular model can be computed by adopting a trajectory optimization technique, such as PSO-P (Hein et al., 2016), prior to FPSRL training.

During optimization, each particle's position in the PSO represents a parameterization x of the fuzzy policy π(·; x). The fitness of a particle is calculated by generating trajectories using the world model, starting from a fixed set of initial benchmark states (Section 2). A schematic representation of the proposed FPSRL framework is given in Fig. 1.

Figure 1: Schematic visualization of the proposed FPSRL approach. From left to right: PSO evaluates parameter vectors x of a predefined fuzzy rule representation π(·; x). For each given set of parameters, a model-based RL evaluation is performed by first computing an action vector for a state (Eq. (6)). Then, the approximative performance of this state-action tuple is computed by predicting both the resulting state and the transition's reward using NNs. Repeating this procedure for the resulting state and its successor states generates an approximative trajectory through the state space. Accumulating the rewards using Eq. (1), the return is computed for each state, which is eventually used to compute the fitness value that drives the swarm toward high-performing policy parameterizations (Eq. (2)). Alternative techniques that could replace PSO and NNs are presented in the background.

Note that we present the results of FPSRL using NNs as world models and PSO as the optimization technique. In the considered problem domain, i.e., continuous, smooth, and deterministic system dynamics, NNs are known to serve as adequate world models. Given a batch of previously generated transition samples, the NN training process is data-efficient and training errors are excellent indicators of how well the model will perform in model-based RL training. Nevertheless, for different problem domains, alternative types of world models might be preferable. For example, Gaussian processes (Rasmussen and Williams, 2006) provide a good approximation of the mean of the target value, and this technique indicates the level of confidence about this prediction. This feature may be of value for stochastic system dynamics. A second alternative modeling technique is the use of regression trees (Breiman et al., 1984). While typically lacking data efficiency, regression tree predictions are less affected by nonlinearities perceived by system dynamics because they do not rely on a closed-form functional approximation.

We employed PSO in our study because the population-based optimizer does not require any gradient information about its fitness landscape. PSO utilizes neighborhood information to systematically search for valuable regions in the search space. Note that gradient-descent based methods or evolutionary algorithms are alternative techniques.

6 Experiments

6.1 Mountain Car

In the MC benchmark, an underpowered car must be driven to the top of a hill (Fig. 2). This is achieved by first building up sufficient potential energy by driving in the direction opposite to the final driving direction. The system is fully described by a two-dimensional state space representing the car's position and velocity.

Figure 2: Mountain car benchmark. The task is to first build up momentum by driving to the left in order to subsequently reach the top of the hill on the right.

We conducted the MC experiments using the freely available CLSquare software (http://ml.informatik.uni-freiburg.de/research/clsquare), an RL benchmark system that applies the fourth-order Runge-Kutta method to approximate the closed-loop dynamics. The task for the RL agent is to find a sequence of force actions that drives the car up the hill, which is achieved when the goal position is reached.

At the start of each episode, the car's position is initialized randomly within its admissible interval. Subsequent to each state transition, the agent receives a reward of

(11)

When the car reaches the goal position, its position becomes fixed, its velocity becomes zero, and the agent perceives the maximum reward in each following time step regardless of the applied actions.

6.2 Cart-pole Balancing

The CP experiments described in the following two sections were also conducted using the CLSquare software. The objective of the CPB benchmark is to apply forces to a cart moving on a one-dimensional track to keep a pole hinged to the cart in an upright position (Fig. 3). Here, the four Markov state variables are the pole angle, the pole angular velocity, the cart position, and the cart velocity. These variables describe the Markov state completely, i.e., no additional information about the system's past behavior is required. The task for the RL agent is to find a sequence of force actions that prevents the pole from falling over (Fantoni and Lozano, 2002).

Figure 3: Cart-pole benchmark. The task is to balance the pole in the upright position while moving the cart to the goal position by applying positive or negative force to the cart.

In the CPB task, the angle of the pole and the cart's position are restricted to fixed intervals. Once the system has left the restricted area, the episode is considered a failure, i.e., the velocities become zero, the cart's position and pole's angle become fixed, and the system remains in the failure state for the rest of the episode. The RL policy can apply bounded positive or negative force actions to the cart at fixed time intervals.

The reward function for the balancing problem is given as follows:

(12)

Based on this reward function, the primary goal of the policy is to avoid reaching the failure state. The secondary goal is to drive the system to the goal state region and keep it there for the rest of the episode.

Since the CP problem is symmetric around the upright pole position, an optimal action a for a state s corresponds to an optimal action −a for the mirrored state −s. Thus, the parameter search process can be simplified: it is only necessary to search for optimal parameters for one half of the fuzzy policy rules. The other half of the parameter sets can be constructed by negating the membership functions' mean parameters and the respective output values of the policy's components. Note that the membership function span width of the fuzzy rules (parameter σ in Eq. (4)) is not negated because the membership functions must preserve their shapes.
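The symmetry construction above amounts to a simple parameter transformation. The helper below is hypothetical and assumes a `(centers, widths, output)` rule parameterization in the spirit of Eqs. (4)-(6).

```python
def mirror_rule(centers, widths, output):
    """Build the symmetric counterpart of a fuzzy rule: negate the
    membership centers and the rule output, but keep the width
    parameters unchanged so the Gaussian shapes are preserved."""
    return [-c for c in centers], list(widths), -output
```

Given the optimized half of the rule set, applying `mirror_rule` to each rule yields the full symmetric policy, halving the number of free parameters the optimizer must search.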

6.3 Cart-pole Swing-up

The CPSU benchmark is based on the same system dynamics as the CPB benchmark. In contrast to the CPB benchmark, the position of the cart and the angle of the pole are not restricted. Consequently, the pole can swing through, which is an important property of CPSU. Since the pole's angle is initialized anywhere in the full circle, it is often necessary for the policy to swing the pole several times from side to side to gain sufficient energy to erect the pole and receive the highest reward.

In the CPSU setting, the policy can apply bounded positive or negative force actions to the cart. The reward function for the problem is given as follows:

(13)

This is similar to the reward function for the CPB benchmark but does not contain any penalty for failure states, which terminate the episode when reached.

6.4 Neural Network World Models

We conducted policy training on NN world models, which yielded approximative fitness functions (Section 2). For these experiments, we created one NN for each state variable. Prior to training, the respective data sets were split into blocks of 80%, 10%, and 10% (training, validation and generalization sets, respectively). While the weight updates during training were computed by utilizing the training sets, the weights that performed best given the validation sets were used as training results. Finally, those weights were evaluated using the generalization sets to rate the overall approximation quality on unseen data.

The MC NNs were trained with a data set D containing tuples from trajectories generated by applying random actions to the benchmark dynamics. The start states for these trajectories were sampled uniformly, i.e., at a random position on the track with zero velocity. The following three NNs were trained to approximate the MC task:

Similarly, for the CP dynamic model state we created the following four networks:

An approximation of the next state is given by the following formula:

s̃′ = s + (g̃_1(s, a), ..., g̃_m(s, a)).   (14)

The result of this formula can subsequently be used to approximate the state transition's reward by

r̃(s, a, s̃′).   (15)

For the training sets of both CP benchmarks, the samples originate from trajectories of 100 (CPB) and 500 (CPSU) state transitions generated by a random walk on the benchmark dynamics. The start states for these trajectories were sampled uniformly from the respective admissible start regions of the CPB and CPSU benchmarks.

We conducted several experiments to investigate the effect of different data set sizes and different network complexities. The results give a detailed impression of the minimum amount of data required to successfully apply the proposed FPSRL approach to different benchmarks and of the adequate NN complexity for each data batch size. The experiments were conducted with network complexities of one, two, and three hidden layers with 10 hidden neurons each and arctangent activation functions. For training, we used the Vario-Eta algorithm (Neuneier and Zimmermann, 2012). Training the networks can be executed in parallel and requires only a couple of minutes. A detailed overview of the approximation performance of the resulting models, the FPSRL rules created with these models, and a comparison with non-interpretable policies generated by NFQ from the same data sets is given in Tables 1, 2, and 3. The mean squared errors of the normalized output variables (mean = 0, standard deviation = 1) are reported with respect to the generalization data sets.

6.5 Policy Representations

With the proposed FPSRL approach, we search for the parameterization of a fuzzy policy formed by a certain number of rules. The performance of an FPSRL policy is related to the number of rules because more rules generally allow a more sophisticated reaction to system states. On the other hand, a higher number of rules requires more parameters to be optimized, which makes the optimizer's search problem more difficult. In addition, a complex set of rules tends to be difficult or even impossible to interpret. Thus, we determined that two rules are sufficient for the MC and CPB benchmarks, while adequate performance on the CPSU benchmark is only achievable with a minimum of four rules. The output of the FPSRL policies is continuous, although a semi-discrete output can be obtained by increasing the slope parameter in Eq. (6).

We compared FPSRL policy training and its performance by applying NFQ to the same problems using the same data sets and approximative models. NFQ was chosen because it is a well-established, widely applied, and well-documented RL methodology. We used the NFQ implementation from the RL Teachingbox toolbox (freely available at https://sourceforge.net/projects/teachingbox). In this paper, we did not aim to claim that the proposed FPSRL approach is superior to the best RL algorithms in terms of performance; thus, NFQ was selected to show both the degree of difficulty of our benchmarks and the advantages and limitations of the proposed method. Recent developments in deep RL have produced remarkable results with image-based online RL benchmarks (Silver et al., 2014; Van Hasselt et al., 2016), and future studies may reveal that their performance with batch-based offline problems is superior to that of NFQ. Nevertheless, these methods do not attempt to produce interpretable policies.

7 Results

7.1 Mountain Car

We performed 10 NFQ training procedures for the MC benchmark using the setup described in Appendix B. After each NFQ iteration, the latest policy was tested on the world model to compute an approximation of the real performance. The policy yielding the best fitness value thus far was saved as an intermediate solution. To evaluate the true performance of the NFQ policies, we computed the true fitness value by applying the policies to the mathematical MC dynamics.

The difficulties in the MC benchmark are the discontinuity in the velocity dimension when reaching the goal and the rather long horizon required to observe the effects of the applied actions. Because of the first problem, it is difficult to model the goal area given the limited number of samples that reach the goal under a random policy. Training errors lead to a situation where the models do not correctly represent the state transitions at the goal position, where the velocity suddenly becomes zero. Subsequently, during FPSRL training, the evaluation of policy candidates results in a situation where the car is driven to the goal area and kept there by applying the correct forces, which leads to high-reward transitions. This problem could be mitigated by incorporating external knowledge about the goal area, which would result in a more convenient NN training process. Here, we explicitly did not want to incorporate expert knowledge about the benchmarks. Instead, we wanted to demonstrate a purely data-driven autonomous learning example. The results given in Table 1 show that, despite these difficulties (even with small data batch sizes), well-performing policies can be learned using both FPSRL and NFQ.

For the MC benchmark and a discount factor (resulting from ), we consider a policy with a performance of or greater to be a successful solution to the benchmark problem. A policy with such performance can drive the car up the hill from any initial state in fewer than 200 time steps.
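
For reference, a minimal simulation of the classic mountain car dynamics (the standard Sutton-and-Barto formulation; the exact variant and constants used in the paper may differ) illustrates both why the car cannot drive up the hill directly and why a simple bang-bang policy succeeds:

```python
import math

def mc_step(x, v, a):
    """One step of the classic mountain car dynamics (Sutton & Barto constants)."""
    v = min(max(v + 0.001 * a - 0.0025 * math.cos(3 * x), -0.07), 0.07)
    x = min(max(x + v, -1.2), 0.6)
    if x == -1.2:
        v = 0.0  # inelastic collision with the left wall
    return x, v

def run(policy, steps=1000, x=-0.5, v=0.0):
    """Return the number of steps until the goal (x >= 0.5) is reached, or None."""
    for t in range(steps):
        x, v = mc_step(x, v, policy(x, v))
        if x >= 0.5:
            return t + 1
    return None

# Pushing right constantly never escapes the valley (too little thrust) ...
direct = run(lambda x, v: 1.0)
# ... while accelerating in the direction of the current velocity pumps
# energy into the oscillation until the car clears the hill.
pump = run(lambda x, v: 1.0 if v >= 0 else -1.0)
```

The energy-pumping behavior of the second policy is exactly the "simplistic but high-performing" strategy discussed for the learned fuzzy rules.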

One way to visualize fuzzy policies is to plot the respective membership functions and analyze the produced output for sample states. A graphical representation of a policy for the MC benchmark is given in Fig. 4. After some consideration, we were able to understand the policy's output for each state we examined.
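
As a sketch of how such a rule set maps a state to an action, the following assumes a Takagi-Sugeno-style policy with Gaussian membership functions, where each rule's activation (the product of its per-dimension membership grades) weights a constant rule action and the output is the normalized weighted sum; the paper's exact parameterization may differ:

```python
import math

def gaussian(s, center, width):
    """Membership grade of state component s for one fuzzy set."""
    return math.exp(-((s - center) ** 2) / (2 * width ** 2))

def fuzzy_policy(state, rules):
    """Each rule is (centers, widths, action). Activation is the product of
    per-dimension membership grades; the output is the activation-weighted
    mean of the rule actions."""
    num, den = 0.0, 0.0
    for centers, widths, action in rules:
        activation = 1.0
        for s, c, w in zip(state, centers, widths):
            activation *= gaussian(s, c, w)
        num += activation * action
        den += activation
    return num / den if den > 0 else 0.0

# Two hypothetical MC rules over (position, velocity), implementing
# "if moving left, push left" and "if moving right, push right":
rules = [((-0.5, -0.04), (0.5, 0.03), -1.0),
         ((-0.5,  0.04), (0.5, 0.03),  1.0)]
```

With both position centers equal, the position dimension has little influence on the output, mirroring the observation made for the learned policy in Fig. 4.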

Figure 4: Fuzzy rules for the MC benchmark (membership functions plotted in blue, example state position plotted in red with a gray area for the respective activation grade). Both rules' activations are maximal at nearly the same position , which implies that the -dimension has only a minor influence on the policy's output. This observation fits the fact that, for the MC benchmark, a simplistic but high-performing policy exists, i.e., accelerate the car in the direction of its current velocity. Although this trivial policy yields good performance for the MC problem, better solutions exist. For example, a policy that stops driving to the left earlier, at a certain position, reaches the goal in fewer time steps, which yields a higher average return. The depicted policy implements this advantageous solution, as shown in the example section for state .
Data         Models (generalization error)       Policies
Batch size   1 layer   2 layers   3 layers                    FPSRL    NFQ
1,000        4.67e-5   3.55e-5    3.05e-6        selected    -41.98   -43.23
             6.97e-3   3.54e-3    7.26e-3        mean        -41.99   -44.87
             4.54e-1   1.46e-1    1.61e-1        std           0.01     1.33
10,000       1.18e-5   3.34e-7    2.01e-6        selected    -42.22   -43.47
             4.62e-3   3.48e-3    7.40e-5        mean        -42.69   -45.73
             1.72e-2   2.54e-4    6.04e-7        std           0.46     2.90
100,000      1.55e-5   1.55e-7    2.88e-7        selected    -41.99   -43.12
             1.10e-2   3.50e-4    5.15e-5        mean        -41.93   -43.28
             1.01e-3   2.09e-6    5.85e-8        std           0.11     1.22
Table 1: MC results (left to right): (1) data: number of state transitions obtained from random trajectories on the benchmark dynamics; (2) models: generalization errors of the best NN models we were able to produce given a certain amount of data and a pre-defined network complexity; (3) policies: performance on the real benchmark dynamics of the different policy types, trained/selected according to their performance on the models to the left. For each policy setting, 10 training experiments were performed to obtain statistically meaningful results. The results for different data batch sizes show that the MC benchmark dynamics are rather easy to model using NNs. In addition, models with significantly greater errors on the generalization sets were still sufficient for training a fuzzy policy using FPSRL and for selecting a well-performing policy from NFQ.

7.2 Cart-pole Balancing

The CPB benchmark has two different discontinuities in its dynamics, which make the modeling process more difficult than in the MC benchmark case. The first discontinuity occurs when the cart leaves the restricted state space and ends up in the failure state, i.e., as soon as or , the cart becomes fixed at its current position (both velocities and become zero). The second discontinuity appears when the cart enters the goal region. In this region, the reward switches from to , which is a rather small change compared to the failure-state reward of . In addition to the difficulty of modeling discrete changes with NNs, this task becomes even more complicated if samples for these transitions are rare. In contrast to the difficulties in modeling the benchmark dynamics, a rather simple policy can balance the pole without leaving the restricted state space. With the discount factor (), we consider policies that yield a performance of or greater to be successful.

The task for FPSRL was to find a parameterization for two fuzzy rules. Here, we used 100 particles and an out-of-the-box PSO setup (Appendix B). The training employed 1,000 start states uniformly sampled from (Table 2). Note that a data batch size of 100,000 sample transitions was required to build models of adequate approximation quality for training a model-based RL policy. Models trained with 1,000 or 10,000 sample transitions could not correctly approximate the effects that occur when entering the failure-state area. Further, they incorrectly predicted possibilities of escaping the failure state and balancing the pole in subsequent time steps. The model-based FPSRL technique exploited these weaknesses and produced policies that perform well on the models but demonstrated poor performance on the real benchmark dynamics.
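
The fitness that PSO maximizes for each candidate rule parameterization can be sketched as the mean discounted return of world-model rollouts from the test start states. In the following, `model_step` is a hypothetical stand-in for the learned NN transition/reward model, not the paper's implementation:

```python
def fitness(policy, model_step, start_states, horizon, gamma):
    """Mean discounted return of the policy over model rollouts
    from all start states."""
    total = 0.0
    for s in start_states:
        ret, discount = 0.0, 1.0
        for _ in range(horizon):
            s, r = model_step(s, policy(s))  # predicted next state and reward
            ret += discount * r
            discount *= gamma
        total += ret
    return total / len(start_states)
```

Because the rollouts use only the model, any systematic model error (such as a spurious escape from the failure state) is visible to the optimizer and can be exploited, which is exactly the failure mode described above.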

A visual representation of one of the resulting fuzzy policies is given in Fig. 5. This example illustrates a situation where the potential problems of a policy can be observed via visual inspection, which is a significant advantage of interpretable policies.

In contrast to FPSRL, NFQ could produce well-performing non-interpretable policies even with small data batch sizes. Note that the same weak models used for FPSRL training were used to determine which iteration produced the best policy during NFQ training with 1,000 episodes. In our experiments, we observed that even models of poor approximation quality when simulating the benchmark dynamics are useful for NFQ policy selection, because NFQ never observed the models during training and was therefore not prone to exploiting their weaknesses.

Figure 5: Fuzzy rules for the CPB benchmark. The visualization of a fuzzy policy can be useful for revealing the weaknesses of a given set of rules. For example, despite the presented policy being the result of successful FPSRL training and yielding high average returns for all test states, states at the boundary of the allowed cart position would result in failed episodes. The depicted example shows that state would activate rule 1; thus, it would accelerate the cart further in the positive direction and eventually end up in a failure state. Therefore, by examining only the rule set, elementary weaknesses of the policy can be identified and the test state set can be adapted appropriately.
Data         Models (generalization error)       Policies
Batch size   1 layer   2 layers   3 layers                    FPSRL    NFQ
1,000        1.57e-7   1.37e-7    1.07e-7        selected     -9.03    -1.35
             4.62e-2   6.03e-2    8.51e-3        mean        -14.59    -1.82
             4.32e-8   8.29e-8    1.29e-7        std           5.34     0.54
             4.33e-2   1.33e-2    1.03e-1
             2.09e-2   2.58e-2    1.11e-2
10,000       5.95e-9   3.79e-8    2.84e-8        selected     -3.29    -0.99
             3.68e-2   1.07e-2    5.08e-3        mean         -3.30    -1.18
             9.98e-9   8.12e-7    4.82e-8        std           0.02     0.23
             5.18e-2   4.16e-2    4.02e-2
             1.22e-2   4.75e-4    6.17e-4
100,000      5.73e-9   2.43e-8    2.69e-8        selected     -1.31    -1.81
             3.55e-2   1.25e-2    9.93e-3        mean         -1.31    -2.03
             2.91e-8   3.44e-8    1.41e-7        std         8.97e-4    0.24
             2.83e-2   2.43e-2    1.30e-2
             5.86e-3   1.08e-4    9.03e-4
Table 2: Cart-pole balancing results. The experiments show that modeling variables containing nonlinearities is difficult and requires an adequate amount of sample data. Since both the pendulum and cart velocities suddenly become zero when the failure state is reached, the modeling process requires a certain number of these events in the training data to model this effect correctly. The results for a batch size of 1,000 show that a model that is not applicable to model-based RL can still be used for policy selection with a model-free RL technique such as NFQ. As the models' errors decrease with increasing data batch sizes, FPSRL becomes increasingly capable of finding well-performing interpretable policies. Note that we observed that NFQ performance can even degrade as the data batch size increases.

7.3 Cart-pole Swing-up

Compared to the MC and CPB benchmarks, the results for the CPSU benchmark show a completely different picture in terms of performance and the training process. Despite CPB and CPSU sharing the same underlying mathematical transition dynamics, they differ in two important aspects. First, discontinuities in the state transitions do not occur, owing to the absence of a failure-state area. Second, the planning horizon for a successful policy is significantly longer. While the latter makes it particularly difficult to find a solution by applying standard NFQ, the former makes CPSU a good example of the strength of the proposed FPSRL approach.

NFQ's performance decreased dramatically for the CPSU problem (Table 3). For this benchmark with (), solutions achieving a performance of or greater on a set of 1,000 benchmark states uniformly sampled from were considered successful. Policies exhibiting such performance can swing up more than of the given test states. In our experiments, none of the NFQ training runs produced a successful policy.

In contrast, the proposed FPSRL approach could find a parameterization for successful policies using four fuzzy rules by assessing their performance on world models trained with data batch sizes of 10,000 or greater. For a data batch size of 1,000, the data set contained far too few transition samples with the goal-area reward to model this area correctly. However, the extremely high errors obtained on the generalization set during model training are excellent indicators of this weakness.

Figure 6 shows how even more complex fuzzy policies can be visualized and help make RL policies interpretable.

Figure 6: Fuzzy rules for the CPSU benchmark. Even with four rules, fuzzy policies can be visualized in an easily interpretable way. By inspecting the prototype cart-pole diagrams for each rule, two basic concepts can be identified for accelerating in each direction. First (Rule 1 (4)): the cart is on the left (right) and moving further to the left (right), while the pole is simultaneously falling to the right (left); the cart is then accelerated towards the right (left). Second (Rule 2 (3)): the cart is between the center and the right (left) and the pole is hanging down; the cart is then accelerated towards the right (left). Both prototypes are utilized to realize the complex task of swinging the pole up. Balancing the pole while the cart is centered around is realized via fuzzy interaction of these prototype rules, as shown in the example in the last row.
Data         Models (generalization error)       Policies
Batch size   1 layer   2 layers   3 layers                    FPSRL      NFQ
1,000        2.02e-4   2.61e-6    3.07e-6        selected   -157.49   -153.59
             2.93e-3   4.65e-4    5.78e-4        mean       -156.53   -156.43
             2.27e-5   1.44e-5    1.85e-5        std           2.30      1.64
             9.85e-4   9.90e-5    1.13e-3
             5.00      5.07       5.06
10,000       3.23e-6   2.17e-6    2.31e-6        selected    -34.03   -134.82
             9.86e-5   7.88e-5    3.65e-4        mean        -53.82   -153.63
             3.06e-6   1.48e-6    2.08e-6        std          12.01      6.69
             1.13e-5   8.83e-6    3.39e-5
             1.68e-1   5.05e-2    9.42e-2
100,000      2.00e-6   3.07e-6    2.56e-6        selected    -32.42   -150.93
             2.63e-4   5.62e-4    3.38e-4        mean        -53.22   -152.66
             8.83e-6   1.23e-5    4.81e-5        std          11.17      2.26
             2.47e-5   6.46e-5    2.28e-5
             1.95e-1   4.76e-3    7.77e-3
Table 3: Cart-pole swing-up results. High errors on the generalization set when training the reward function with a data batch size of less than 10,000 clearly indicate the absence of an adequate number of transition samples describing the effects observed when reaching the goal area. Smooth and easy dynamics in the other dimensions make it rather easy to model the CPSU dynamics and subsequently use them for model-based RL. Note that the long planning horizon required in this benchmark made it impossible to learn successful policies with standard NFQ.

8 Conclusion

The traditional way to create self-organizing fuzzy controllers either requires an expert-designed fitness function according to which the optimizer finds the optimal controller parameters or relies on detailed knowledge regarding the optimal controller policy. Either requirement is difficult to satisfy when dealing with real-world industrial problems. However, data gathered from the system to be controlled using some default policy are available in many cases.

The FPSRL approach proposed herein can use such data to produce high-performing and interpretable fuzzy policies for RL problems. Particularly for problems where system dynamics are rather easy to model from an adequate amount of data and where the resulting RL policy can be expected to be compact and interpretable, the proposed FPSRL approach might be of interest to industry domain experts.

The experimental results obtained with three standard RL benchmarks have demonstrated the advantages and limitations of the proposed model-based method compared with the well-known model-free NFQ approach. The results obtained with the CPB problem reveal an important limitation of FPSRL, i.e., training on weak approximation models. The proposed approach can exploit the weaknesses of such models, which can result in poor performance when the policies are evaluated on the real dynamics. Modeling techniques that provide a measure of uncertainty in their predictions, such as Gaussian processes or Bayesian NNs, could overcome these problems. Recent developments in modeling stochastic dynamic systems (Depeweg et al., 2016) may provide not only an approximation of the mean of the next system state but also an uncertainty estimate for transitions in the state-action space.

In addition, continuous state and action spaces, as well as long time horizons, do not appear to introduce obstacles to the training of fuzzy policies. The resulting policies obtained with the CPSU benchmark performed significantly better than those generated by the standard NFQ.

However, one of the most significant advantages of the proposed method over other RL methods is the fact that fuzzy rules can be easily and conveniently visualized and interpreted. We have suggested a compact and informative approach to present fuzzy rule policies that can serve as a basis for discussion with domain experts.

The application of the proposed FPSRL approach in industry settings could prove to be of significant interest because, in many cases, data from systems are readily available and interpretable fuzzy policies are favored over black-box RL solutions, such as Q-function-based model-free approaches.

Acknowledgment

The project this report is based on was supported with funds from the German Federal Ministry of Education and Research under project number 01IB15001. The sole responsibility for the report’s contents lies with the authors.

The authors would like to thank Dragan Obradovic and Clemens Otte for their insightful discussions and helpful suggestions.

References

  • Bakker (2004) Bakker, B., 2004. The state of mind: Reinforcement learning with recurrent neural networks. Ph.D. thesis, Leiden University, Netherlands.
  • Breiman et al. (1984) Breiman, L., Friedman, J., Olshen, R., Stone, C., 1984. Classification and Regression Trees. CRC Press, Boca Raton, FL.
  • Busoniu et al. (2010) Busoniu, L., Babuska, R., De Schutter, B., Ernst, D., 2010. Reinforcement Learning and Dynamic Programming Using Function Approximation. CRC Press.
  • Casillas et al. (2003) Casillas, J., Cordon, O., Herrera, F., Magdalena, L., 2003. Interpretability improvements to find the balance interpretability-accuracy in fuzzy modeling: an overview. In: Interpretability issues in fuzzy modeling. Springer, pp. 3–22.
  • Debnath et al. (2013) Debnath, S., Shill, P., Murase, K., 2013. Particle swarm optimization based adaptive strategy for tuning of fuzzy logic controller. International Journal of Artificial Intelligence & Applications 4 (1), 37–50.
  • Depeweg et al. (2016) Depeweg, S., Hernández-Lobato, J. M., Doshi-Velez, F., Udluft, S., 2016. Learning and policy search in stochastic dynamical systems with bayesian neural networks. arXiv preprint arXiv:1605.07127.
  • Eberhart et al. (1996) Eberhart, R., Simpson, P., Dobbins, R., 1996. Computational intelligence PC tools. Academic Press Professional, Inc., San Diego, CA, USA.
  • Ernst et al. (2005) Ernst, D., Geurts, P., Wehenkel, L., 2005. Tree-based batch mode reinforcement learning. Journal of Machine Learning Research 6, 503–556.
  • Fantoni and Lozano (2002) Fantoni, I., Lozano, R., 2002. Non-linear control for underactuated mechanical systems. Springer.
  • Feng (2005a) Feng, H.-M., 2005a. Particle swarm optimization learning fuzzy systems design. In: Third International Conference on Information Technology and Applications, 2005. ICITA 2005. Vol. 1. Piscataway, New Jersey: Institute of Electrical and Electronics Engineers, pp. 363–366.
  • Feng (2005b) Feng, H.-M., 2005b. Self-generation fuzzy modeling systems through hierarchical recursive-based particle swarm optimization. Cybernetics and Systems 36 (6), 623–639.
  • Gordon (1995) Gordon, G., 1995. Stable function approximation in dynamic programming. In: Proceedings of the Twelfth International Conference on Machine Learning. Morgan Kaufmann, pp. 261–268.
  • Hanafy (2011) Hanafy, T., 2011. Design and validation of real time neuro fuzzy controller for stabilization of pendulum-cart system. Life Science Journal 8 (1), 52–60.
  • Hein et al. (2016) Hein, D., Hentschel, A., Runkler, T., Udluft, S., 2016. Reinforcement learning with particle swarm optimization policy (PSO-P) in continuous state and action spaces. International Journal of Swarm Intelligence Research (IJSIR) 7 (3), 23–42.
  • Jang (1993) Jang, J., 1993. Adaptive-network-based fuzzy inference system. IEEE Transactions on Systems, Man & Cybernetics 23 (3), 665–685.
  • Kennedy and Eberhart (1995) Kennedy, J., Eberhart, R., 1995. Particle swarm optimization. Proceedings of the IEEE International Joint Conference on Neural Networks, 1942–1948.
  • Kharola and Gupta (2014) Kharola, A., Gupta, P., 2014. Stabilization of inverted pendulum using hybrid adaptive neuro fuzzy (ANFIS) controller. Engineering Science Letters 4, 1–20.
  • Kothandaraman and Ponnusamy (2012) Kothandaraman, R., Ponnusamy, L., 2012. PSO tuned adaptive neuro-fuzzy controller for vehicle suspension systems. Journal of Advances in Information Technology 3 (1).
  • Lagoudakis and Parr (2003) Lagoudakis, M., Parr, R., 2003. Least-squares policy iteration. Journal of Machine Learning Research, 1107–1149.
  • Lam and Zhou (2007) Lam, J., Zhou, S., 2007. Dynamic output feedback control of discrete-time fuzzy systems: a fuzzy-basis-dependent lyapunov function approach. International Journal of Systems Science 38 (1), 25–37.
  • Maes et al. (2012) Maes, F., Fonteneau, R., Wehenkel, L., Ernst, D., 2012. Policy search in a space of simple closed-form formulas: towards interpretability of reinforcement learning. Discovery Science, 37–50.
  • Mamdani and Assilian (1975) Mamdani, E., Assilian, S., 1975. An experiment in linguistic synthesis with a fuzzy logic controller. International Journal of Man-Machine Studies 7 (1), 1–13.
  • Neuneier and Zimmermann (2012) Neuneier, R., Zimmermann, H.-G., 2012. How to train neural networks. In: Montavon, G., Orr, G., Müller, K.-R. (Eds.), Neural Networks: Tricks of the Trade, Second Edition. Springer, pp. 369–418.
  • Ormoneit and Sen (2002) Ormoneit, D., Sen, S., 2002. Kernel-based reinforcement learning. Machine learning 49 (2), 161–178.
  • Procyk and Mamdani (1979) Procyk, T., Mamdani, E., 1979. A linguistic self-organizing process controller. Automatica 15, 15–30.
  • Rasmussen and Williams (2006) Rasmussen, C., Williams, C., 2006. Gaussian Processes for Machine Learning. Adaptative computation and machine learning series. University Press Group Limited.
  • Riedmiller (2005a) Riedmiller, M., 2005a. Neural fitted Q iteration — first experiences with a data efficient neural reinforcement learning method. In: Machine Learning: ECML 2005. Vol. 3720. Springer, pp. 317–328.
  • Riedmiller (2005b) Riedmiller, M., 2005b. Neural reinforcement learning to swing-up and balance a real pole. In: Systems, Man and Cybernetics, 2005 IEEE International Conference on. Vol. 4. pp. 3191–3196.
  • Riedmiller et al. (2009) Riedmiller, M., Gabel, T., Hafner, R., Lange, S., 2009. Reinforcement learning for robot soccer. Autonomous Robots 27 (1), 55–73.
  • Saifizul et al. (2006) Saifizul, A., Azlan, C., Mohd Nasir, N., 2006. Takagi-Sugeno fuzzy controller design via ANFIS architecture for inverted pendulum system. In: Proceedings of International Conference on Man-Machine Systems.
  • Schäfer (2008) Schäfer, A. M., 2008. Reinforcement learning with recurrent neural networks. Ph.D. thesis, University of Osnabrück, Germany.
  • Schäfer et al. (2007) Schäfer, A. M., Udluft, S., Zimmermann, H.-G., 2007. A recurrent control neural network for data efficient reinforcement learning. In: Proceedings of IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning. pp. 151–157.
  • Scharf and Mandve (1985) Scharf, E., Mandve, N., 1985. The application of a fuzzy controller to the control of a multi-degree-freedom robot arm. In: Sugeno, M. (Ed.), Industrial Application of Fuzzy Control. North-Holland, pp. 41–62.
  • Schneegass et al. (2007a) Schneegass, D., Udluft, S., Martinetz, T., 2007a. Improving optimality of neural rewards regression for data-efficient batch near-optimal policy identification. In: Proceedings of the International Conference on Artificial Neural Networks. pp. 109–118.
  • Schneegass et al. (2007b) Schneegass, D., Udluft, S., Martinetz, T., 2007b. Neural rewards regression for near-optimal policy identification in Markovian and partial observable environments. In: Proceedings of the European Symposium on Artificial Neural Networks. pp. 301–306.
  • Shao (1988) Shao, S., 1988. Fuzzy self-organizing controller and its application for dynamic processes. Fuzzy Sets and Systems 26, 151–164.
  • Silver et al. (2014) Silver, D., Lever, G., Heess, N., Degris, T., Wierstra, D., Riedmiller, M., 2014. Deterministic policy gradient algorithms. In: Proceedings of the 31st International Conference on International Conference on Machine Learning - Volume 32. ICML’14. JMLR.org, pp. I–387–I–395.
  • Sutton (1988) Sutton, R., 1988. Learning to predict by the methods of temporal differences. Machine learning 3 (1), 9–44.
  • Sutton and Barto (1998) Sutton, R., Barto, A., 1998. Reinforcement learning: an introduction. A Bradford book.
  • Van Hasselt et al. (2016) Van Hasselt, H., Guez, A., Silver, D., 2016. Deep reinforcement learning with double Q-learning. In: 30th AAAI Conference on Artificial Intelligence, AAAI 2016. pp. 2094–2100.
  • Wang and Mendel (1992) Wang, L.-X., Mendel, J., 1992. Fuzzy basis functions, universal approximation, and orthogonal least-squares learning. IEEE Transactions on Neural Networks 3 (5), 807–814.
  • Yang et al. (2013) Yang, H., Li, X., Liu, Z., Hua, C., 2013. Fault detection for uncertain fuzzy systems based on the delta operator approach. Circuits, Systems, and Signal Processing 33 (3), 733–759.
  • Yang et al. (2014a) Yang, H., Li, X., Liu, Z., Zhao, L., 2014a. Robust fuzzy-scheduling control for nonlinear systems subject to actuator saturation via delta operator approach. Information Sciences 272, 158–172.
  • Yang et al. (2014b) Yang, H., Shi, P., Li, X., Li, Z., 2014b. Fault-tolerant control for a class of T-S fuzzy systems via delta operator approach. Signal Process. 98, 166–173.
  • Zadeh (1965) Zadeh, L., 1965. Fuzzy sets. Information and Control 8, 338–353.

Appendix A PSO Algorithm

Algorithm 1 presents, in pseudocode, the PSO algorithm applied in our experiments.

Data: randomly initialized n-dimensional positions x_i and velocities v_i of
      particles i = 1, …, m; fitness function F (Eq. (2)); inertia weight
      factor w and acceleration constants c1 and c2; random number generator
      rand(); search space boundaries x_min and x_max; velocity boundaries
      v_min and v_max; swarm topology graph defining each particle's
      neighborhood
Result: global best position ŷ

repeat
        foreach particle i do
               // neighborhood best position of particle i (Eq. (7))
               ŷ_i ← best personal best position in the neighborhood of i;
        end foreach
        // position updates
        foreach particle i do
               // determine the new velocity of particle i (Eq. (9))
               for d = 1, …, n do
                      v_i,d ← w·v_i,d + c1·rand()·(y_i,d − x_i,d)
                                      + c2·rand()·(ŷ_i,d − x_i,d);
               end for
               // truncate particle i's velocity
               for d = 1, …, n do
                      v_i,d ← min(max(v_i,d, v_min), v_max);
               end for
               // compute the new position of particle i (Eq. (8))
               x_i ← x_i + v_i;
               // truncate particle i's position
               for d = 1, …, n do
                      x_i,d ← min(max(x_i,d, x_min), x_max);
               end for
               // update the personal best position (Eq. (10))
               if F(x_i) > F(y_i) then
                      // set new personal best position of particle i
                      y_i ← x_i;
               end if
        end foreach
until stopping criterion is met;
// determine the global best position
ŷ ← arg max over y_i of F(y_i);
return ŷ
Algorithm 1: PSO algorithm. Particle i is represented by its position x_i, personal best position y_i, and neighborhood best position ŷ_i.
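
As a concrete illustration, the following is a minimal Python sketch of this algorithm with a ring topology, applied to maximizing a simple concave quadratic fitness function; the parameter values here are illustrative defaults, not the settings used in the paper:

```python
import random

def pso(fitness, dim, n_particles=40, iters=300, x_bounds=(-5.0, 5.0),
        v_max=1.0, w=0.72, c1=1.49, c2=1.49, seed=0):
    """Maximize `fitness` over a box-constrained search space via PSO."""
    rng = random.Random(seed)
    lo, hi = x_bounds
    clip = lambda z, a, b: min(max(z, a), b)
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    y = [list(p) for p in x]              # personal best positions
    fy = [fitness(p) for p in y]          # personal best fitness values
    for _ in range(iters):
        for i in range(n_particles):
            # ring topology: neighborhood best among particle i and its two neighbors
            nb = max((i - 1) % n_particles, i, (i + 1) % n_particles,
                     key=lambda j: fy[j])
            for d in range(dim):
                # velocity update with inertia, cognitive, and social terms, then clamp
                v[i][d] = clip(w * v[i][d]
                               + c1 * rng.random() * (y[i][d] - x[i][d])
                               + c2 * rng.random() * (y[nb][d] - x[i][d]),
                               -v_max, v_max)
                x[i][d] = clip(x[i][d] + v[i][d], lo, hi)
            f = fitness(x[i])
            if f > fy[i]:                 # personal best update
                y[i], fy[i] = list(x[i]), f
    best = max(range(n_particles), key=lambda j: fy[j])
    return y[best], fy[best]

# Maximize a concave quadratic with its optimum at (1, -2).
best_x, best_f = pso(lambda p: -((p[0] - 1) ** 2 + (p[1] + 2) ** 2), dim=2)
```

In FPSRL, the fitness function evaluated here would be the discounted return of the candidate fuzzy policy on the world model, and each particle position would encode one rule parameterization.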

Appendix B Experimental Setup

Table 4 gives a compact overview of the parameters used for the experiments presented herein. Note that extensive parameter studies for both FPSRL and NFQ are beyond the scope of this paper. Nevertheless, we evaluated various parameter settings known from the literature and from our own experience, and the presented setup was the most successful of those tested in our experiments.

                         MC           CPB          CPSU
Benchmark
  State dimensionality   2            4            4
  Time horizon           200          100          500
  Discount factor        0.9851       0.9700       0.9940
FPSRL
  Number of particles    100          100          1,000
  PSO iterations         1,000        1,000        1,000
  PSO topology           ring         ring         ring
  Number of rules        2            2            4
  Rule parameters        11           10           19
  Actions
NFQ
  Q iterations           1,000        1,000        1,000
  NN epochs              300          300          300
  NN layers              3-20-20-1    5-20-20-1    5-20-20-20-1
  NN activation          sigmoid      sigmoid      sigmoid
  Actions
Table 4: Experimental setup.