A complex social system is a collective system composed of a large number of interconnected entities that as a whole exhibits properties and behaviors resulting from the interaction of its individual parts Page (2015). Achieving optimal control of a real-world complex social system is important and valuable. For example, in an urban transportation system, a central authority wants to reduce the overall delay and the total travel time by controlling traffic signals using traffic data collected from the Internet of Things Balaji et al. (2010). In an epidemic system, an optimal strategy is pursued to minimize the expected discounted losses resulting from the epidemic process over an infinite horizon, with specific aims such as varying the birth and death rates Lefévre (1981) or early detection of epidemic outbreaks Izadi and Buckeridge (2007). In the sphere of public opinion, components such as information consumers, news media, social websites, and governments aim to maximize their own utility functions with optimal strategies. A specific objective in the public opinion environment is tackling fake news to create a clearer, more trusted information environment Allcott and Gentzkow (2017). These scenarios exemplify the enormous potential applications of a versatile optimal control framework for general complex systems.
However, key characteristics of complex systems make establishing such a framework very challenging. The system typically has a large number of interacting units and thus a high-dimensional state space. The state transition dynamics of a complex system are already non-linear, time-variant, and high-dimensional, and a control framework must additionally account for control variables. Prior research in decision making for complex social systems has generally gone in one of two directions: a simulation or an analytical approach Li et al. (2014). Simulation approaches specify system dynamics through a simulation model and develop sampling-based algorithms to reproduce the dynamic flow Wang (2010); Preciado et al. (2013); Yang and Dong (2018); Yang et al. (2018). These approaches can capture the microscopic dynamics of a complex system with high fidelity, but have high variance and are time-consuming Li et al. (2016). Analytical approaches instead formulate the decision-making problem as a constrained optimization problem with an analytical model, specifying the macroscopic state transitions directly and deriving analytical solutions for optimizing strategies Farajtabar et al. (2015); Timotheou et al. (2015). These approaches can provide a robust solution with less variance, but are applicable only to scenarios with small state spaces Kautz (2004), or cases with low-resolution intervention He et al. (2010), due to modeling costs and errors Li et al. (2016). This points to a new research opportunity: combining the precise dynamics modeling of simulation approaches with the low variance and robustness of analytical approaches. In this paper, we adopt simulation modeling to specify the dynamics of a complex system, and develop analytical solutions for finding optimal strategies in a complex network with a high-dimensional state-action space specified through simulation modeling.
We formulate the problem of decision making in a complex system as a discrete event decision process (DEDP), which identifies the decision-making process as a Markov decision process (MDP) and introduces a discrete event model — a kind of simulation model Law et al. (1991)
— to specify the system dynamics. A discrete event model defines a Markov jump process with a probability measure on a sequence of elementary events that specify how the system components interact and change the system states. These elementary events individually effect only minimal changes to the system, but in sequence are powerful enough to induce non-linear, time-variant, high-dimensional behavior. Compared with an MDP, which specifies the system transition dynamics of a complex system analytically through a Markov model, a DEDP describes the dynamics more accurately through a discrete event model that captures the dynamics using a simulation process over the microscopic component-interaction events. We will demonstrate this merit by benchmarking against an analytical approach based on an MDP in the domain of transportation optimal control.
To solve a DEDP analytically, we derive a duality theorem that recasts optimal control as variational inference and parameter learning, extending the current equivalence results between optimal control and probabilistic inference Liu and Ihler (2012); Toussaint and Storkey (2006) in Markov decision process research. With this duality, we can apply a number of existing probabilistic-inference and parameter-learning techniques, and integrate signal processing and decision making into a holistic framework. When exact inference becomes intractable, which is often the case in complex systems due to the formidable state space, our duality theorem implies the possibility of introducing recent approximate inference techniques to infer complex dynamics. The method in this paper is an expectation propagation algorithm, part of a family of approximate inference algorithms with local marginal projection. We will demonstrate that our approach is more robust and has lower variance than other simulation approaches in the domain of transportation optimal control.
This research makes several important contributions. First, we formulate a DEDP — a general framework for modeling complex system decision-making problems — by combining MDP and simulation modeling. Second, we reduce the problem of optimal control to variational inference and parameter learning, and develop an approximate solver to find optimal control in complex systems through Bethe entropy approximation and an expectation propagation algorithm. Finally, we demonstrate that our proposed algorithm achieves higher expected system rewards, faster convergence, and lower variance of the value function in a real-world transportation scenario than state-of-the-art analytical and sampling approaches.
In this section, we review the complex social system, the discrete event model, the Markov decision process, and the variational inference framework for a probabilistic graphical model.
2.1. Complex Social System
A complex social system is a collective system composed of a large number of interconnected entities that as a whole exhibits properties and behaviors resulting from the interaction of its individual parts. A complex system tends to have four attributes: diversity, interactivity, interdependency, and adaptivity Page (2015). Diversity means the system contains a large number of entities with various attributes and characteristics. Interactivity means the diverse entities interact with each other in an interaction structure, such as a fixed network or ephemeral contacts. Interdependency means the state change of one entity depends on others through the interactions. Adaptivity means the entities can adapt to different environments automatically.
In this paper, we temporarily exclude the attribute of adaptivity, the study of which will be future work. We focus on studying the optimal control of a complex social system with the attributes of diversity, interactivity, and interdependency, which leads to a system containing a large number of diverse components whose interactions lead to state changes. Examples of complex social systems include the transportation system, where traffic congestion forms and dissipates through the interaction and movement of individual vehicles; the epidemic system, where disease spreads through the interactions of different people; and the public opinion system, where people's minds are influenced and shaped by the dissemination of news through social media.
2.2. Discrete Event Model
A discrete event model defines a discrete event process, also called a Markov jump process. It specifies complex system dynamics with a sequence of stochastic events, each of which changes the state only minimally but which in sequence induce complex system evolution. Specifically, a discrete event model describes the temporal evolution of a system of $M$ species driven by $V$ mutually independent events parameterized by rate coefficients $c_1, \dots, c_V$. At any specific time $t$, the populations of the species are $x_t = (x_t^{(1)}, \dots, x_t^{(M)})$.
A discrete event process initially in state $x_0$ at time $t = 0$ can be simulated by: (1) sampling the event $v$ according to the categorical distribution with probabilities $h_v(x, c_v)/h_0(x)$, where $h_v(x, c_v)$ is the rate of event $v$, which equals the rate coefficient $c_v$ times the total number of different ways for the individuals to react, and $h_0(x) = \sum_v h_v(x, c_v)$ is the rate of all events. The formulation of $h_v$ comes from the formulations of the stochastic kinetic model and the stochastic Petri net Xu et al. (2016); Wilkinson (2011). (2) Updating the network state deterministically, $x_t = x_{t-1} + \Delta_v$, where $\Delta_v$ represents how event $v$ changes the system states, until the termination condition is satisfied. In a social system, each event involves only a few state and action variables. This generative process thus assigns a probability measure to a sample path induced by a sequence of events, where $\mathbb{1}(\cdot)$ is an indicator function.
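As a concrete illustration, the two-step loop above (sample an event by its rate, then apply its state change) can be sketched as a minimal Gillespie-style simulator. The SIR-style infection event, its rate form, and all coefficients below are invented for illustration and are not the paper's model.

```python
import random

def simulate_discrete_events(x0, events, t_max, rng=random.Random(0)):
    """Simulate a discrete event (Markov jump) process.

    x0     -- initial populations, dict species -> count
    events -- list of (rate_fn, delta) pairs: rate_fn(x) returns the
              event rate h_v(x); delta maps species -> change when the
              event fires
    t_max  -- simulation horizon
    """
    x, t, path = dict(x0), 0.0, [(0.0, dict(x0))]
    while t < t_max:
        rates = [rate_fn(x) for rate_fn, _ in events]
        h0 = sum(rates)
        if h0 == 0:            # no event can fire; process is absorbed
            break
        t += rng.expovariate(h0)       # waiting time ~ Exp(h0)
        # pick event v with probability h_v / h0 (categorical draw)
        u, acc, v = rng.random() * h0, 0.0, 0
        for i, r in enumerate(rates):
            acc += r
            if u <= acc:
                v = i
                break
        for species, change in events[v][1].items():
            x[species] += change       # apply the minimal state change
        path.append((t, dict(x)))
    return path

# Toy example: one event "infection" moving an individual from
# susceptible (S) to infected (I) at rate c * S * I (mass action).
c = 0.1
events = [(lambda x: c * x["S"] * x["I"], {"S": -1, "I": +1})]
path = simulate_discrete_events({"S": 10, "I": 1}, events, t_max=100.0)
```

Each event changes the populations by at most one individual, yet the sequence of events produces the non-linear epidemic curve.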
The discrete event model is widely used by social scientists to specify social system dynamics Borshchev (2013) where the system state transitions are induced by interactions of individual components. Recent research Yang and Dong (2017); Opper and Sanguinetti (2008); Fang et al. (2017) has also applied the model to infer the hidden state of social systems, but the model has not been explored for social network intervention and decision making.
2.3. Markov Decision Process
A Markov decision process Sutton and Barto (2011) is a framework for modeling decision making in situations where outcomes are partly random and partly under the control of a decision maker. Formally, an MDP is defined as a tuple $(\mathcal{S}, \mathcal{A}, P, R, \gamma)$, where $\mathcal{S}$ represents the state space and $s_t$ the state at time $t$, $\mathcal{A}$ the action space and $a_t$ the action taken at time $t$, $P$ the transition kernel of states, $P(s_{t+1} \mid s_t, a_t)$, $R$ the reward function, such as $R(s_t)$ Farajtabar et al. (2015) or $R(s_t, a_t)$ Wiering et al. (2004), that evaluates the immediate reward at each step, and $\gamma$ the discount factor. Let us further define a policy $\pi$ as a mapping from a state to an action or a distribution over actions parameterized by $\theta$ — that is, $a_t \sim \pi_\theta(a_t \mid s_t)$. The probability measure of an MDP trajectory $\tau = (s_{0:T}, a_{0:T})$ is $p(\tau) = p(s_0) \prod_t \pi_\theta(a_t \mid s_t) P(s_{t+1} \mid s_t, a_t)$. Solving an MDP involves finding the optimal policy $\pi^*$ or its associated parameter $\theta^*$ to maximize the expected future reward $\mathbb{E}\left[\sum_t \gamma^t R(s_t)\right]$.
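To make the tuple and the objective concrete, here is a minimal policy-evaluation sketch on a hypothetical two-state, two-action MDP (all numbers are invented), iterating the Bellman backup to its fixed point.

```python
import numpy as np

# Hypothetical MDP: P[a, s, s'] is the transition probability,
# R[s] the state reward, gamma the discount factor.
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],   # action 0
    [[0.5, 0.5], [0.6, 0.4]],   # action 1
])
R = np.array([1.0, 0.0])
gamma = 0.9
policy = np.array([0, 1])       # deterministic: state -> action

# Policy evaluation: iterate V <- R + gamma * P_pi V to a fixed point,
# where P_pi is the transition kernel induced by the policy.
P_pi = np.array([P[policy[s], s] for s in range(2)])
V = np.zeros(2)
for _ in range(1000):
    V = R + gamma * P_pi @ V
```

Because the backup is a gamma-contraction, the iteration converges to the unique value of the policy, which could also be obtained by solving the linear system $(I - \gamma P_\pi)V = R$ directly.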
The graphical representation of an MDP is shown in Figure 2, where we assume that the full system state can be represented as a collection of component state variables $s_t = (s_t^{(1)}, \dots, s_t^{(M)})$, so that the state space is a Cartesian product of the domains of the component states: $\mathcal{S} = \mathcal{S}^{(1)} \times \cdots \times \mathcal{S}^{(M)}$. Similarly, the action variable can be represented as a collection of action variables $a_t = (a_t^{(1)}, \dots, a_t^{(D)})$, and the action space $\mathcal{A} = \mathcal{A}^{(1)} \times \cdots \times \mathcal{A}^{(D)}$. Here $M$ is not necessarily equal to $D$ because $s_t$ represents the state of each component of the system while $a_t$ represents the decisions taken by the system as a whole. For example, in the problem of optimizing the traffic signals in a transportation system, where $s_t$ represents the locations of the vehicles and $a_t$ represents the status of the traffic lights, the number of vehicles $M$ does not necessarily equal the number of traffic lights $D$. Usually in complex social systems, the number of individual components $M$ is much greater than the number of system decision points $D$.
Prior research in solving a Markov decision process for a complex social system can generally be categorized into simulation or analytical approaches. A simulation approach reproduces the dynamic flow through sampling-based methods. It describes the state transition dynamics with a high-fidelity simulation tool such as MATSIM Horni et al. (2016a), which simulates the microscopic interactions of the components and how these interactions lead to macroscopic state changes. Given the current state $s_t$ and action $a_t$ at time $t$, a simulation approach uses a simulation tool to generate the next state $s_{t+1}$.
An analytical approach develops analytical solutions to solve a constrained optimization problem. Instead of describing the dynamics with a simulation tool, an analytical approach specifies the transition kernel analytically with probability density functions that describe the macroscopic state changes directly. Given the current state $s_t$ and action $a_t$, it computes the probability distribution of the next state $s_{t+1}$ according to the state transition kernel $P(s_{t+1} \mid s_t, a_t)$. However, approximations are required to make the computation tractable. For an MDP containing $M$ binary state variables and $D$ binary action variables, the state space is $2^M$, the action space is $2^D$, the policy kernel is a $2^M \times 2^D$ matrix, and the state transition kernel (fixed action) is a $2^M \times 2^M$ matrix. Since $M$ is usually much larger than $D$ in complex social systems, the complexity bottleneck is usually the transition kernel of size $2^M \times 2^M$, which grows exponentially with the number of state variables. Certain factorizations and approximations must be applied to lower the dimensionality of the transition kernel.
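The kernel sizes quoted above can be tabulated directly; the small helper below (ours, not from the paper) simply evaluates $2^M$, $2^D$, and the resulting matrix sizes to show how quickly explicit tabulation becomes infeasible.

```python
# Size of the explicit analytical MDP representation for M binary
# state variables and D binary action variables.
def kernel_sizes(M, D):
    state_space = 2 ** M
    action_space = 2 ** D
    policy_kernel = state_space * action_space      # 2^M x 2^D entries
    transition_kernel = state_space * state_space   # 2^M x 2^M per action
    return state_space, action_space, policy_kernel, transition_kernel

# Even a modest system (say 50 components, 10 decision points) is far
# beyond explicit tabulation: the transition kernel has 2^100 entries.
s, a, pk, tk = kernel_sizes(M=50, D=10)
```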
Usually, analytical approaches solve complex social system MDPs approximately by enforcing certain independence constraints Sigaud and Buffet (2013). For example, Cheng et al. (2013) assumed that a state variable depends only on its neighboring variables. Peyrard and Sabbadin (2006); Sabbadin et al. (2012) exploited a mean field approximation to compute and update the local policies. Li and Todorov (2004) approximated the state transition kernel with differential equations. These assumptions introduce additional approximations that result in modeling errors. In the next section, we propose a discrete event decision process that reduces the complexity of the transition probabilities without introducing additional independence assumptions.
Two specific approaches to solving an MDP are optimal control Stengel (1994) and reinforcement learning Sutton and Barto (1998). Optimal control problems consist of finding the optimal decision sequence or the time-variant state-action mapping that maximizes the expected future reward, given the dynamics and reward function. Reinforcement-learning problems target the optimal stationary policy that maximizes the expected future reward without assuming knowledge of the dynamics or the reward function. In this paper, we address the problem of optimizing a stationary policy to maximize the expected future reward, assuming known dynamics and reward function.
2.4. Variational Inference
A challenge in evaluating and improving a policy in a complex system is that the state space grows exponentially with the number of state variables, which makes probabilistic inference and parameter learning intractable. For example, in a system with $M$ binary components, the size of the state space will be $2^M$, let alone the exploding transition kernel. One way to resolve this issue is to apply variational inference to optimize a tractable lower bound of the log expected future reward through conjugate duality. Variational inference is a classical framework in the probabilistic graphical model community Wainwright et al. (2008). It exploits the conjugate duality between the log-partition function and the entropy function for exponential family distributions. Specifically, it solves the variational problem $\log Z(\theta) = \max_q \mathbb{E}_q[\langle \theta, \phi(x) \rangle] + H(q)$, where $\theta$ and $\phi$ are respectively the canonical parameters and sufficient statistics of an exponential family distribution, $q$ is an auxiliary distribution, and $H(q)$ is the entropy of $q$. For a tree-structured graphical model (here we use the simplified notation of a series $x_1, \dots, x_T$), the Markov property admits a factorization of $q$ into the product and division of local marginals. Substituting the factored form of $q$ into the variational target, we get an equivalent constrained optimization problem involving local marginals and consistency constraints among those marginals, and a fixed-point algorithm involving forward statistics $\alpha$ and backward statistics $\beta$. There are two primary forms of approximation of the original variational problem: the Bethe entropy problem and the structured mean field. The Bethe entropy problem is typically solved by loopy belief propagation or an expectation propagation algorithm.
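For a concrete instance of the forward and backward statistics on a chain-structured model, the following sketch computes exact one-slice marginals with a forward-backward recursion; the chain length, state count, and random factors are invented for illustration.

```python
import numpy as np

# Forward-backward on a small chain: the Markov property lets the joint
# factor into local terms, so the posterior over each x_t is computed
# from forward statistics alpha_t and backward statistics beta_t
# instead of enumerating all K^T trajectories.
T, K = 5, 3                         # chain length, states per variable
rng = np.random.default_rng(0)
A = rng.random((K, K))
A /= A.sum(axis=1, keepdims=True)   # transition matrix (row-stochastic)
p0 = np.full(K, 1.0 / K)            # initial distribution
phi = rng.random((T, K))            # local evidence factors

alpha = np.zeros((T, K))
beta = np.ones((T, K))
alpha[0] = p0 * phi[0]
for t in range(1, T):               # forward pass
    alpha[t] = (alpha[t - 1] @ A) * phi[t]
for t in range(T - 2, -1, -1):      # backward pass
    beta[t] = A @ (phi[t + 1] * beta[t + 1])

marg = alpha * beta                 # one-slice marginals q(x_t)
marg /= marg.sum(axis=1, keepdims=True)
```

The cost is O(T K^2), versus O(K^T) for brute-force enumeration over trajectories.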
In this section, we develop a DEDP and present a duality theorem that extends the equivalence of optimal control with probabilistic inference and parameter learning. We also develop an expectation propagation algorithm as an approximate solver to be applied in real-world complex systems.
3.1. Discrete Event Decision Process
The primary challenges in modeling a real-world complex system are the exploding state space and complex system dynamics. Our solution is to model the complex system decision-making process as a DEDP.
The graphical representation of a DEDP is shown in Figure 2. Formally, a DEDP is defined as a tuple $(\mathcal{S}, \mathcal{A}, \mathcal{V}, c, P, R, \gamma)$, where $\mathcal{S}$ is the state space and $s_t$ an $M$-dimensional vector representing the state of each component at time $t$, and $\mathcal{A}$ is the action space and $a_t$ a $D$-dimensional vector representing the action taken by the system at time $t$. As before, $M$ is not necessarily equal to $D$, and $M$ is usually much larger than $D$ in complex social systems. Both $s_t$ and $a_t$ could take real or categorical values depending on the application.
$\mathcal{V}$ is the set of events and $v_t$ a scalar following a categorical distribution, indicating the event taken at time $t$ and changing the state by $\Delta_{v_t}$. $c$ is the function mapping actions to event rate coefficients, which takes a $D$-dimensional vector $a_t$ as input and outputs a vector of rate coefficients $c(a_t)$, one per event, and $P$ is the transition kernel of states induced by events, $P(s_{t+1}, v_t \mid s_t, a_t)$, where $P(v_t \mid s_t, a_t)$ represents the probability of an event happening at time $t$:
Following the definitions in the discrete event model, $P(v_t \mid s_t, a_t)$ is the probability for event $v_t$ to happen, which equals the rate coefficient times the total number of ways that the components can interact to trigger the event.
The immediate reward function $R$ is a function of system states, defined as the summation of rewards evaluated at each component, $R(s_t) = \sum_m R^{(m)}(s_t^{(m)})$, and $\gamma$ is the discount factor. We further define a policy $\pi$ as a mapping from a state to an action or a distribution over actions parameterized by $\theta$ — that is, $a_t \sim \pi_\theta(a_t \mid s_t)$. The parameterized policy can take any form, such as a lookup table where $\theta$ represents the value of each entry in the table, a Gaussian distribution where $\theta$ represents the mean and variance, or a neural network where $\theta$ represents the network weights. Solving a DEDP involves finding the optimal policy $\pi^*$ or its associated parameter $\theta^*$ to maximize the expected future reward. The probability measure of a DEDP trajectory with a stochastic policy is as follows, where $\mathbb{1}(\cdot)$ is an indicator function:
The probability measure of a trajectory with a deterministic policy is this:
A DEDP provides a tractable representation of the complex system control problem by representing the non-linear and high-dimensional state transition kernel with microscopic events. A vanilla MDP is an intractable representation because the state-action space grows exponentially with the number of state-action variables. In comparison, the description length of a DEDP grows linearly with the number of events. As such, a DEDP greatly reduces the complexity of specifying a complex system control problem by introducing an auxiliary variable (the event), and can potentially describe complex and high-fidelity dynamics of a complex system.
A DEDP can be reduced to an MDP by marginalizing out the events:
Thus, the only difference between a DEDP and an MDP is that a DEDP introduces events to describe the state transition dynamics. Compared with the aforementioned models that introduce independence constraints to make MDPs tractable, a DEDP does not make any independence assumptions. It defines a simulation process that describes how components interact and trigger an event, and how the aggregation of events leads to system state changes. In this way, a DEDP captures the microscopic dynamics in a macroscopic system, leading to more accurate dynamics modeling.
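The marginalization $P(s_{t+1} \mid s_t, a_t) = \sum_{v} P(s_{t+1}, v \mid s_t, a_t)$ can be illustrated on a toy birth-death system; the events, rates, and action coupling below are made up for illustration and are not the paper's transportation model.

```python
import numpy as np

# Toy DEDP with 3 population levels and three events: "arrive" (+1),
# "depart" (-1), and a null event (no change), with action-dependent
# rate coefficients. Summing P(s', v | s, a) over events v recovers
# the MDP kernel P(s' | s, a).
S = 3  # states 0..2

def event_probs(s, a):
    """Categorical distribution over events given state and action."""
    rates = {"arrive": 1.0 + a, "depart": 0.5 * s, "null": 0.2}
    if s == S - 1:
        rates["arrive"] = 0.0          # no arrivals at capacity
    total = sum(rates.values())
    return {v: r / total for v, r in rates.items()}

delta = {"arrive": +1, "depart": -1, "null": 0}

def mdp_kernel(a):
    P = np.zeros((S, S))
    for s in range(S):
        for v, p in event_probs(s, a).items():
            s_next = min(max(s + delta[v], 0), S - 1)
            P[s, s_next] += p          # marginalize out the event v
    return P
```

The event-based description needs only one rate function per event, while the marginalized kernel is an S x S matrix per action.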
3.2. Duality Theorem on Value Function
To solve a DEDP with a high-dimensional state and action space, we derive the convex conjugate duality between the log expected future reward function and the entropy function of a distribution over finite-length DEDP trajectories, and the corresponding duality in the parameter space between the log discounted trajectory-weighted reward and the distribution of finite-length DEDP trajectories. As a result, we can reduce the policy evaluation problem to a variational inference problem that involves the entropy function and can be solved by various probabilistic inference techniques.
Specifically, in a complex system decision process, let $t$ be a discrete time, $m$ the component index, $\tau$ the trajectory of a DEDP starting from initial state $s_0$, and $V(s_0) = \mathbb{E}\left[\sum_T \gamma^T R(s_T)\right]$ the expected future reward (value function), where $\gamma$ is a discount factor. Define $q(\tau)$ as a proposal joint probability distribution over finite DEDP trajectories, $r(\tau)$ as the discounted trajectory-weighted reward component, where $p(\tau; \theta)$ is the probability distribution of a trajectory with policy $\pi_\theta$, and $R^{(m)}(s_T^{(m)})$ as the reward evaluated at component $m$ at time $T$ with state $s_T^{(m)}$. We thus have the following duality theorem.
In the above, equality is satisfied when the proposal distribution matches the normalized discounted trajectory-weighted reward distribution. As such, the log expected future reward is the convex conjugate of the negative entropy term, and since it is a convex function, by the property of the convex conjugate the negative entropy term is also a conjugate of the log expected future reward. The proof of this theorem is given in the Appendix.
Theorem 1 provides a tight lower bound of the log expected reward and, more importantly, defines a variational problem analogous to well-known variational inference formulations in the graphical model community Wainwright et al. (2008), where a number of variational inference methods can be introduced, such as exact inference methods, sampling-based approximate solutions, and variational inference. Theorem 1 extends the equivalence between optimal control and probabilistic inference in the recent literature to a general variational functional problem. Specifically, it removes the probability likelihood interpretation of the value function in Toussaint et al. (2010); Toussaint and Storkey (2006), and the prior assumption that the value function is a product of local-scope value functions Liu and Ihler (2012).
3.3. Expectation Propagation for Optimal Control
Theorem 1 implies a generalized policy-iteration paradigm around the duality form: solving the variational problem as policy evaluation, and optimizing the target over the parameter $\theta$ with a known mixture of finite-length trajectories as policy improvement. In the following, we develop the policy evaluation and improvement algorithm with a deterministic policy; the derivation is given in the Appendix. The stochastic policy case leads to a similar result, which is not presented here due to space limits.
In policy evaluation, the Markov property of a DEDP admits a factorization of $q$ into local marginals, where $p(T)$ is the time-length prior distribution matching the discount factor, and the one-slice and two-slice marginals are required to be locally consistent. To cope with the exploding state space, we apply the Bethe entropy approximation. Specifically, we relax the formidable search space of $q$ into an amenable space through the mean field approximation $q(s_T) \approx \prod_m q(s_T^{(m)})$, where $q(s_T^{(m)})$ is the one-slice marginal involving only component $m$. Applying the factorization and approximation, we get the following Bethe entropy problem (we write a summation over all value combinations of the state variables except a fixed component):
We solve this with the method of Lagrange multipliers, which leads to a forward-backward algorithm that updates the forward messages $\alpha$ and backward messages $\beta$ marginally according to the average effects of all other components — that is, a projected marginal kernel. The algorithm therefore achieves linear complexity over the number of components for each message update, and quadratic complexity over the time horizon to compute all messages. To further reduce the time complexity, we gather together the backward messages that share the same remaining horizon, noting that the projected kernel does not depend on the absolute time. This leads to the following forward-backward algorithm, which is linear in the time horizon:
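A rough sketch of the projected-kernel idea: each component's forward message is propagated through a kernel averaged over the current marginals of the other components, so each sweep costs time linear in the number of components. The coupling function below is a toy stand-in for illustration, not the paper's derivation.

```python
import numpy as np

# M binary components, each with its own forward message alpha_m.
# Each component's transition kernel is "projected" onto the mean
# field of the other components' marginals (toy coupling: interpolate
# toward uniform as the others' average "on" probability grows).
M, K, T = 4, 2, 10
rng = np.random.default_rng(1)
base = rng.random((K, K))
base /= base.sum(axis=1, keepdims=True)   # base row-stochastic kernel

def projected_kernel(others_mean):
    lam = float(np.mean(others_mean))     # average effect of the others
    return (1 - lam) * base + lam * np.full((K, K), 1.0 / K)

alpha = np.full((M, K), 1.0 / K)          # uniform initial messages
for _ in range(T):
    new = np.empty_like(alpha)
    for m in range(M):                    # linear in M per time step
        others = np.delete(alpha, m, axis=0)[:, 1]   # P(x_j = 1), j != m
        new[m] = alpha[m] @ projected_kernel(others)
    alpha = new
```

Each update touches one K x K projected kernel per component instead of the full $K^M \times K^M$ joint kernel.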
In policy improvement, we maximize the log expected future reward over the parameter $\theta$ with the marginals inferred in policy evaluation, via a gradient ascent update, or more aggressively by setting the gradient to zero. This objective can be simplified by dropping irrelevant terms and keeping only those involving the policy. The gradient is obtained from the chain rule and the messages through dynamic programming:
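The policy-improvement loop can be sketched on a toy two-state MDP with a softmax policy; here the gradient of the expected reward is taken by central finite differences as a generic stand-in for the message-based gradient, and all numbers are invented.

```python
import numpy as np

# Toy MDP: P[a, s, s'] transition, R[s] reward, softmax policy pi_theta.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.6, 0.4]]])
R = np.array([1.0, 0.0])
gamma = 0.9

def value(theta):
    """Exact expected discounted reward from a uniform start state."""
    pi = np.exp(theta) / np.exp(theta).sum(axis=1, keepdims=True)
    P_pi = np.einsum("sa,ast->st", pi, P.transpose(1, 0, 2))
    V = np.linalg.solve(np.eye(2) - gamma * P_pi, R)
    return float(V @ np.array([0.5, 0.5]))

# Policy improvement: gradient ascent on value(theta); the gradient is
# estimated by central finite differences for clarity.
theta, eps, lr = np.zeros((2, 2)), 1e-5, 0.1
for _ in range(500):
    grad = np.zeros_like(theta)
    for idx in np.ndindex(*theta.shape):
        d = np.zeros_like(theta)
        d[idx] = eps
        grad[idx] = (value(theta + d) - value(theta - d)) / (2 * eps)
    theta += lr * grad
```

In the actual algorithm, the finite-difference step is replaced by the analytic gradient assembled from the forward and backward messages.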
In summary, we give our optimal control algorithm of complex systems as Algorithm 1.
In the above, we developed a DEDP for modeling complex system decision-making problems and derived an expectation propagation algorithm for solving the DEDP. Our algorithm is also applicable to a Markov decision process with other simulation models. While we used a discounted expected total future reward in the previous derivation, our framework also applies to other types of expected future reward, such as a finite-horizon future reward, for which we use a different probability distribution over the time length.
In this experiment, we benchmark Algorithm 1 against several decision-making algorithms for finding the optimal policy in a complex system.
Overview: The complex social system in this example is a transportation system. The goal is to optimize the policy such that each vehicle arrives at the correct facility at the correct time (being at work during work hours and at home during rest hours) and spends minimum time on roads. We formulate the transportation optimal control problem as a discrete event decision process. The state variables represent the populations at the locations and the current time. All events are of the form of an individual moving from one location to another with a certain rate (probability per unit time), decreasing the population at the origin by one and increasing the population at the destination by one. We also introduce an auxiliary event that doesn't change any system state, and set the rates of leaving facilities and selecting alternative downstream links as action variables. We implement the state transitions following the fundamental diagram of traffic flow Horni et al. (2016b), which simulates the movement of vehicles. The reward function emulates the Charypar-Nagel scoring function in transportation research Horni et al. (2016b), rewarding performing the correct activities at facilities and penalizing traveling on roads, with score coefficients weighting the two terms. We implement the deterministic policy as a function of states through a neural network and apply Algorithm 1 to find the optimal policy parameter $\theta^*$.
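A hedged sketch of the activity-scoring idea described above; the coefficients and state encoding below are made up, and the actual Charypar-Nagel function used in MATSim has additional terms.

```python
# Toy activity scoring: reward individuals performing the correct
# activity at facilities, penalize time spent on roads. The score
# coefficients beta_act and beta_travel are hypothetical.
beta_act, beta_travel = 6.0, -6.0

def step_reward(state, hour):
    """state: dict of counts at 'home', at 'work', and on 'roads'."""
    work_hours = 9 <= hour < 17
    correct = state["work"] if work_hours else state["home"]
    return beta_act * correct + beta_travel * state["roads"]

# At 10 am, 85 people at work and 5 on roads score well; the same
# distribution at 8 pm would score poorly.
r = step_reward({"home": 10, "work": 85, "roads": 5}, hour=10)
```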
Benchmark Model Description: The transition kernel of an MDP in this general form is too complicated to be modeled exactly with an analytical form, due to the high-dimensional state-action space and the complex system dynamics. As such, we benchmark against the following algorithms: (1) an analytical approach based on an MDP that uses a Taylor expansion to approximate the intractable transition dynamics with differential equations, which leads to a guided policy search (GPS) algorithm Montgomery and Levine (2016) with known dynamics that uses an iterative linear quadratic regulator (iLQR) for trajectory optimization and supervised learning for training a global policy, implemented as a five-layer neural network. The other aforementioned approximations Peyrard and Sabbadin (2006); Sabbadin et al. (2012); Cheng et al. (2013) are not applicable because in their settings each component takes an action, resulting in local policies for each component, whereas in our problem the action is taken by the system as a whole, so there are no local policies. (2) A simulation approach that reproduces the dynamic flow by sampling state-action trajectories from the current policy and the system transition dynamics, which leads to a policy gradient (PG) algorithm, whose policy is implemented as a four-layer neural network. (3) An actor-critic (AC) algorithm Lillicrap et al. (2015) that implements the policy as an actor network with four layers and the state-action value function as a critic network with five layers.
Performance and Efficiency: We benchmark the algorithms using the SynthTown scenario (Fig. 3), which has one home facility, one work facility, 23 road links, and individuals going to the work facility in the morning (9 am) and the home facility in the evening (5 pm). A training episode is one day. This scenario is small enough for studying the details of the different algorithms.
In Figure 4, the value-epoch curve of VI (our algorithm) dominates those of the other algorithms almost everywhere. Table 1 indicates that VI requires the fewest training epochs to converge to the highest total rewards per episode. Figure 5 presents the average vehicle distribution over ten runs at different locations (Home (h), Work (w), and roads 1-23) using the policy learned by each algorithm. This figure implies that the policy learned by VI leads to the largest number of vehicles being at work during work hours (9 am to 5 pm) and the least amount of time spent by vehicles on roads.
In the SynthTown scenario, high rewards require the individuals to perform the correct activities (home, work, and so on) at the right time and to spend less time on roads. VI achieves the best performance by evaluating a policy over the whole state-action space with the variational approximation — evolving each state variable in the mean field of the other state variables. Modeling the complex system transition dynamics of an MDP analytically with a Taylor approximation introduces modeling errors (GPS). Value estimation from Monte Carlo integration in a high-dimensional state space has high variance (PG). A small perturbation of the policy can significantly change the immediate reward and the value at later steps, which makes it difficult to estimate the correct value from sampled trajectories, and as a result difficult to compute the correct value gradient (AC).
We also benchmark the performance of all algorithms on the Berlin scenario, which consists of a network of 1,530 locations and the trips of synthesized individuals Horni et al. (2016b). Table 1 shows that VI outperformed the alternatives in total rewards per episode, while the other algorithms did not even converge in a reasonable number of epochs.
In summary, VI outperformed guided policy search, policy gradient, and actor-critic algorithms in all scenarios. These benchmark algorithms provide comparable results on a small dataset such as the SynthTown scenario, but become difficult to train when applied to a larger dataset such as Berlin.
5. Related Work
A number of prior works have explored the connection between decision making and probabilistic inference. The earliest such research is the Kalman duality proposed by Rudolf Kalman Kalman (1960). Subsequent works have expanded the duality in stochastic optimal control, MDP planning, and reinforcement learning. Todorov and Kappen expanded the connection between control and inference by restricting the immediate cost and identifying the stochastic optimal control problems as linearly-solvable MDPs Todorov (2007) or KL control problems Kappen et al. (2012). However, the passive dynamics in a linearly-solvable MDP and the uncontrolled dynamics in a KL control problem become intractable in a social system scenario due to the complicated system dynamics. Toussaint, Hoffman, and colleagues broadened the connection by framing planning in an MDP as inference in a mixture of PGMs Toussaint and Storkey (2006). The exact Furmston and Barber (2012) and variational inference Furmston and Barber (2010) approaches in their framework encounter the transition dynamics intractability issue in a social network setting due to the complicated system dynamics. Their sampling-based alternatives Hoffman et al. (2009) experience the high-variance problem in a social network due to the exploding state space. Levine, Ziebart, and colleagues widened the connection by establishing the equivalence between maximum entropy reinforcement learning, where the standard reward objective is augmented with an entropy term Ziebart (2010), and probabilistic inference in a PGM Ziebart (2010); Levine (2018). They optimize a different objective function than our method. Our approach is an extension of Toussaint's formulation Toussaint and Storkey (2006), but differs in that we establish the duality with a DEDP, and provide new insights into the duality from the perspective of convex conjugate duality.
Compared with the existing exact inference Furmston and Barber (2012) and approximate inference solutions Hoffman et al. (2009); Furmston and Barber (2010), our algorithm is more efficient with gathering the messages, and more scalable with the use of Bethe entropy approximation.
Other formulations of decision-making problems similar to our framework are the option network Sutton et al. (1999) and the multi-agent Markov decision process (MMDP) Sigaud and Buffet (2013). An option network extends a traditional Markov decision process with options — closed-loop policies for taking actions over a period of time — with the goal of providing a temporal abstraction of the representation. In our framework, we introduce an auxiliary variable with the goal of providing a more accurate and efficient modeling of the system dynamics. An MMDP represents sequential decision-making problems in cooperative multi-agent settings. There are two fundamental differences between our framework and an MMDP. First, unlike an MMDP, where each agent takes an action at each time step, in a DEDP the actions are taken by the system, and the dimensionality of the states does not necessarily equal the dimensionality of the actions. Second, unlike an MMDP, where each agent has its own information, a DEDP models the decision making of a large system where a single controller has all the information and makes decisions for the entire system.
In this paper, we have developed the DEDP for modeling complex system decision-making processes. We reduce the optimal control of a DEDP to variational inference and parameter learning via a variational duality theorem. Our framework is capable of optimally controlling real-world complex systems, as demonstrated by experiments in a transportation scenario.
7.1. Derivation of Theorem 1
7.2. Derivation of Eq. (1)
Rearranging the terms and applying the approximations described in the main text, the target becomes the following:
7.3. Derivation to Solve Eq. (1)
We solve this maximization problem with the method of Lagrange multipliers.
Taking derivatives with respect to the marginals and the multipliers, and setting them to zero, we get
Marginalizing over , we have