### muzero

A simple implementation of MuZero algorithm for connect4 game


Constructing agents with planning capabilities has long been one of the main challenges in the pursuit of artificial intelligence. Tree-based planning methods have enjoyed huge success in challenging domains, such as chess and Go, where a perfect simulator is available. However, in real-world problems the dynamics governing the environment are often complex and unknown. In this work we present the MuZero algorithm which, by combining a tree-based search with a learned model, achieves superhuman performance in a range of challenging and visually complex domains, without any knowledge of their underlying dynamics. MuZero learns a model that, when applied iteratively, predicts the quantities most directly relevant to planning: the reward, the action-selection policy, and the value function. When evaluated on 57 different Atari games - the canonical video game environment for testing AI techniques, in which model-based planning approaches have historically struggled - our new algorithm achieved a new state of the art. When evaluated on Go, chess and shogi, without any knowledge of the game rules, MuZero matched the superhuman performance of the AlphaZero algorithm that was supplied with the game rules.


Planning algorithms based on lookahead search have achieved remarkable successes in artificial intelligence. Human world champions have been defeated in classic games such as checkers [schaeffer:chinook], chess [campbell:deep-blue], Go [Silver16AG] and poker [brown2018superhuman, deepstack], and planning algorithms have had real-world impact in applications from logistics [vlahavas2013planning] to chemical synthesis [alphachem]. However, these planning algorithms all rely on knowledge of the environment’s dynamics, such as the rules of the game or an accurate simulator, preventing their direct application to real-world domains like robotics, industrial control, or intelligent assistants.

Model-based reinforcement learning (RL) [sutton:book] aims to address this issue by first learning a model of the environment's dynamics, and then planning with respect to the learned model. Typically, these models have either focused on reconstructing the true environmental state [pilco:deisenroth, heess:stochastic_value_gradients, levine:learning_guided_policy], or the sequence of full observations [hafner:planet, kaiser:simple]. However, prior work [state_space_models, hafner:planet, kaiser:simple] remains far from the state of the art in visually rich domains, such as Atari 2600 games [ALE]. Instead, the most successful methods are based on model-free RL [impala, r2d2, apex], i.e. they estimate the optimal policy and/or value function directly from interactions with the environment. However, model-free algorithms are in turn far from the state of the art in domains that require precise and sophisticated lookahead, such as chess and Go.

In this paper, we introduce *MuZero*, a new approach to model-based RL that achieves state-of-the-art performance in Atari 2600, a visually complex set of domains, while maintaining superhuman performance in precision planning tasks such as chess, shogi and Go.
*MuZero* builds upon *AlphaZero*’s [Silver18AZ] powerful search and search-based policy iteration algorithms, but incorporates a learned model into the training procedure. *MuZero* also extends *AlphaZero* to a broader set of environments including single agent domains and non-zero rewards at intermediate time-steps.

The main idea of the algorithm (summarized in Figure 1) is to predict those aspects of the future that are directly relevant for planning. The model receives the observation (e.g. an image of the Go board or the Atari screen) as an input and transforms it into a hidden state. The hidden state is then updated iteratively by a recurrent process that receives the previous hidden state and a hypothetical next action. At every one of these steps the model predicts the policy (e.g. the move to play), value function (e.g. the predicted winner), and immediate reward (e.g. the points scored by playing a move). The model is trained end-to-end, with the sole objective of accurately estimating these three important quantities, so as to match the improved estimates of policy and value generated by search as well as the observed reward. There is no direct constraint or requirement for the hidden state to capture all information necessary to reconstruct the original observation, drastically reducing the amount of information the model has to maintain and predict; nor is there any requirement for the hidden state to match the unknown, true state of the environment; nor any other constraints on the semantics of state. Instead, the hidden states are free to represent state in whatever way is relevant to predicting current and future values and policies. Intuitively, the agent can invent, internally, the rules or dynamics that lead to most accurate planning.

Reinforcement learning may be subdivided into two principal categories: model-based, and model-free [sutton:book]. Model-based RL constructs, as an intermediate step, a model of the environment. Classically, this model is represented by a Markov decision process (MDP) [puterman:MDP] consisting of two components: a state transition model, predicting the next state, and a reward model, predicting the expected reward during that transition. The model is typically conditioned on the selected action, or a temporally abstract behavior such as an option [sutton:between]. Once a model has been constructed, it is straightforward to apply MDP planning algorithms, such as value iteration [puterman:MDP] or Monte-Carlo tree search (MCTS) [coulom:mcts], to compute the optimal value or optimal policy for the MDP. In large or partially observed environments, the algorithm must first construct the state representation that the model should predict. This tripartite separation between representation learning, model learning, and planning is potentially problematic since the agent is not able to optimize its representation or model for the purpose of effective planning, so that, for example, modeling errors may compound during planning.

A common approach to model-based RL focuses on directly modeling the observation stream at the pixel level. It has been hypothesized that deep, stochastic models may mitigate the problems of compounding error [hafner:planet, kaiser:simple]. However, planning at pixel-level granularity is not computationally tractable in large scale problems. Other methods build a latent state-space model that is sufficient to reconstruct the observation stream at pixel level [wahlstrom:pixels_to_torques, watter:embed_to_control], or to predict its future latent states [ha:world_model, gelada:deepmdp], which facilitates more efficient planning but still focuses the majority of the model capacity on potentially irrelevant detail. None of these prior methods has constructed a model that facilitates effective planning in visually complex domains such as Atari; results lag behind well-tuned, model-free methods, even in terms of data efficiency [hado:replay].
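As a concrete illustration of classical MDP planning with a known model, the sketch below runs value iteration on a tiny hand-made MDP. The states, transitions and rewards are invented for illustration; nothing here comes from the paper:

```python
import numpy as np

# Hypothetical toy MDP: 3 states, 2 actions, deterministic transitions.
# State 2 is absorbing with zero reward; all numbers are illustrative.
P = np.array([[1, 2],
              [2, 0],
              [2, 2]])            # P[s, a] -> next state
R = np.array([[0.0, 1.0],
              [0.0, 0.0],
              [0.0, 0.0]])        # R[s, a] -> immediate reward
gamma = 0.9

def value_iteration(P, R, gamma, tol=1e-8):
    """Iterate the Bellman optimality backup until the values converge."""
    V = np.zeros(len(P))
    while True:
        Q = R + gamma * V[P]      # Q[s, a] = R[s, a] + gamma * V[next state]
        V_new = Q.max(axis=1)
        if np.abs(V_new - V).max() < tol:
            return V_new, Q.argmax(axis=1)   # optimal values, greedy policy
        V = V_new

V_star, pi_star = value_iteration(P, R, gamma)
print(V_star)   # [1.  0.9 0. ]
```

With a learned rather than given P and R, errors in the model feed directly into every backup, which is the compounding-error problem noted above.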

A quite different approach to model-based RL has recently been developed, focused end-to-end on predicting the value function [silver:predictron]. The main idea of these methods is to construct an abstract MDP model such that planning in the abstract MDP is equivalent to planning in the real environment. This equivalence is achieved by ensuring *value equivalence*, i.e. that, starting from the same real state, the cumulative reward of a trajectory through the abstract MDP matches the cumulative reward of a trajectory in the real environment.

The predictron [silver:predictron] first introduced value-equivalent models for predicting value (without actions). Although the underlying model still takes the form of an MDP, there is no requirement for its transition model to match real states in the environment. Instead the MDP model is viewed as a hidden layer of a deep neural network. The unrolled MDP is trained such that the expected cumulative sum of rewards matches the expected value with respect to the real environment, e.g. by temporal-difference learning.

Value-equivalent models were subsequently extended to optimising value (with actions). TreeQN [farquhar:treeqn] learns an abstract MDP model, such that a tree search over that model (represented by a tree-structured neural network) approximates the optimal value function. Value iteration networks [aviv:vin] learn a local MDP model, such that value iteration over that model (represented by a convolutional neural network) approximates the optimal value function. Value prediction networks [oh:vpn] are perhaps the closest precursor to *MuZero*.

We now describe the *MuZero* algorithm in more detail. Predictions are made at each time-step $t$, for each of $K$ steps, by a model $\mu_\theta$, with parameters $\theta$, conditioned on past observations $o_1, \ldots, o_t$ and future actions $a_{t+1}, \ldots, a_{t+k}$. The model predicts three future quantities: the policy $p^k_t \approx \pi(a_{t+k+1} \mid o_1, \ldots, o_t, a_{t+1}, \ldots, a_{t+k})$, the value function $v^k_t \approx \mathbb{E}[u_{t+k+1} + \gamma u_{t+k+2} + \ldots \mid o_1, \ldots, o_t, a_{t+1}, \ldots, a_{t+k}]$, and the immediate reward $r^k_t \approx u_{t+k}$, where $u$ is the true, observed reward, $\pi$ is the policy used to select real actions, and $\gamma$ is the discount function of the environment.

Internally, at each time-step $t$ (subscripts $t$ suppressed for simplicity), the model is represented by the combination of a *representation* function, a *dynamics* function, and a *prediction* function. The dynamics function, $r^k, s^k = g_\theta(s^{k-1}, a^k)$, is a recurrent process that computes, at each hypothetical step $k$, an immediate reward $r^k$ and an internal state $s^k$. It mirrors the structure of an MDP model that computes the expected reward and state transition for a given state and action [puterman:MDP]. However, unlike traditional approaches to model-based RL [sutton:book], this internal state $s^k$ has no semantics of environment state attached to it – it is simply the hidden state of the overall model, and its sole purpose is to accurately predict relevant, future quantities: policies, values, and rewards. In this paper, the *dynamics* function is represented deterministically; the extension to stochastic transitions is left for future work. The policy and value functions are computed from the internal state $s^k$ by the prediction function, $p^k, v^k = f_\theta(s^k)$, akin to the joint policy and value network of *AlphaZero*. The “root” state $s^0$ is initialized using a representation function that encodes past observations, $s^0 = h_\theta(o_1, \ldots, o_t)$; again this has no special semantics beyond its support for future predictions.
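The three functions can be sketched as follows. This is a minimal numpy mock-up in which random linear maps stand in for the paper's deep networks; all sizes are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
OBS, HID, ACTS = 4, 8, 3          # illustrative sizes, not from the paper

W_h = rng.normal(size=(OBS, HID))             # representation h_theta
W_g = rng.normal(size=(HID + ACTS, HID + 1))  # dynamics g_theta
W_f = rng.normal(size=(HID, ACTS + 1))        # prediction f_theta

def represent(obs):
    """s^0 = h_theta(o_1..o_t): encode observations into the root hidden state."""
    return np.tanh(obs @ W_h)

def dynamics(state, action):
    """r^k, s^k = g_theta(s^{k-1}, a^k): recurrent transition in hidden space."""
    x = np.concatenate([state, np.eye(ACTS)[action]])
    out = np.tanh(x @ W_g)
    return out[-1], out[:-1]                  # (predicted reward, next state)

def predict(state):
    """p^k, v^k = f_theta(s^k): policy and value from the hidden state."""
    out = state @ W_f
    logits, value = out[:-1], out[-1]
    policy = np.exp(logits - logits.max())    # softmax over action logits
    return policy / policy.sum(), value

# Unroll the model over a hypothetical action sequence.
s = represent(rng.normal(size=OBS))
for a in [0, 2, 1]:
    p, v = predict(s)                         # policy and value at this step
    r, s = dynamics(s, a)                     # take one hypothetical step
```

Note that the unroll never touches the environment: after the root encoding, every step happens purely in the model's hidden-state space.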

Given such a model, it is possible to search over hypothetical future trajectories $a^1, \ldots, a^k$ given past observations $o_1, \ldots, o_t$. For example, a naive search could simply select the $k$-step action sequence that maximizes the value function. More generally, we may apply any MDP planning algorithm to the internal rewards and state space induced by the dynamics function. Specifically, we use an MCTS algorithm similar to *AlphaZero*’s search, generalized to allow for single agent domains and intermediate rewards (see Methods). At each internal node, it makes use of the policy, value and reward estimates produced by the current model parameters $\theta$. The MCTS algorithm outputs a recommended policy $\pi_t$ and estimated value $\nu_t$. An action $a_{t+1} \sim \pi_t$ is then selected.
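The naive search mentioned above can be sketched directly. The toy "learned model" here is a pair of hand-written functions standing in for the dynamics and value heads, purely for illustration:

```python
import itertools

GAMMA = 0.997

# Hypothetical stand-ins for the learned dynamics and value functions.
def dynamics(state, action):          # r, s' = g(s, a)
    return float(action == 1), 0.5 * state + action   # reward only for action 1

def value(state):                     # v = f(s), value head only
    return 0.1 * state

def naive_search(root_state, num_actions=2, k=3):
    """Score every k-step action sequence under the model: discounted
    predicted rewards plus a discounted final value estimate."""
    best_score, best_seq = float("-inf"), None
    for seq in itertools.product(range(num_actions), repeat=k):
        s, ret = root_state, 0.0
        for i, a in enumerate(seq):
            r, s = dynamics(s, a)
            ret += GAMMA**i * r
        ret += GAMMA**k * value(s)    # bootstrap from the value function
        if ret > best_score:
            best_score, best_seq = ret, seq
    return best_seq, best_score

seq, score = naive_search(root_state=0.0)
print(seq)   # (1, 1, 1): the reward-bearing action at every step
```

This enumeration is exponential in the search depth; MCTS replaces it with a guided, incremental tree expansion over the same model-induced state space.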

All parameters of the model are trained jointly to accurately match the policy, value, and reward, for every hypothetical step $k$, to corresponding target values observed after $k$ actual time-steps have elapsed. Similarly to *AlphaZero*, the improved policy targets are generated by an MCTS search; the first objective is to minimise the error between predicted policy $p^k_t$ and search policy $\pi_{t+k}$. Also like *AlphaZero*, the improved value targets are generated by playing the game or MDP. However, unlike *AlphaZero*, we allow for long episodes with discounting and intermediate rewards by *bootstrapping* $n$ steps into the future from the search value, $z_t = u_{t+1} + \gamma u_{t+2} + \ldots + \gamma^{n-1} u_{t+n} + \gamma^n \nu_{t+n}$. Final outcomes (lose, draw or win) in board games are treated as rewards $u_t \in \{-1, 0, +1\}$ occurring at the final step of the episode. Specifically, the second objective is to minimize the error between the predicted value $v^k_t$ and the value target $z_{t+k}$ (for chess, Go and shogi, the same squared error loss as *AlphaZero* is used for rewards and values; a cross-entropy loss was found to be more stable than squared error when encountering rewards and values of variable scale in Atari; cross-entropy was used for the policy loss in both cases). The reward targets are simply the observed rewards; the third objective is therefore to minimize the error between the predicted reward $r^k_t$ and the observed reward $u_{t+k}$. Finally, an L2 regularization term is also added, leading to the overall loss:

$$l_t(\theta) = \sum_{k=0}^{K} l^r(u_{t+k}, r^k_t) + l^v(z_{t+k}, v^k_t) + l^p(\pi_{t+k}, p^k_t) + c \|\theta\|^2 \qquad (1)$$

where $l^r$, $l^v$ and $l^p$ are loss functions for reward, value and policy respectively. Supplementary Figure LABEL:fig:muzero_equations summarizes the equations governing how the
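The bootstrapped value target and the summed training loss can be sketched as follows, using squared error for reward and value (the board-game setting) and cross-entropy for the policy. The helper names and data layout are invented for illustration:

```python
import numpy as np

def n_step_value_target(rewards, search_values, t, n, gamma):
    """z_t = u_{t+1} + ... + gamma^{n-1} u_{t+n} + gamma^n * nu_{t+n}:
    bootstrap n steps into the future from the search value."""
    z = sum(gamma**i * rewards[t + 1 + i] for i in range(n))
    return z + gamma**n * search_values[t + n]

def total_loss(preds, targets, theta, c=1e-4):
    """Sum the reward, value and policy losses over the K+1 unroll steps,
    plus an L2 regularization term on the parameters."""
    total = 0.0
    for (r, v, p), (u, z, pi) in zip(preds, targets):   # steps k = 0..K
        total += (u - r) ** 2                 # reward loss (squared error)
        total += (z - v) ** 2                 # value loss (squared error)
        total += -np.sum(pi * np.log(p))      # policy loss (cross-entropy)
    return total + c * np.sum(theta ** 2)     # L2 regularization

z = n_step_value_target([0.0, 1.0, 1.0, 1.0], [0.5] * 4, t=0, n=2, gamma=0.5)
print(z)   # 1.0 + 0.5*1.0 + 0.25*0.5 = 1.625
```

For Atari, the paper replaces the squared-error terms with cross-entropy over a categorical encoding of rewards and values, which it found more stable at variable scales.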