In psychology the state of flow has been linked to optimizing human problem-solving performance. The key condition for flow to emerge is a match between the agent's abilities and the complexity of the problem the agent is solving. In people, flow manifests itself as a feeling of happiness which attracts them to tasks that fully engage their abilities. Thus maximizing flow can be viewed as a guide to improving performance.
In this paper we present a simple computational approach to equipping Artificial Intelligence (AI) agents with a sense of flow. To do so we factor the state in the usual agent-in-an-environment framework into a self-reflective part (the agent's abilities) and an objective part (the environmental complexity). The AI agent's control policy is then augmented with a flow-maximizing meta-control module which guides the agent to areas of the environment where its abilities match the environmental complexity. There the agent's base control policy has the potential to perform well.
In the past this approach was applied to Reinforcement Learning (RL) agents, where flow maximization was implemented via, essentially, an additional flow reward signal. The agent then maximized a linear combination of its usual cumulative reward and the expected value of the flow return. The flow return was defined as the reciprocal of the absolute difference between the agent's ability (a scalar) and the environmental complexity (also a scalar). Both scalar variables were hand-coded into the agent. In a simple synthetic environment, a flow-maximizing RL agent outperformed a baseline.
In this paper we address the two primary limitations of the published work: (i) the assumption of an RL agent architecture and (ii) the requirement that an accurate environmental complexity be hand-coded into the agent. First, we apply flow-maximizing meta control to a broad class of base control policies, extending the applicability beyond RL. Second, we propose a way for the agent to learn the environmental complexity by observing other agents. We illustrate our ideas on a simple synthetic problem and discuss possible extensions.
2 Problem Formulation
2.1 Restrictions on the Problem
To apply flow-driven meta control to AI agent architectures beyond RL we impose certain restrictions on the environment the agents operate in. We represent the environment as a Markov Decision Process (MDP) which consists of a set of states $S$, a set of actions $A$ and a
transition probability function $T(s' \mid s, a)$. We assume a partitioning of the state set $S$ into $N+1$ disjoint subsets, or levels, $L_0, \dots, L_N$:

$$S = \bigcup_{i=0}^{N} L_i, \qquad L_i \cap L_j = \emptyset \text{ for } i \neq j.$$
The agent's start state $s_0$ is at level $0$ ($s_0 \in L_0$). At each discrete time step $t$ the agent takes an action $a_t$ which brings the agent to the next state $s_{t+1}$. Formally, the state $s_{t+1}$ is drawn from $S$ according to the transition probability $T(s_{t+1} \mid s_t, a_t)$. We denote this as: $s_{t+1} \sim T(\cdot \mid s_t, a_t)$.
While this formulation allows for episodic as well as non-episodic tasks, in the rest of the paper we work with a special case of this problem: an episodic stochastic shortest path. Specifically, the agent's task is to reach the highest level $L_N$ quickly and reliably. The agent starts in the start state $s_0$ and runs until either reaching a state at level $L_N$ or dying (i.e., transitioning to a designated death state $s_\dagger$). We incorporate the death state into the MDP as an absorbing state:

$$\forall a \in A: \; T(s_\dagger \mid s_\dagger, a) = 1.$$
2.2 Performance Measure
To quantify the agent's performance, we reward the agent with its current level $l(s_t)$ at each state $s_t$. If the agent dies before it reaches $L_N$ then it forfeits its entire accumulated reward. Thus the agent's life-time return is:

$$R = \begin{cases} \sum_{t} l(s_t) & \text{if the agent reaches } L_N, \\ 0 & \text{if the agent dies.} \end{cases}$$
Suppose an agent reached $L_N$ at time $t_N$. If $t_N$ happens to be below the time horizon $t_{\text{final}}$ then we continue to reward the agent with $N$ for each time step between $t_N$ and $t_{\text{final}}$. This allows us to compare two agents as shown in Figure 1. There, the first agent reaches $L_N$ at time $t_1$ and then remains at that level until the time $t_{\text{final}}$, collecting the reward of $N$ at each time step between $t_1$ and $t_{\text{final}}$. The second agent reaches $L_N$ at a later time $t_2$ and also receives the reward of $N$ for each subsequent time step. The returns the two agents collect are the areas under their level-ascension curves.
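The performance measure above can be sketched in a few lines of Python. This is a minimal sketch under our reading of the definition: a per-step reward equal to the current level, a forfeited return on death, and a constant top-level reward collected until the horizon. The function name and arguments are ours.

```python
def lifetime_return(levels, died, t_final, top_level):
    """levels: the agent's level at each time step until it stops ascending.
    died: whether the agent transitioned to the death state.
    t_final: the time horizon; top_level: the index N of the highest level."""
    if died:
        return 0  # the agent forfeits its entire accumulated reward
    total = sum(levels)  # reward at each step equals the current level
    # If the agent reached the top level before the horizon, it keeps
    # collecting the top-level reward for each remaining time step.
    if levels and levels[-1] == top_level:
        total += top_level * (t_final - len(levels))
    return total
```

An agent ascending one level per step to $N = 3$ and then staying there until the horizon accumulates the area under its level-ascension curve, matching the Figure 1 comparison.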
Note that while we use rewards to define the agent's performance, we do not assume that the agent has access to these rewards. Thus, we are not restricting the agent architecture to Reinforcement Learning as was done in the past.
2.3 Restrictions on the Agent Design
We consider the problem of meta control by assuming that the agent already has a control policy $\pi$. We restrict $\pi$ so that it never moves the agent between levels: if $s \in L_i$ then $\pi$ keeps the agent within $L_i$. Moving between levels is accomplished with a meta-control policy $\mu$ whose actions either keep the agent in the same state, move it to a state in another level, or cause its death.
Any agent can use both the control policy and the meta-control policy as shown in Algorithm 1. In the algorithm's main loop, a sequence of states ending in either the agent's death or in the target level is generated by successively applying the meta-control policy and the control policy.
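The alternation of meta control and base control can be sketched as follows. This is a hypothetical rendering of the main loop, not Algorithm 1 verbatim; `level_of`, `death_state`, and the callable policies are our assumed interface.

```python
def run_episode(s0, control, meta_control, level_of, top_level, death_state):
    """Alternate the meta-control policy (which may change the level or kill
    the agent) with the base control policy (which acts within a level)
    until the agent dies or reaches the top level."""
    s = s0
    trajectory = [s]
    while s != death_state and level_of(s) < top_level:
        s = meta_control(s)  # may keep the state, move levels, or kill
        if s == death_state:
            break
        s = control(s)       # acts within the current level only
        trajectory.append(s)
    return trajectory
```

With a toy environment where each integer state is its own level, a meta control that always ascends one level, and an identity base control, the loop produces the expected ascension sequence.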
3 Related Work
Meta-control policies have been an important element of AI since its early days. The classic A* algorithm uses a heuristic to control its search at the base level and breaks ties towards higher g-costs at the meta-control level. Pathfinding algorithms often use heuristic search (e.g., A*) as the base control policy but meta-control it with another search [8, 3] or with case-based reasoning. Hierarchical control can also be used to solve MDPs more efficiently.
Existing meta-control policies are diverse and specific to their underlying control policies. Thus they cannot always be ported across different base control policies or architectures. We address this shortcoming by suggesting a single, simple meta-control policy that explicitly factors the agent's state into a self-reflective part (the agent's abilities) and an objective, socially learned part (the environmental complexity). Doing so de-couples our meta-control approach from the underlying control policy and thus makes it applicable to a broad range of AI architectures.
4 Our Approach
As argued in the introduction, flow-maximizing agents attempt to position themselves in the areas of the environment where their abilities match the complexity of the environment. In our formalization, the areas of the environment are levels and the positioning happens via a meta-control policy which guides the agent to the appropriate level. Hence reasoning about flow happens within meta control.
Generally speaking, giving an agent the ability to position itself in an area of the environment of its own choice may interfere with the agent's reaching a designer-specified goal. In this study we assume that the environment is such that building up the agent's abilities at lower levels makes the agent more capable of tackling higher levels, all the way to the goal level $L_N$. This assumption holds for many common tasks (e.g., sports).
In line with previous work on flow in AI, we define the degree of flow as the quality of the match between the agent's abilities and the environmental complexity. The flow-maximizing meta-control policy then guides the agent to the level of the environment for which the agent is currently most suited. There, the control policy has the best chance to maximize its performance. As the agent's abilities increase over time, flow-maximizing meta control guides the agent to higher levels of the environment.
The complexity of a level can be determined via social learning: the agent observes performance of other agents which have visited the level before it. The minimum abilities that were sufficient to reach the highest level starting at the given level are then taken as the complexity of that level.
4.2 Algorithmic Details
4.2.1 Agent’s Abilities.
4.2.2 Problem Complexity.
The problem complexity $c_i$ at level $L_i$ is defined as the minimum agent abilities needed to solve the problem (i.e., reach the final level $L_N$) from level $L_i$ with a high probability.
The agent can estimate the complexity of level $L_i$ by observing other agents at that level and recording their abilities. It then filters out all agents that did not reach $L_N$ and selects the minimum among the remaining abilities. To illustrate: suppose three agents operated at level $L_i$ with ability vectors $a^{(1)}$, $a^{(2)}$ and $a^{(3)}$, and suppose the last agent died before reaching $L_N$ while the first two survived. The complexity of level $L_i$ is then estimated as the per-component minimum of the vectors $a^{(1)}$ and $a^{(2)}$.
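The estimation step can be sketched directly. The data layout (a list of ability-vector/outcome pairs) and the example ability values in the test are our hypothetical choices; the per-component minimum over survivors is the paper's rule.

```python
def estimate_complexity(observations):
    """observations: list of (ability_vector, reached_top) pairs recorded
    from other agents that operated at this level. Returns the per-component
    minimum over the agents that went on to reach the top level."""
    survivors = [a for a, reached in observations if reached]
    if not survivors:
        return None  # not enough social data to estimate this level
    return tuple(min(components) for components in zip(*survivors))
```

For example, with survivors holding abilities (3, 5) and (4, 2) and one non-survivor, the estimate is the per-component minimum (3, 2) of the two surviving vectors.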
This approach is based on three assumptions. First, we assume that higher values in the ability vector indicate a higher probability of reaching $L_N$. Second, we assume that the complexity of a level is uni-modal and can thus be expressed by a single vector of required abilities. Third, we assume that the collected set of other agents' abilities covers the space of abilities well enough to reliably estimate the required abilities for a level. We will challenge some of these assumptions in the future work section.
4.2.3 Degree of Flow.
The model of flow we use is an extension of existing work. The degree of flow is defined in terms of the divergence between the agent's abilities $a$ and the complexity $c$ of the level the agent is at. Mathematically, the degree of flow is $F = 1 / (\|a - c\| + \epsilon)$ where $\|\cdot\|$ is the Euclidean distance and $\epsilon > 0$ is a small constant to avoid division by zero. $F$ reaches its maximum value of $1/\epsilon$ when the agent's abilities match the level complexity precisely.
4.2.4 Meta-control Policy.
If the agent is in state $s$ at level $i$, its meta-control policy considers the current level as well as all neighboring levels in the interval $[i - r, i + r]$ where $r$ is the radius of the neighbourhood. The policy selects the target level as the one maximizing the agent's degree of flow:

$$i^* = \arg\max_{j \in [i-r,\, i+r]} \frac{1}{\|a - c_j\| + \epsilon}.$$
We illustrate our approach by implementing it in a simple testbed. Its synthetic nature gives us fine control over the environment, enabling a clear presentation.
5.1 The Testbed
We consider the agent's abilities and the environmental complexity to be scalars. Further, we assume that the agent's ability is simply its age: $a_t = t$. We focus exclusively on meta control by having each state be its own level, so that the meta-control policy is effectively the only control the agent uses. The probability of dying is defined as:

$$p_\dagger = \min\{1, \; p_0 + \max\{0, \; c_l - a\}\}$$
where $p_0$ is the probability that any agent dies at any given time step, regardless of its abilities and the problem complexity. If the agent is able enough for its level (i.e., $a \geq c_l$) then $p_0$ is the sole contributor to the probability of dying. If the agent's abilities are below the level's complexity (i.e., $a < c_l$) then the probability of dying is the sum of the ambient death probability and the probability of dying from a lack of abilities: $p_0 + (c_l - a)$. The sum is capped at $1$ by taking the minimum.
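Under our reconstruction of the capped sum above (the exact functional form of the ability-shortfall term is not fully recoverable from the text), the death probability can be written as:

```python
def death_probability(ability, complexity, p_ambient):
    """Reconstructed death model: the ambient per-step death probability plus
    the shortfall of ability below the level's complexity, capped at 1."""
    shortfall = max(0.0, complexity - ability)
    return min(1.0, p_ambient + shortfall)
```

An over-qualified agent dies only with the ambient probability; an under-qualified one pays an additional penalty proportional to its shortfall, saturating at certain death.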
If the agent does not die at a given time step then the meta control is able to reliably bring it to any level it specifies.
For the baseline non-flow agents we set the meta-control policy to bring the agent to level $v \cdot t$ at time $t$, with the ascension rate $v$ as a parameter. For instance, a baseline agent with $v = 2$ will be at level $20$ at time step $10$ (if it survives that long).
As described earlier in the paper, the flow agents first learn the complexity of each level by observing multiple probe agents. The probe agents behave as a baseline agent parameterized by an ascension rate $v$ until a randomly selected level. Then they choose a new $v$ randomly and follow it until another randomly selected level. At that point they once again randomize their $v$. Effectively these agents' level-ascension curves are piece-wise linear with two joint points. The complexity at level $l$ is then defined as the lowest ability observed at level $l$ among the probe agents which went on to reach level $L_N$. We remove the lowest few percent of the ability data per level as outliers (i.e., the probe agents that did not have the abilities necessary to reach $L_N$ but reached it nevertheless by luck). Figure 3 compares data-mined and actual complexities of two different environments: a square-root and a quadratic complexity curve.
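The mining step with outlier trimming can be sketched as follows. The record format (a per-level ability map plus a survival flag) and the default trim fraction are our assumptions, not values from the paper.

```python
def mine_complexity(probe_records, num_levels, trim_fraction=0.05):
    """probe_records: list of (abilities_per_level, reached_top) pairs, where
    abilities_per_level maps level -> the ability the probe had when visiting
    that level. Drops the lowest trim_fraction of abilities per level as
    lucky outliers, then takes the minimum of the rest."""
    complexity = {}
    for level in range(num_levels):
        abilities = sorted(
            record[level]
            for record, reached in probe_records
            if reached and level in record)
        if not abilities:
            continue  # no surviving probe visited this level
        k = int(len(abilities) * trim_fraction)  # outliers to discard
        complexity[level] = abilities[k]  # minimum of the remaining data
    return complexity
```

With a zero trim fraction this reduces to the plain per-level minimum over surviving probes; a small positive fraction discards probes that reached the top by luck despite insufficient abilities.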
Once the complexity is approximated via the per-level minima of the recorded probe-agent abilities, the flow agents use the flow-maximizing meta policy to advance through the levels. In effect, the flow-maximizing agents attempt to follow the mined complexity curve. (To simplify the illustration we made the levels continuous.) Our meta-control policy tried advancing the agent's current level in small increments until it found the maximum of the flow function. The results are found in Table 1.
For the square-root complexity curve, the baseline agents were tried with a range of values of the parameter $v$, and for each value we ran a number of trials. The flow agents used the data-mined complexity curve from Figure 3, with the same number of trials. The returns were computed until the time horizon. As per Table 1, the flow agents outperformed the average return the baseline agents achieved with their best value of $v$. A single trial of the baseline agent with that value is shown in Figure 4 (left), together with a single trial of the flow agent.
For the quadratic complexity curve, the baseline agents were again tried with a range of values of the parameter $v$, with a number of trials per value. The flow agents used the data-mined complexity curve from Figure 3, with the same number of trials. The returns were computed until the time horizon. As per Table 1, the flow agents again outperformed the average return the baseline agents achieved with their best value of $v$. A single trial of the baseline agent with that value is shown in Figure 4 (right), together with a single trial of the flow agent.
6 Future Work
By selecting states where the agent's abilities match the environmental complexity, the flow-maximizing agents outperformed the baseline agents. We used this predictable result to illustrate the approach and now offer a number of interesting future research directions.
In defining the agent's ability vector, it will be interesting to try automated feature selection methods to identify relevant features. In defining the level complexity, one can attempt to use automated clustering methods to deal with multi-modality. For instance, many video games allow different character classes (e.g., strong and slow versus weak and fast) to be equally successful. By clustering the observed data first and then taking the minimum in each cluster, the agent will compute several required ability vectors per level. If data from previous agents are unavailable then the agent can attempt to estimate the level complexity from its own performance at the level. A particularly promising direction may be the dynamics of the temporal-difference (TD) error. Alternatively, level complexity can be innate within agents, evolved over generations, thereby making flow-maximizing meta control an evolutionary adaptation. Finally, it will be interesting to see how well this model of flow correlates with human flow data.
We proposed a simple psychology-inspired meta-control approach based on matching the agent's abilities and the environmental complexity. The approach is applicable to a broad spectrum of existing AI control policies. We factored the usual AI agent-environment state into a self-reflective part and an objective part and applied social learning to determine the latter. We illustrated the approach with a specific implementation in a simple synthetic testbed.
We are grateful to Thórey Maríusdóttir and Matthew Brown for fruitful discussions. We appreciate funding from NSERC.
-  Bulitko, V., Björnsson, Y., Lawrence, R.: Case-based subgoaling in real-time heuristic search for video game pathfinding. Journal of Artificial Intelligence Research (JAIR) 39, 269–300 (2010)
-  Bulitko, V., Brown, M.: Flow maximization as a guide to optimizing performance: A computational model. Advances in Cognitive Systems 2, 239–256 (2012)
-  Bulitko, V., Sturtevant, N., Lu, J., Yau, T.: Graph abstraction in real-time heuristic search. Journal of Artificial Intelligence Research (JAIR) 30, 51–100 (2007)
-  Csikszentmihalyi, M.: Flow: The Psychology of Optimal Experience. Harper Perennial Modern Classics, New York, NY, USA, first edn. (2008)
-  Isaza, A., Szepesvári, C., Bulitko, V., Greiner, R.: Speeding up planning in Markov decision processes via automatically constructed abstractions. In: Proceedings of the 24th Conference on Uncertainty in Artificial Intelligence. pp. 306–314 (2008)
-  Russell, S., Norvig, P.: Artificial Intelligence: A Modern Approach. Prentice Hall, third edn. (2010)
-  Schmidhuber, J.: Self-motivated development through rewards for predictor errors / improvements. In: Proc. of Develop. Robotics AAAI Spring Symp. (2005)
-  Sturtevant, N.: Memory-efficient abstractions for pathfinding. In: Proceedings of Artificial Intelligence and Interactive Digital Entertainment. pp. 31–36 (2007)
-  Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. MIT Press, Cambridge, Massachusetts (1998)