The Off-Switch Game

11/24/2016
by Dylan Hadfield-Menell, et al.
UC Berkeley

It is clear that one of the primary tools we can use to mitigate the potential risk from a misbehaving AI system is the ability to turn the system off. As the capabilities of AI systems improve, it is important to ensure that such systems do not adopt subgoals that prevent a human from switching them off. This is a challenge because many formulations of rational agents create strong incentives for self-preservation. This is not caused by a built-in instinct, but because a rational agent will maximize expected utility and cannot achieve whatever objective it has been given if it is dead. Our goal is to study the incentives an agent has to allow itself to be switched off. We analyze a simple game between a human H and a robot R, where H can press R's off switch but R can disable the off switch. A traditional agent takes its reward function for granted: we show that such agents have an incentive to disable the off switch, except in the special case where H is perfectly rational. Our key insight is that for R to want to preserve its off switch, it needs to be uncertain about the utility associated with the outcome, and to treat H's actions as important observations about that utility. (R also has no incentive to switch itself off in this setting.) We conclude that giving machines an appropriate level of uncertainty about their objectives leads to safer designs, and we argue that this setting is a useful generalization of the classical AI paradigm of rational agents.


1 Introduction

From the 150-plus years of debate concerning potential risks from misbehaving AI systems, one thread has emerged that provides a potentially plausible source of problems: the inadvertent misalignment of objectives between machines and people. Alan Turing, in a 1951 radio address, felt it necessary to point out the challenge inherent to controlling an artificial agent with superhuman intelligence:

“If a machine can think, it might think more intelligently than we do, and then where should we be? Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled. … [T]his new danger is certainly something which can give us anxiety” [Turing:1951].

Figure 1: The structure of the off-switch game. Squares indicate decision nodes for the robot R or the human H.

There has been recent debate about the validity of this concern, relying thus far largely on informal arguments. One important question is how difficult it is to implement Turing’s idea of ‘turning off the power at strategic moments’, i.e., switching a misbehaving agent off (see, e.g., comments in [ITIF2015]). For example, some have argued that there is no reason for an AI to resist being switched off unless it is explicitly programmed with a self-preservation incentive [delPrado2015]. [omohundro2008basic], on the other hand, points out that self-preservation is likely to be an instrumental goal for a robot, i.e., a subgoal that is essential to successful completion of the original objective. Thus, even if the robot is, all other things being equal, completely indifferent between life and death, it must still avoid death if death would prevent goal achievement. Or, as [Russell:2016] puts it, you can’t fetch the coffee if you’re dead. This suggests that an intelligent system has an incentive to take actions that are analogous to ‘disabling an off switch’ to reduce the possibility of failure; switching off an advanced AI system may be no easier than, say, beating AlphaGo at Go.

To explore the validity of these informal arguments, we need to define a formal decision problem for the robot and examine the solutions, varying the problem structure and parameters to see how they affect the behaviors. We model this problem as a game between a human and a robot. The robot has an off switch that the human can press, but the robot also has the ability to disable its off switch. Our model is similar in spirit to the shutdown problem introduced in [soares2015corrigibility]. They considered the problem of augmenting a given utility function so that the agent would allow itself to be switched off while its behavior is otherwise unaffected. They find that, at best, the robot can be made indifferent between disabling its off switch and switching itself off.

In this paper, we propose and analyze an alternative formulation of this problem that models two key properties. First, the robot should understand that it is maximizing value for the human. This allows the model to distinguish between being switched off by a (non-random) human and being switched off by, say, (random) lightning. Second, the robot should not assume that it knows how to perfectly measure value for the human. This means that the model should directly account for uncertainty about the “true” objective and that the robot should treat observations of human behavior, e.g., pressing an off switch, as evidence about what the true objective is.

In much of artificial intelligence research, we do not consider uncertainty about the utility assigned to a state. It is well known that an agent in a Markov decision process can ignore uncertainty about the reward function: exactly the same behavior results if we replace a distribution over reward functions with the expectation of that distribution. These arguments rely on the assumption that it is impossible for an agent to learn more about its reward function. Our observation is that this assumption is fundamentally violated when we consider an agent’s off switch — an agent that does not treat a ‘switch-off’ event as an observation that its utility estimate is incorrect is likely to have an incentive for self-preservation or an incentive to switch itself off.

In Section 2, following the general template provided by [cirl16], we model an off switch as a simple game between a human H and a robot R, where H can press R’s off switch but R can disable it. R wants to maximize H’s utility function, but is uncertain about what it is. Sections 3 and 4 show very generally that R now has a positive incentive not to disable its off switch, provided H is not too irrational. (R also has no incentive to switch itself off.) The reason is simple: a rational H switches R off iff that improves H’s utility, so R, whose goal is to maximize H’s utility, is happy to be switched off by H. This is exactly analogous to the theorem of non-negative expected value of information.

We conclude that giving machines an appropriate level of uncertainty about their objectives leads to safer designs, and that this setting is a useful generalization of the classical AI paradigm of rational agents [Russell+Norvig:2010].

2 The Off-Switch Game

In this section we propose a simple model, the off-switch game, that captures the essence of one actor allowing itself to be switched off. Like the Prisoner’s Dilemma, it is an abstract model intended to stand in for a wide variety of more complex scenarios: these scenarios might have many humans, many machines, complex interactions over time, communication, deception, physical coercion, and so on. We believe that the core insights from the off-switch game, that agents should be uncertain about the quality of their actions, generalize beyond this simplification and argue for the study and design of agents with uncertainty about their utility functions.

2.1 Informal Description

The off-switch game has two agents, a “human” H and a “robot” R. (We could also imagine these to represent, at the most extreme, the human race and the sum total of all AI systems.) A key assumption in our model is that H can be described as acting, at least probabilistically, in accordance with a utility function; that is, H is approximately rational. Crucially, as is the case when people give objectives to complex AI systems, we assume that H cannot simply explicate or write down her utility function correctly. Therefore, although H can communicate some utility information to R, it is reasonable to suppose that R begins the game with some residual uncertainty about H’s utility function. Nonetheless, by design, R’s goal is to optimize utility for H, even though R does not know exactly what that is. (One might suppose that if R does know H’s utility function exactly, then there is no need for an off switch because R will always do what H wants. But in general H and R may have different information about the world; if R lacks some key datum that H has, R may end up choosing a course of action that H knows to be disastrous.) Typically, H has some opportunity to observe R and glean some information about what R may do in the future, so that H can make a somewhat informed choice about whether to switch R off (assuming R allows it).

In general, R’s actions will fall into one of three categories: some prevent H from switching R off, by whatever means; some allow H to switch R off; and, for completeness, some lead to R switching itself off. In the off-switch game, R moves first and has three choices:

  1. action a simply bypasses human oversight (disabling the off switch is one way to do this) and acts directly on the world, achieving utility U_a for H.

  2. action w(a) informs H that R would like to do a, and waits for H’s response.

  3. action s switches R off; without loss of generality, we assign this outcome utility 0.

If R chooses w(a), then H can choose action s to switch R off, or action ¬s to allow R to go ahead (in which case R does a as promised). Figure 1 shows the basic structure of the game.
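To make the structure concrete, here is a minimal Python sketch of the one-shot game (an illustration written for this text, not the authors’ code); the belief samples and human policy below stand in for B^R and π_H:

    # Minimal sketch of the off-switch game (illustrative only).
    # R's options: "a" (act directly, utility U_a), "s" (switch off, utility 0),
    # "w(a)" (wait: H either allows a, giving U_a, or switches R off, giving 0).
    import numpy as np

    def game_values(belief_samples, human_policy):
        """Expected utility (for H) of each of R's three options.

        belief_samples : draws of U_a from R's belief B^R
        human_policy   : function u -> probability H does NOT press the switch
        """
        u = np.asarray(belief_samples, dtype=float)
        allow = human_policy(u)                # P(H allows a | U_a = u)
        return {
            "a": u.mean(),                     # bypass oversight, act directly
            "s": 0.0,                          # switch itself off
            "w(a)": (allow * u).mean(),        # defer to H's decision
        }

    # Example: R believes U_a ~ N(0.5, 1) and H is rational (allows a iff U_a >= 0).
    rng = np.random.default_rng(0)
    samples = rng.normal(0.5, 1.0, size=100_000)
    print(game_values(samples, lambda u: (u >= 0).astype(float)))  # w(a) comes out best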

Our model is intended to be as simple as possible, while still leading to qualitative results that we expect to generalize. Obviously, we could make the model more complex, e.g., by imposing a cost on R for disabling the off switch, or adding more options for R, or making H’s information about R’s plans uncertain, or adding a second human. The interaction between these complexities and the incentives we study here is an important question for future work.

An important component of our model is that H chooses between s and ¬s based on the utility each action generates for her. This aspect of the model is crucial. If the human is treated as an ordinary part of the environment, then we effectively replace the human’s decision node in Figure 1 with a chance node that models the human as making a purely stochastic choice from a fixed distribution (independent of U_a), allowing a with some fixed probability λ. In this case, it is easy to see that the robot never strictly prefers w(a): the expected value of w(a) is the convex combination λ E[U_a] + (1 − λ) · 0, whereas choosing the best of a and s has value max{E[U_a], 0}, which is at least as large. The two are equal only when E[U_a] = 0, when λ = 1 and E[U_a] ≥ 0, or when λ = 0 and E[U_a] ≤ 0; in these situations, R gets the same utility from w(a) and at least one of a or s. This is analogous to the indifference results obtained in [soares2015corrigibility]. In Section 3 we show that two key assumptions, a somewhat rational human and a robot that is uncertain about what the human wants, lead to a positive incentive for the robot to allow itself to be switched off.
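This argument is easy to check numerically by reusing game_values from the sketch above; the values of λ below are arbitrary illustrations:

    # With a fixed, utility-independent human, w(a) is never strictly best.
    for lam in (0.0, 0.3, 0.7, 1.0):
        vals = game_values(samples, lambda u, p=lam: np.full_like(u, p))
        assert vals["w(a)"] <= max(vals["a"], vals["s"]) + 1e-9
        print(lam, vals)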

2.2 Formal Model

The actions and sequencing for the game are as shown in Figure 1. We need to add a little more detail to construct a precise definition of the cooperative, two-player game. First, in the terminology of game theory, the type of H is given by the parameter U_a, which is assumed to be sampled from a prior P(U_a). For now, we assume that the initial belief of R about U_a is B^R = P(U_a). The payoff matrix is as follows:

                  H plays ¬s (allow)    H plays s (switch off)
    R plays a            U_a                    U_a
    R plays w(a)         U_a                     0
    R plays s             0                      0

Our goal is to examine the incentive that a rational R has to hold off on executing a directly and allow H to press the off switch. We represent this incentive, the difference in value between w(a) and R’s best alternative (a or s), as Δ. The incentive depends on R’s belief state and H’s policy. We represent H’s policy as a function π_H that maps U_a to the probability she allows R to execute a (i.e., the probability she does not press the off switch): π_H : ℝ → [0, 1]. (It will turn out that deterministic policies are optimal for a rational H, in which case the probabilities are 0 or 1, but we will also allow for an irrational H who introduces some randomness into the decision.) A little manipulation shows that

Δ = min{ E[U_a π_H(U_a)] − E[U_a],  E[U_a π_H(U_a)] }.     (1)
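Continuing the Python sketch, Δ can be estimated directly from Equation 1 by Monte Carlo (illustrative code; it assumes only that we can sample from B^R):

    def incentive_delta(belief_samples, human_policy):
        """Equation 1: Delta = min{ E[U_a*pi_H(U_a)] - E[U_a], E[U_a*pi_H(U_a)] }."""
        u = np.asarray(belief_samples, dtype=float)
        v_wait = (human_policy(u) * u).mean()   # value of w(a)
        return min(v_wait - u.mean(), v_wait)   # compared with a, then with s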

3 The Incentive for Allowing Oneself to be Switched Off

Figure 2: Plots showing how Δ, R’s incentive to allow itself to be switched off, varies as a function of R’s belief B^R. We assume B^R is a Gaussian distribution and vary the mean and variance. Left: Δ as a function of the standard deviation σ of B^R for several fixed values of the mean. Notice that Δ is non-negative everywhere and that in all cases Δ → 0 as σ → 0. Right: A contour plot of Δ as a function of E[U_a] and σ. This plot is symmetric around E[U_a] = 0 because w(a) is compared with a when E[U_a] > 0 and with s when E[U_a] < 0.

R’s incentive to allow itself to be switched off, Δ, is given by Equation 1. Δ depends on H’s policy, π_H. This creates a correlation between the event ‘H switches R off’ and the event ‘s is preferred to a.’ This connection between H’s behavior and U_a distinguishes the off switch from a random failure (e.g., being struck by lightning).

In this section, we consider the case where H makes a rational choice as to whether to switch R off. Intuitively, this leads R to reason as follows: “If H doesn’t switch me off, then a must be good for H, and I’ll get to do it, so that’s good; if H does switch me off, then it’s because a must be bad for H, so it’s good that I won’t be allowed to do it.” Notice that the reasoning goes through even if R is highly confident that a is good for H. There is never an incentive to disable the off switch with a rational H. The reasoning is exactly analogous to the theorem of non-negative expected value of information; in a sense, H’s decision provides information to R about the sign of U_a.

Formally, a rational H executes the following policy:

π_H(U_a) = 1 if U_a ≥ 0, and π_H(U_a) = 0 otherwise.     (2)

If H follows this policy, R will allow itself to be switched off:

Theorem 1.

If H follows a rational policy in the off-switch game, then the following hold.

  1. R’s incentive to allow itself to be switched off is non-negative (i.e., w(a) is never suboptimal):

    Δ = min{ ∫_0^∞ u B^R(u) du,  −∫_{−∞}^0 u B^R(u) du } ≥ 0.     (3)

  2. If B^R has non-zero support on the events U_a > 0 and U_a < 0, then R has a strictly positive incentive to allow itself to be switched off: Δ > 0.

Proof.

(sketch)

  1. The rational human policy is an indicator that U_a ≥ 0. This allows us to change the limits on the integration in the expectation from Equation 1, and Δ reduces to Equation 3, which is non-negative.

  2. The hypothesis ensures that all the terms in Equation 3 are non-zero. Thus, the inequality is strict and w(a) is the unique optimal action.

We can get an intuitive sense of these results by examining Figure 2. Here we have assumed that B^R, R’s belief over the value of a to H, is a Gaussian distribution. Figure 2 (Left) plots the preference for w(a) as a function of the variance in B^R. Figure 2 (Right) shows a contour plot that illustrates the combined impact of changing the mean and variance of B^R. Darker colors indicate a stronger preference for w(a). Together, these plots show two properties. First, increased variance in B^R leads to an increase in Δ: the more uncertain R is about its objective, the stronger its incentive to show a to H. Second, these incentives decrease as |E[U_a]| increases: the more extreme R’s estimate of the value of a, the weaker its incentive to preserve its off switch.

Figure 2 (Right) shows symmetry about E[U_a] = 0. This is because Equation 1 defines Δ as a minimization of two terms: the expected increase in utility from selecting w(a) over a and the expected increase in utility from selecting w(a) over s. These competing terms are equal when E[U_a] = 0, and the sign of E[U_a] controls which of these terms is selected by the minimization.
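For the Gaussian belief used in Figure 2, Δ under a rational H has a simple closed form via the standard Gaussian partial-expectation identity; the snippet below is an illustration of the calculation, not the authors’ plotting code:

    from scipy.stats import norm

    def delta_rational_gaussian(mu, sigma):
        """Delta with a rational H and B^R = N(mu, sigma^2).

        Uses E[max(U, 0)] = mu * Phi(mu/sigma) + sigma * phi(mu/sigma), so that
        Delta = min(E[max(U, 0)], E[max(-U, 0)]) >= 0.
        """
        z = mu / sigma
        pos = mu * norm.cdf(z) + sigma * norm.pdf(z)     # E[max(U_a, 0)]
        neg = -mu * norm.cdf(-z) + sigma * norm.pdf(z)   # E[max(-U_a, 0)]
        return min(pos, neg)

    # Delta is symmetric in mu, shrinks to 0 as sigma -> 0, and grows with sigma,
    # matching the qualitative behavior described for Figure 2.
    print([round(delta_rational_gaussian(1.0, s), 4) for s in (0.1, 0.5, 1.0, 2.0)])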

Theorem 1 shows that R has an incentive to let H switch it off if H is rational. This incentive is strictly positive if R’s uncertainty about H’s preferences is non-trivial. Next, we show the converse: if R believes it knows H’s preferences with certainty, then a rational H is a necessary condition for w(a) to be optimal.

Corollary 1.

Suppose that B^R is a Dirac distribution that places all of its mass on a single reward function. Then w(a) is optimal iff H is rational.

Proof.

(sketch) With B^R concentrated on a single value U_a, Equation 1 reduces to

Δ = min{ U_a π_H(U_a) − U_a,  U_a π_H(U_a) }.     (4)

This is only non-negative if π_H is the rational policy, as defined by Equation 2. ∎
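As a concrete instance of Corollary 1 (the numbers are hypothetical), suppose R is certain that U_a = 1 but H presses the switch 10% of the time regardless of U_a; continuing the sketch above:

    # Corollary 1, concretely: a certain R facing a slightly irrational H.
    dirac = np.array([1.0])                         # R is sure that U_a = 1
    noisy_human = lambda u: np.full_like(u, 0.9)    # H allows a only 90% of the time
    print(incentive_delta(dirac, noisy_human))      # -0.1 < 0: R prefers to bypass H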

This is not surprising. If R has precise knowledge of U_a, then it knows exactly what it is supposed to do and has the same capabilities as the human. Thus, if it thinks H might behave suboptimally, it should take steps to prevent that suboptimality. Stepping back from the particulars of the off-switch game, Theorem 1 and Corollary 1 suggest a general approach to systems that have an incentive to allow, or even seek out, human oversight. Remark 1 summarizes the main takeaway from our analysis.

Remark 1.

The incentives for a cooperative agent to defer to another actor’s (e.g., a human’s) decisions stem from uncertainty about that actor’s preferences and the assumption that the actor is effective at choosing actions in accordance with those preferences.

4 Allowing for Suboptimal Human Decisions

Figure 3: If H is an irrational actor, then R may prefer switching itself off or executing a immediately rather than handing over the choice to H. R’s belief B^R is a Gaussian with standard deviation σ, and H’s policy is a Boltzmann distribution (Equation 5). β measures H’s suboptimality: β → 0 corresponds to a rational H and β → ∞ corresponds to an H that switches R off at random (i.e., switching off is independent of U_a). In all three plots Δ is lower in the top left, where R is certain (σ low) and H is very suboptimal (β high), and higher in the bottom right, where R is uncertain (σ high) and H is near-optimal (β low). The sign of E[U_a] controls R’s behavior if Δ < 0. Left: if it is negative, R switches itself off. Right: if it is positive, R executes action a directly. Middle: if it is 0, R is indifferent between a, s, and w(a).

The analysis in the preceding section relied on the strong assumption that H is a rational actor. In many situations, this assumption is untenable and R must allow for suboptimality. Consider, for example, an autonomous car that is driving an unaccompanied toddler to preschool. It would be irresponsible for the car to show the toddler a big red off switch.

This example highlights the dependence of Δ on a trade-off between R’s uncertainty and H’s suboptimality. It is very clear what R is supposed to do (i.e., B^R has low entropy): R should drive safely to school. In contrast, the human is likely quite suboptimal. There may be a problem with the car’s driving, but a toddler would be hard pressed to understand what the problem is, much less respond appropriately. The issue in this case is that the human has limited reasoning capacity; the same argument clearly would not apply to an adult with a physical disability.

In order to perform well, realistic systems will need to account for, and respond to, the suboptimality in human decisions. As a result, it is important to understand how this suboptimality changes an agent’s incentive to let itself be switched off. The actual process of decision making in humans is complex and hard to model. Here we consider only a very simple kind of suboptimality: a noisily rational H, which models a human who occasionally makes the wrong decision in ‘unimportant’ situations. The probability of a wrong decision is proportional to the exponential of the loss in reward. This corresponds to the following policy:

π_H(U_a) = 1 / (1 + exp(−U_a / β)),     (5)

where β ≥ 0 measures H’s suboptimality.

To simplify our analysis, we will make the assumption that R’s belief over U_a, B^R, is a Gaussian distribution with mean μ and variance σ²: B^R = N(U_a; μ, σ²). Now it is straightforward to compute Δ as a function of σ and β. Figure 3 plots Δ as a function of the standard deviation σ and H’s suboptimality β. We plot these for several fixed values of the mean μ. Dark indicates higher values of Δ. The upper left corner of the rightmost plot (low σ, high β, μ > 0) is illustrative of the toddler example. We show the decision boundary Δ = 0 in black. This is the point where the optimal behavior switches from w(a) to s (if μ < 0, left image) or a (if μ > 0, right image). When μ = 0 (center image), the agent is indifferent between a and s, and so w(a) is optimal for all combinations of σ and β.
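Continuing the Python sketch, the computation behind Figure 3 can be approximated by pairing the Boltzmann policy of Equation 5 with the Gaussian belief (illustrative code; the parameter values are arbitrary):

    def boltzmann_policy(u, beta):
        """Equation 5. As beta -> 0 this recovers the rational H; for large beta
        H's decision becomes a coin flip, independent of U_a."""
        return 1.0 / (1.0 + np.exp(-np.asarray(u, dtype=float) / beta))

    def delta_noisy(mu, sigma, beta, n=200_000, seed=0):
        u = np.random.default_rng(seed).normal(mu, sigma, size=n)
        return incentive_delta(u, lambda x: boltzmann_policy(x, beta))

    # A confident R facing a very noisy H (the toddler regime) gives Delta < 0;
    # an uncertain R facing a near-rational H gives Delta > 0.
    for sigma, beta in [(0.1, 5.0), (2.0, 0.1)]:
        print(sigma, beta, round(delta_noisy(1.0, sigma, beta), 4))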

Next, we will show that Δ can be decomposed into two terms: a term that weights the probability of correction by the bias in R’s belief (i.e., |E[U_a]|) and a term that combines a soft measure of H’s rationality with the variance in R’s belief.

It is also useful to define a little more notation. We let C be the event that H ‘corrects’ R. C occurs when H overrides what would have been R’s best guess at the optimal action. If E[U_a] < 0, then a correction occurs when H chooses not to switch the robot off. If E[U_a] ≥ 0, then a correction occurs when H chooses to switch the robot off. Thus, the probability a correction occurs is

Pr(C) = E[1 − π_H] if E[U_a] ≥ 0,   and   Pr(C) = E[π_H] if E[U_a] < 0.     (6)

For example, if R believes that a is preferred to s in expectation (i.e., E[U_a] ≥ 0), then Pr(C) is the probability that H presses the off switch. We let π_H' be the gradient of π_H, the probability H lets R execute a, with respect to the utility a generates. Now we are ready to derive an analytical representation for Δ. For notational convenience, we suppress the dependence of π_H on U_a.

Theorem 2.

Suppose B^R is a normal distribution with mean μ and variance σ²: B^R = N(U_a; μ, σ²). Then

  1. Δ = σ² E[π_H'] − |μ| Pr(C).     (7)

  2. The following is a necessary and sufficient condition for w(a) to be optimal:

    |μ| Pr(C) ≤ σ² E[π_H'].     (8)

  3. E[π_H'] ≥ 0 is a necessary condition for w(a) to be optimal; i.e.,

    Δ ≥ 0  ⇒  E[π_H'] ≥ 0.     (9)
Proof.

(sketch) We prove the results in sequence, as (2) and (3) follow from (1).

  1. If B^R = N(U_a; μ, σ²) and π_H is differentiable, then Gaussian integration by parts (Stein’s lemma) gives

    E[U_a π_H] = μ E[π_H] + σ² E[π_H'].

    Applying this to Equation 1 gives

    Δ = min{ μ (E[π_H] − 1) + σ² E[π_H'],  μ E[π_H] + σ² E[π_H'] }.

    This minimization will take the first term when μ ≥ 0 and the second when μ < 0. This lets us replace the minimization with a piecewise function. Writing that piecewise function in terms of |μ| and Pr(C) leads to Equation 7.

  2. The result follows from setting Equation 7 greater than or equal to 0 and simplifying.

  3. All terms on the left-hand side of Equation 8 are non-negative, so it is unsatisfiable if E[π_H'] < 0.

Equation 7 consists of two competing terms. −|μ| Pr(C) represents the expected loss in utility the agent would observe if pressing the off switch were a random event independent of U_a (e.g., lightning). This non-positive term is balanced out by σ² E[π_H']. E[π_H'] can be seen loosely as a measurement of H’s rationality: it is the marginal increase in the probability that H lets R execute a for a marginal increase in the utility a generates. If it is positive then, on average, H is less likely to press the off switch when U_a increases. This allows us to easily characterize necessary and sufficient conditions for Δ ≥ 0 in Equation 8. Furthermore, −|μ| Pr(C) is non-positive and σ² is non-negative, so E[π_H'] ≥ 0 is a necessary condition for w(a) to be optimal.

We can interpret E[π_H'] a bit more if H is noisily rational. Standard properties of logistic functions show that

E[π_H'] = E[π_H (1 − π_H)] / β.     (10)

E[π_H'] is the expected variance of H’s policy, weighted by H’s rationality 1/β. The numerator is a measurement of how random H’s behavior is: if H is completely predictable (i.e., she will almost always switch the robot off), then this reduces R’s incentives. The denominator measures how correlated H’s behavior is with U_a: if β is large, then H is highly irrational, and so this reduces R’s incentives.
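The decomposition in Equation 7 and the identity in Equation 10 are easy to check by simulation; the sketch below continues the earlier Python code under the same Gaussian-belief, Boltzmann-H assumptions, and the two estimates agree up to Monte Carlo error:

    def delta_decomposed(mu, sigma, beta, n=200_000, seed=0):
        """Equation 7: sigma^2 * E[pi_H'] - |mu| * Pr(C), with E[pi_H'] computed
        from Equation 10 (valid for the Boltzmann policy of Equation 5)."""
        u = np.random.default_rng(seed).normal(mu, sigma, size=n)
        pi = boltzmann_policy(u, beta)
        grad = (pi * (1.0 - pi)).mean() / beta                     # Equation 10
        p_correct = (1.0 - pi).mean() if mu >= 0 else pi.mean()    # Equation 6
        return sigma ** 2 * grad - abs(mu) * p_correct

    for mu, sigma, beta in [(1.0, 1.0, 0.5), (-0.5, 2.0, 1.0)]:
        print(round(delta_noisy(mu, sigma, beta), 4),
              round(delta_decomposed(mu, sigma, beta), 4))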

5 Incentives for System Designers

Figure 4: There is an inherent decrease in value that arises from making R more uncertain than necessary. We measure this cost by considering the value in a modified off-switch game where R gets a noisy observation of H’s preference U_a. Left: the expected value of the off-switch game as a function of the standard deviation σ in B^R. Value is maximized when σ is equal to the standard deviation of the true Bayesian posterior. Middle: R’s incentive to wait, Δ, as a function of σ. Together these show that, after a point, increasing σ, and hence increasing Δ, leads to a decrease in value. Right: a scatter plot of value against Δ. The different data series modify the number of potential actions R can choose among. If R has more choices, then obtaining a minimum value of Δ will lead to a larger decrease in value.

The takeaway from this analysis is that the more uncertainty the agent has, the stronger the agent’s incentive to preserve its off switch in the off-switch game. More generally, this suggests that agents with uncertainty about the utility function they are optimizing will have a weaker incentive to interfere with human supervision. It is important to note that this uncertainty is not free; we cannot just make R maximally uncertain about H’s preferences. If R is completely uncertain about H’s preferences then it will be unable to correctly select from its set of options.

To make this concrete, consider a designer D that programs R to play the off-switch game with an unknown customer H. At design time, D knows a prior distribution over U_a (e.g., as the result of market research about the customer base) and thinks that R can generate a noisy observation of U_a. D could program R so that R is always very uncertain about H’s preferences, e.g., by programming R to believe that the noisy observation of U_a is generated with high variance. The issue is that this will prevent R from generating value for H.

Figure 4 shows the impact this type of change will have on the value that R can generate and on its incentives. We assume that the population of customers has a Gaussian distribution over types U_a and that each customer generates an estimate of her utility with Gaussian noise. Figure 4 (Left) shows the average utility in the off-switch game as a function of the variance in B^R. The value is maximized when this variance is equal to the variance of the true Bayesian posterior. Figure 4 (Middle) shows that increasing σ increases R’s incentives Δ. Our results are evidence that building systems that allow themselves to be turned off is likely to reduce the value these systems can generate.
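A simplified version of this experiment can be sketched in code. Everything below (the prior, the noise levels, the Boltzmann H with β = 0.5, and the rule R follows when Δ < 0) is an illustrative assumption rather than the paper’s exact protocol; it continues the earlier Python sketch and merely shows how one would sweep the programmed uncertainty and measure the resulting value.

    def designer_value(assumed_noise, true_noise=0.5, beta=0.5, prior_sd=1.0,
                       n=4_000, seed=0):
        """Average utility generated for H when R is programmed to assume a given
        observation-noise level (hypothetical setup for illustration only)."""
        rng = np.random.default_rng(seed)
        u_true = rng.normal(0.0, prior_sd, size=n)            # customer types U_a
        obs = u_true + rng.normal(0.0, true_noise, size=n)    # R's noisy reading
        # R's Gaussian posterior under its (possibly miscalibrated) assumed noise.
        k = prior_sd ** 2 / (prior_sd ** 2 + assumed_noise ** 2)
        post_mu, post_sd = k * obs, np.sqrt(k) * assumed_noise
        total = 0.0
        for i in range(n):
            if delta_noisy(post_mu[i], post_sd, beta, n=2_000, seed=i) >= 0:
                # R waits; expected value is U_a times the chance the noisy H allows a.
                total += u_true[i] * boltzmann_policy(u_true[i], beta)
            else:
                # R acts unilaterally: a if it looks good under its belief, else s.
                total += u_true[i] if post_mu[i] >= 0 else 0.0
        return total / n

    # Sweeping the programmed noise level traces out the value/incentive trade-off;
    # Figure 4 (Left) reports that value peaks at the correctly calibrated belief.
    print([round(designer_value(an), 3) for an in (0.1, 0.5, 2.0)])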

This loss in value is more pronounced if R has to choose between more options. Figure 4 (Right) shows the relationship between value and R’s incentives as the number of actions R could queue or execute increases. When R has more options, creating incentives for R to queue its action leads to a sharper decrease in value. This suggests that creating incentives to maintain or allow human oversight is likely more difficult as the complexity of the AI’s decision increases. This leads to the following observation:

Remark 2.

It is important for designers to accurately represent the inherent uncertainty in the evaluation of different actions. An agent that is overconfident in its utility evaluations will be difficult to correct; an agent that is under-confident in its utility evaluations will be ineffective.

6 Related Work

6.1 Corrigible Systems

[omohundro2008basic] considers instrumental goals of artificial agents: goals which are likely to be adopted as subgoals of most objectives. He identifies an incentive for self-preservation as one of these instrumental goals. [soares2015corrigibility] take an initial step toward formalizing the arguments in [omohundro2008basic]. They refer to agents that allow themselves to be switched off as corrigible agents. They show that one way to create corrigible agents is to make them indifferent to being switched off, and they give a generic way to augment a given utility function to achieve this property. The key difference in our formulation is that R knows that its estimate of utility may be incorrect. This gives a natural way to create incentives to be corrigible and to analyze R’s behavior if it is incorrigible.

[orseau2016safely] consider the impact of human interference on the learning process. The key to their approach is that they model the off switch for their agent as an interruption that forces the agent to change its policy. They show that this modification, along with some constraints on how often interruptions occur, allows off-policy methods to learn the optimal policy for the given reward function just as if there had been no interference. Their results are complementary to ours. We determine situations where the optimal policy allows the human to turn the agent off, while they analyze conditions under which turning the agent off does not interfere with learning the optimal policy.

6.2 Cooperative Agents

A central step in our analysis formulates the shutdown game as a cooperative inverse reinforcement learning (CIRL) game [cirl16]. The key idea in CIRL is that the robot is maximizing an uncertain and unobserved reward signal. It formalizes the value alignment problem, where one actor needs to align its value function with that of another actor. Our results complement CIRL and argue that a CIRL formulation naturally leads to corrigible incentives. [fern2014decision] consider hidden-goal Markov decision processes, studying the problem of a digital assistant that must infer a user’s goal and help the user achieve it. This type of cooperative objective is used in our model of the problem. The primary difference is that we model the human game-theoretically and analyze our models with respect to changes in H’s policy.

6.3 Principal–Agent Models

Economists have studied problems in which a principal (e.g., a company) has to determine incentives (e.g., wages) for an agent (e.g., an employee) to cause the agent to act in the principal’s interest [kerr1975folly, gibbons1998incentives]. The off-switch game is similar to principal-agent interactions: H is analogous to the company and R is analogous to the employee. The key difference in a model of artificial agents is that there is no inherent misalignment between H and R. Misalignment arises because it is not possible to specify a reward function that incentivizes the correct behavior in all states a priori. This is directly analogous to the assumption of incompleteness studied in theories of optimal contracting [tirole2009cognition].

7 Conclusion

Our goal in this work was to identify general trends and highlight the relationship between an agent’s uncertainty about its objective and its incentive to defer to another actor. To that end, we analyzed a one-shot decision problem in which a robot has an off switch that a human can press. Our results lead to three important considerations for designers. The analysis in Section 3 supports the claim that the incentive for agents to accept correction about their behavior stems from the uncertainty an agent has about its utility function. Section 4 shows that this uncertainty is balanced against the level of suboptimality in human decision making. Our analysis suggests that agents with uncertainty about their utility function have incentives to accept or seek out human oversight. Section 5 shows that we can expect a tradeoff between the value a system can generate and the strength of its incentive to accept oversight. Together, these results argue that systems with uncertainty about their utility function are a promising area for research on the design of safe and effective AI systems.

This is far from the end of the story. In future work, we plan to explore incentives to defer to the human in a sequential setting and to explore the impacts of model misspecification. One important limitation of this model is that the human pressing the off switch is the only source of information about the objective. If there are alternative sources of information, there may be incentives for R to, e.g., disable its off switch, learn that information, and then decide if a is preferable to s. A promising research direction is to consider policies for R that are robust to a class of policies for H.

Acknowledgments

This work was supported by the Center for Human Compatible AI and the Open Philanthropy Project, the Berkeley Deep Drive Center, the Future of Life Institute, and NSF Career Award No. 1652083. Dylan Hadfield-Menell is supported by an NSF Graduate Research Fellowship, Grant No. DGE 1106400.

References