I Introduction
We study the problem of shared control, where a robot shall accomplish a task according to a human operator’s goals and given specifications addressing safety or performance. Such scenarios are found, for instance, in remotely operated semi-autonomous wheelchairs [11]. In a nutshell, the human has a certain action in mind and issues a command. Simultaneously, an autonomy protocol provides—based on the available information—another command. These commands are blended—also referred to as arbitrated—and deployed to the robot.
Earlier work discusses shared control from different perspectives [7, 8, 20, 19, 13, 10]; however, formal correctness in the sense of ensuring safety or optimizing performance has not been considered. In particular, since the human is an integral factor in this scenario, correctness needs to be treated appropriately: a human might not be able to comprehend all factors of a system and, in the extreme case, can drive a system into inevitable failure.
Several aspects need to be discussed. First, a human might not be sure about which command to issue, depending on the scenario or on factors like fatigue or incomprehensibility of the problem. We account for uncertainties in human decisions by introducing randomness to choices. Moreover, a means of actually interpreting a command is needed in the form of a user interface, e.g., a brain-computer interface; the usually imperfect interpretation adds to the randomness. We call a formal interpretation of the human’s commands the human strategy (this concept will be explained later).
As many formal system models are inherently stochastic, our natural formal model for robot actions inside an environment is a Markov decision process (MDP), where nondeterministic action choices induce probability distributions over system states. Randomness in the choice of actions, as in the human strategy, is directly carried over to these probabilities when resolving the nondeterminism. For MDPs, quantitative properties like “the probability to reach a bad state is lower than a given threshold” or “the cost of reaching a goal is below a given threshold” can be formally verified. If a set of such specifications is satisfied for the human strategy and the MDP, the task can be carried out safely and with good performance. If, however, the human strategy induces certain critical actions with a high probability, one or more specifications might be refuted. In this case, the autonomy should provide an alternative strategy that—when blended with the human strategy—satisfies the specifications without discarding too much of the human’s choices. As in [8], the blending puts weight on either the human’s or the autonomy protocol’s choices depending on factors such as the confidence of the human or the level of information the autonomy protocol has at its disposal.
The question is now how such a human strategy can be obtained. It seems unrealistic that a human can comprehend an MDP modeling a realistic scenario in the first place, primarily due to the possibly very large size of the state space. Moreover, a human might not be able to assess probabilities or the cost of visiting certain states at all. We therefore employ learning techniques to collect data about typical human behavior. This can, for instance, be performed within a simulation environment. In our case study, we model a typical shared control scenario based on a wheelchair [11] where a human user and an autonomy protocol share the control responsibility. Having a human user solve a task, we compute strategies from the obtained data using inverse reinforcement learning [16, 1]. Thereby, we can give guarantees on how well the obtained strategy approximates the actual intent of the user. The design of the autonomy protocol is the main concern of this paper. We define the underlying problem as a nonlinear optimization problem and propose a technique to address the consequent scalability issues by reducing it to a linear optimization problem. After an autonomy protocol is synthesized, guarantees on safety and performance can be given, assuming that the user behaves according to the human strategy obtained beforehand. The main contribution is a formal framework for the problem of shared autonomy together with thorough discussions on formal verification, experiments, and current pitfalls. A summary of the approaches and an outline are given in Section II.
Shared control has attracted considerable attention recently; we put some recent approaches into context with our results. First, Dragan and Srinivasa discussed strategy blending for shared control in [8, 7], with a focus on the prediction of human goals. Combining these approaches with ours, e.g., by inferring formal safety or performance specifications from predicted human goals, is an interesting direction for future work. Iturrate et al. presented shared control using feedback based on electroencephalography (a method to record electrical activity of the brain) [13], where a robot is partly controlled via error signals from a brain-computer interface. In [19], Trautman proposes to treat shared control broadly as a random process where the different components are modeled by their joint probability distributions. As in our approach, randomness naturally prevents strange effects of blending: consider actions “up” and “down” being blended with equally distributed weight without any means of actually evaluating these weights. Finally, in [10] a synthesis method switches authority between a human operator and the autonomy such that satisfaction of linear temporal logic constraints can be ensured.
II Shared control
Consider first Fig. 1, which recalls the general framework for shared autonomy with blending of commands; additionally, we have a set of specifications, a formal model for robot behavior, and a blending function. In detail, a robot is to take care of a certain task, for instance, moving to a certain landmark. This task is subject to certain performance and safety considerations, e.g., it is not safe to take the shortest route because there are too many obstacles. These considerations are expressed by a set of specifications. The possible behaviors of the robot inside an environment are given by a Markov decision process (MDP) M. An MDP gives rise to nondeterministic choices of actions and to randomness in the environment: a chosen path might induce a high probability to achieve the goal while, with a low probability, the robot might slip and therefore fail to complete the task.
Now, in particular, a human user issues a set of commands for the robot to perform. We assume that the commands issued by the human are consistent with an underlying randomized strategy for the MDP M. Put differently, at design time we compute an abstract strategy of which the set of human commands is one realization. This way of modeling allows us to account for a variety of imperfections. Although it is not directly issued by a human, we call this strategy the human strategy. Due to the human’s possible lack of comprehension or of detailed information, this strategy might not satisfy the requirements.
Now, an autonomy protocol is to be designed such that it provides an alternative strategy, the autonomous strategy. The two strategies are then blended—according to the given blending function—into a new strategy which satisfies the specifications. The blending function reflects preference over either the decisions of the human or the autonomy protocol. We also ensure that the blended strategy deviates only minimally from the human strategy. At runtime, we can then blend decisions of the human user with decisions based on the autonomous strategy. The resulting “blended” decisions are consistent with the blended strategy, thereby ensuring satisfaction of the specifications. This procedure, while involving expensive computations at design time, is very efficient at runtime.
In summary, the problem we address in this paper is—in addition to the proposed modeling of the scenario—to synthesize the autonomy protocol such that the resulting blended strategy meets all of the specifications while deviating from the human strategy as little as possible. We introduce all formal foundations we need in Section III. The shared control synthesis problem is presented with all needed formalisms in Section IV as a nonlinear optimization problem. Addressing scalability, we reduce the problem to a linear optimization problem in Section V. We indicate the feasibility and scalability of our techniques using data-based experiments in Section VI and draw a short conclusion in Section VII.
III Preliminaries
III-A Models
A probability distribution over a finite or countably infinite set X is a function μ : X → [0, 1] with Σ_{x∈X} μ(x) = 1. The set of all distributions on X is denoted by Distr(X).
[MDP] A Markov decision process (MDP) is a tuple M = (S, s_I, A, P) with a finite set S of states, a unique initial state s_I ∈ S, a finite set A of actions, and a (partial) probabilistic transition function P : S × A → Distr(S). MDPs operate by means of nondeterministic choices of actions at each state, whose successors are then determined probabilistically with respect to the associated probability distribution. The enabled actions at state s are denoted by A(s). To avoid deadlock states, we assume that A(s) ≠ ∅ for all s ∈ S. A cost function C : S × A → R_{≥0} for an MDP assigns cost to transitions. A path in an MDP is a finite (or infinite) sequence s_0 α_0 s_1 α_1 … with P(s_i, α_i, s_{i+1}) > 0 for all i ≥ 0. If |A(s)| = 1 for all s ∈ S, all actions can be disregarded and the MDP reduces to a discrete-time Markov chain (MC). The unique probability measure for a set of paths of an MC can be defined by the usual cylinder set construction, and likewise the expected cost of a set of paths; see [2] for details. In order to define a probability measure and expected cost on MDPs, the nondeterministic choices of actions are resolved by so-called strategies. For practical reasons, we restrict ourselves to memoryless strategies; again, refer to [2] for details. [Strategy] A randomized strategy for an MDP is a function σ : S → Distr(A) such that σ(s)(α) > 0 implies α ∈ A(s). A strategy with σ(s)(α) = 1 for some α ∈ A(s) (and hence σ(s)(β) = 0 for all β ≠ α) is called deterministic. The set of all strategies of M is denoted by Σ^M. Resolving all nondeterminism of an MDP M with a strategy σ yields an induced Markov chain M^σ. Intuitively, the random choices of actions from σ are transferred to the transition probabilities in M^σ. [Induced MC] Let M = (S, s_I, A, P) be an MDP and σ ∈ Σ^M a strategy. The MC induced by M and σ is M^σ = (S, s_I, P^σ) where

P^σ(s, s') = Σ_{α∈A(s)} σ(s)(α) · P(s, α, s')   for all s, s' ∈ S.
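The induction of an MC from an MDP under a memoryless randomized strategy can be sketched in a few lines. The dictionary-based representation below is illustrative only, not part of the formalism.

```python
def induce_mc(P, sigma):
    """Induce a Markov chain from an MDP under a memoryless randomized
    strategy: collapse each state's action choices by weighting the
    action distributions P(s, a, .) with sigma(s)(a).

    P: dict (state, action) -> {successor: probability}
    sigma: dict state -> {action: probability}
    Returns: dict state -> {successor: probability}
    """
    mc = {}
    for (s, a), succs in P.items():
        weight = sigma[s].get(a, 0.0)
        if weight == 0.0:
            continue  # action never chosen at s
        row = mc.setdefault(s, {})
        for t, p in succs.items():
            row[t] = row.get(t, 0.0) + weight * p
    return mc

# Two actions at state 0, chosen uniformly by the strategy:
P = {(0, 'a'): {0: 0.5, 1: 0.5}, (0, 'b'): {1: 1.0}, (1, 'c'): {1: 1.0}}
sigma = {0: {'a': 0.5, 'b': 0.5}, 1: {'c': 1.0}}
mc = induce_mc(P, sigma)  # mc[0] == {0: 0.25, 1: 0.75}
```

Each row of the result is again a distribution, since it is a convex combination of the action distributions at that state.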
III-B Specifications
A quantitative reachability property φ = P_{≤λ}(◇T) with upper probability threshold λ ∈ [0, 1] and target set T ⊆ S constrains the probability to reach T from s_I in M to be at most λ. Expected cost properties ψ = EC_{≤κ}(◇G) impose an upper bound κ ∈ R_{≥0} on the expected cost to reach goal states G ⊆ S. Intuitively, bad states T shall only be reached with probability at most λ (safety specification), while the expected cost for reaching goal states G has to be below κ (performance specification). The probability and expected cost to reach T or G from s_I in an MC D are denoted by Pr_D(◇T) and EC_D(◇G), respectively. Hence, Pr_D(◇T) ≤ λ and EC_D(◇G) ≤ κ express that the properties φ and ψ are satisfied by MC D. These concepts are analogous for lower bounds on the probability. We also use until properties of the form P_{≥λ}(¬B U T), expressing that the probability of reaching T while not reaching B beforehand is at least λ.
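For a fixed MC, the reachability probability Pr(◇T) is the least fixed point of a simple equation system and can be approximated by value iteration. The sketch below is illustrative (it is not the model checker used later in the paper).

```python
def reach_prob(mc, targets, tol=1e-12, max_iter=1_000_000):
    """Probability of eventually reaching `targets` from every state of
    a Markov chain, via fixed-point iteration from below (converging to
    the least fixed point, i.e., the reachability probability).

    mc: dict state -> {successor: probability}; targets: set of states.
    """
    states = set(mc) | {t for row in mc.values() for t in row} | set(targets)
    p = {s: (1.0 if s in targets else 0.0) for s in states}
    for _ in range(max_iter):
        new = dict(p)
        for s, row in mc.items():
            if s in targets:
                continue  # target states keep probability one
            new[s] = sum(q * p[t] for t, q in row.items())
        if max(abs(new[s] - p[s]) for s in states) < tol:
            return new
        p = new
    return p

# 50/50 step into a good (1) or a bad (2) absorbing state:
mc = {0: {1: 0.5, 2: 0.5}, 1: {1: 1.0}, 2: {2: 1.0}}
p = reach_prob(mc, {1})  # p[0] == 0.5
```

States from which the target is unreachable simply keep probability zero, since iteration starts from below.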
An MDP M satisfies a safety specification φ and a performance specification ψ iff for all strategies σ ∈ Σ^M the induced MC M^σ satisfies φ and ψ, i.e., Pr_{M^σ}(◇T) ≤ λ and EC_{M^σ}(◇G) ≤ κ. If several performance or safety specifications φ_1, …, φ_n are given for MDP M, the simultaneous satisfaction for all strategies, denoted by M ⊨ φ_1 ∧ … ∧ φ_n, can be formally verified using multi-objective model checking [9].
Here, we are interested in the synthesis problem, where the aim is to find one particular strategy for which the specifications are satisfied. If for M and a strategy σ ∈ Σ^M it holds that M^σ ⊨ φ_1 ∧ … ∧ φ_n, then σ is said to admit the specifications, also denoted by σ ⊨ φ_1 ∧ … ∧ φ_n.
Consider Fig. 2(a), depicting an MDP M in which two states offer a choice between two actions each; every action induces a probabilistic choice between successor states. The self-loops at the absorbing states indicate looping back with probability one for each action.
Assume now that a safety specification bounds the probability of reaching the bad state from above. The specification is violated by the deterministic strategy maximizing the probability of reaching the bad state; see the induced MC in Fig. 2(b). For the randomized strategy choosing between all actions uniformly, the specification is also violated. However, the deterministic strategy minimizing the probability of reaching the bad state satisfies the specification.
IV Synthesizing shared control protocols
In this section, we describe our formal approach to synthesizing a shared control protocol in the presence of randomization. We start by formalizing the concepts of blending and strategy perturbation. Afterwards, we formulate the general problem and show that the solution to the synthesis problem is correct.
Consider Fig. 3, where a room to navigate in is abstracted into a grid; we will use this as our running example. A wheelchair as in [11] is to be steered from the lower left corner of the grid to the exit in the upper right corner. There is also an autonomous robotic vacuum cleaner moving around the room; the goal is for the wheelchair to reach the exit without crashing into the vacuum cleaner. We assume that the vacuum cleaner moves according to probabilities that are fixed based on evidence gathered beforehand; these probabilities are unknown or incomprehensible to the human user. To improve the safety of the wheelchair, it is equipped with an autonomy protocol that improves decisions of the human or even overrides them in case of safety hazards. For the design of the autonomy protocol, the evidence data about the cleaner is available.
Now, an obvious strategy for the wheelchair, not taking into account the vacuum cleaner, is depicted by the red solid line in Fig. 3(a). As indicated in Fig. 3(b), this strategy proposed by the human is unsafe because there is a high probability of colliding with the obstacle. The autonomy protocol computes a safe strategy, indicated by the solid line in Fig. 3(b). As this strategy deviates considerably from the human strategy, the dashed line indicates a still safe enough alternative which is a compromise or—in our terminology—a blending of the two strategies. In the following, we assume that the possible behaviors of the robot inside the environment are modeled by an MDP M. The human strategy is given as a randomized strategy σ_h for M; we explain how to obtain this strategy in Section VI. Specifications are either safety properties or performance properties as in Section III-B.
IV-A Strategy blending
Given two strategies, they are to be blended into a new strategy favoring the decisions of one or the other at each state of the MDP. In our setting, the human strategy σ_h is blended with the autonomous strategy σ_a by means of an arbitrary blending function. In [8] it is argued that blending intuitively reflects the confidence in how well the autonomy protocol is able to assist with respect to the human user’s goals. In addition, factors that are possibly unknown or incomprehensible to the human, such as safety or performance optimization, should also be reflected by such a function.
Put differently, possible actions of the user should be assigned low confidence by the blending function if the human cannot be trusted to make the right decisions. For instance, recall Example IV: at cells of the grid where the wheelchair might collide with the vacuum cleaner with very high probability, it makes sense to assign high confidence to the autonomy protocol’s decisions, because not all safety-relevant information is available to the human.
In order to enable formal reasoning with such a function, we instantiate the blending with a state-dependent function b : S → [0, 1] which, at each state of an MDP, weighs the confidence in both the human’s and the autonomy’s decisions. A more fine-grained instantiation might incorporate not only the current state of the MDP but also the strategies of both human and autonomy, or the history of the current run of the system. This formalism is called linear blending and is used in what follows. In [19], additional notions of blending are discussed.
[Linear blending] Given an MDP M = (S, s_I, A, P), two strategies σ_h, σ_a ∈ Σ^M, and a blending function b : S → [0, 1], the blended strategy σ_{ha} ∈ Σ^M is, for all states s ∈ S and actions α ∈ A(s), given by

σ_{ha}(s)(α) = b(s) · σ_h(s)(α) + (1 − b(s)) · σ_a(s)(α).
Note that the blended strategy σ_{ha} is a well-defined randomized strategy. For each s ∈ S, the value b(s) represents the confidence in the human’s decisions at this state, i.e., the “weight” of σ_h at s.
Coming back to Example IV, the critical cells of the grid correspond to certain states of the MDP M; at these states, a very low confidence in the human’s decisions should be assigned. For instance, at such a state s we might choose a small value of b(s), with the effect that all randomized choices of the human strategy are scaled down by this factor, while choices of the autonomous strategy are only scaled down by the factor 1 − b(s). The sum of these scaled choices then gives a new strategy highly favoring the autonomy’s decisions.
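Linear blending translates directly into code; a minimal sketch over dictionary-based strategies (all names illustrative):

```python
def blend(sigma_h, sigma_a, b):
    """Linear blending: sigma_ha(s)(a) = b(s)*sigma_h(s)(a)
    + (1 - b(s))*sigma_a(s)(a).  The result is again a distribution at
    every state, being a convex combination of two distributions.

    sigma_h, sigma_a: dict state -> {action: probability}
    b: dict state -> confidence in the human's decisions, in [0, 1]
    """
    blended = {}
    for s in sigma_h:
        actions = set(sigma_h[s]) | set(sigma_a[s])
        blended[s] = {a: b[s] * sigma_h[s].get(a, 0.0)
                         + (1.0 - b[s]) * sigma_a[s].get(a, 0.0)
                      for a in actions}
    return blended

# Low confidence (b = 0.1) in a human who insists on action 'up':
sigma_ha = blend({0: {'up': 1.0}}, {0: {'down': 1.0}}, {0: 0.1})
# sigma_ha[0] gives 'up' probability 0.1 and 'down' probability 0.9
```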
IV-B Perturbation of strategies
As mentioned before, we want to ensure that the blended strategy deviates minimally from the human strategy. To measure such a deviation, we introduce the concept of perturbation, which was—on a complexity-theoretic level—investigated for instance in [5]. Here, we introduce an additive perturbation for a (randomized) strategy, incrementing or decrementing the probabilities of action choices such that a well-defined distribution over actions is maintained. [Strategy perturbation] Given an MDP M and a strategy σ ∈ Σ^M, an (additive) perturbation δ is a function δ : S × A → [−1, 1] with

Σ_{α∈A(s)} δ(s, α) = 0   for all s ∈ S.

The value δ(s, α) is called the perturbation value at state s for action α. Overloading the notation, the perturbed strategy δ(σ) is given by

δ(σ)(s)(α) = σ(s)(α) + δ(s, α)   for all s ∈ S and α ∈ A(s).
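The two well-definedness conditions of a perturbation (per-state values summing to zero, perturbed choices remaining probabilities) can be checked mechanically; a small illustrative sketch:

```python
def perturb(sigma, delta):
    """Apply an additive perturbation delta to a randomized strategy and
    verify that the result is still a distribution over enabled actions.

    sigma: dict state -> {action: probability}
    delta: dict state -> {action: additive perturbation value}
    """
    perturbed = {}
    for s, dist in sigma.items():
        d = delta.get(s, {})
        # perturbation values must sum to zero at each state
        assert abs(sum(d.values())) < 1e-9
        new = {a: p + d.get(a, 0.0) for a, p in dist.items()}
        # perturbed choices must remain valid probabilities
        assert all(-1e-9 <= p <= 1.0 + 1e-9 for p in new.values())
        perturbed[s] = new
    return perturbed

# Shift probability mass 0.2 from action 'b' to action 'a':
res = perturb({0: {'a': 0.5, 'b': 0.5}}, {0: {'a': 0.2, 'b': -0.2}})
```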
IV-C Design of the autonomy protocol
For the formal problem, we are given a blending function b, specifications φ_1, …, φ_n, an MDP M, and a human strategy σ_h. We assume that σ_h does not satisfy all of the specifications, i.e., σ_h ⊭ φ_1 ∧ … ∧ φ_n. The autonomy protocol provides the autonomous strategy σ_a. According to b, the strategies σ_h and σ_a are blended into the strategy σ_{ha}, see Definition IV-A. The shared control synthesis problem is to design the autonomy protocol such that for the blended strategy it holds that σ_{ha} ⊨ φ_1 ∧ … ∧ φ_n, while σ_{ha} deviates minimally from σ_h. The deviation from σ_h is captured by finding a perturbation δ with δ(σ_h) = σ_{ha} as in Definition IV-B such that, e.g., the infinity norm of all perturbation values is minimal.
Our problem involves the explicit computation of a randomized strategy and the induced probabilities, which is inherently nonlinear because the corresponding variables need to be multiplied. Therefore, the canonical formulation is given by a nonlinear optimization program (NLP). We first assume that the only specification is a quantitative reachability property φ = P_{≤λ}(◇T); afterwards, we describe how more properties can be included. The program has to encompass defining the autonomous strategy σ_a, the perturbation of the human strategy, the blended strategy σ_{ha}, and the probability of reaching the set of target states T.
We introduce the following specific sets of variables:
- σ^a_{s,α} and σ^{ha}_{s,α} for each s ∈ S and α ∈ A(s) define the autonomous strategy σ_a and the blended strategy σ_{ha};
- δ_{s,α} for each s ∈ S and α ∈ A(s) are the perturbation variables relating σ_h and σ_{ha};
- p_s for each s ∈ S are assigned the probability of reaching T from state s under the strategy σ_{ha}.
Using these variables, the NLP reads as follows:
minimize   max_{s∈S, α∈A(s)} δ_{s,α}                                        (1)
subject to
p_{s_I} ≤ λ                                                                 (2)
p_s = 1                                          for all s ∈ T              (3)
Σ_{α∈A(s)} σ^a_{s,α} = 1,   Σ_{α∈A(s)} σ^{ha}_{s,α} = 1   for all s ∈ S     (4)
σ^{ha}_{s,α} = σ_h(s)(α) + δ_{s,α}               for all s ∈ S, α ∈ A(s)    (5)
Σ_{α∈A(s)} δ_{s,α} = 0                           for all s ∈ S              (6)
σ^{ha}_{s,α} = b(s) · σ_h(s)(α) + (1 − b(s)) · σ^a_{s,α}
                                                 for all s ∈ S, α ∈ A(s)    (7)
p_s = Σ_{α∈A(s)} σ^{ha}_{s,α} · Σ_{s'∈S} P(s, α, s') · p_{s'}
                                                 for all s ∈ S \ T          (8)
The NLP works as follows. First, the infinity norm of the perturbation is minimized (by minimizing the maximum of all perturbation variables) (1). The probability assigned to the initial state has to be at most λ to satisfy the specification (2). For all target states s ∈ T, the corresponding probability variables are assigned one (3). To have well-defined strategies σ_a and σ_{ha}, we ensure that the assigned values of the corresponding strategy variables sum up to one at each state (4). The perturbation of the human strategy resulting in the strategy σ_{ha} as in Definition IV-B is computed using the perturbation variables (5); in order for the perturbation to be well-defined, these variables have to sum up to zero at each state (6). The blending of σ_h and σ_a with respect to b as in Definition IV-A is defined in (7). Finally, the probability of reaching T from each s ∈ S is computed in (8), defining a nonlinear equation system where the action probabilities, given by the strategy σ_{ha}, are multiplied by the probability variables of all possible successors.
Note that this nonlinear program is in fact bilinear, due to the multiplication of strategy variables with probability variables in (8). The number of constraints is governed by the number of state-action pairs, i.e., the size of the problem is in O(|S| · |A|).
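A full NLP solver is beyond a sketch, but the structure of the program can be illustrated by checking a candidate solution: blend the strategies as in (7), compute the reachability probabilities of the induced chain as the least fixed point of (8), and test the threshold (2) at the initial state. All names and numbers below are hypothetical.

```python
def check_nlp_constraints(P, sigma_h, sigma_a, b, targets, lam, s0):
    """Feasibility check for a candidate autonomous strategy sigma_a:
    returns True iff the blended strategy keeps the probability of
    reaching `targets` (the bad states) at most lam from state s0."""
    # (7): linear blending, well-defined whenever sigma_h, sigma_a are
    sigma = {s: {a: b[s] * sigma_h[s].get(a, 0.0)
                    + (1.0 - b[s]) * sigma_a[s].get(a, 0.0)
                 for a in set(sigma_h[s]) | set(sigma_a[s])}
             for s in sigma_h}
    # collect all states, including absorbing ones without choices
    states = set(targets)
    for (s, _), succs in P.items():
        states.add(s)
        states.update(succs)
    # (3)/(8): iterate from below; converges to reachability probability
    p = {s: (1.0 if s in targets else 0.0) for s in states}
    for _ in range(100000):
        new = dict(p)
        for s in sigma:
            if s in targets:
                continue
            new[s] = sum(w * sum(q * p[t] for t, q in P[(s, a)].items())
                         for a, w in sigma[s].items() if w > 0.0)
        if max(abs(new[s] - p[s]) for s in states) < 1e-12:
            p = new
            break
        p = new
    return p[s0] <= lam  # (2): threshold at the initial state

# Toy instance: bad state 'B' reached with prob 0.5*0.5 + 0.5*0.2 = 0.35
P = {('s0', 'safe'): {'T': 0.8, 'B': 0.2},
     ('s0', 'risky'): {'T': 0.5, 'B': 0.5}}
ok = check_nlp_constraints(P, {'s0': {'risky': 1.0}},
                           {'s0': {'safe': 1.0}}, {'s0': 0.5},
                           {'B'}, 0.4, 's0')  # True
```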
An assignment of real-valued variables is a function mapping each variable to a real number; it is satisfying for a set of (in)equations if each one evaluates to true. A satisfying assignment is minimizing with respect to an objective if no other satisfying assignment yields a smaller value of the objective. Using these notions, we state the correctness of the NLP in (1)–(8).
[Soundness and completeness] The NLP is sound in the sense that each minimizing assignment induces a solution to the shared control synthesis problem. It is complete in the sense that for each solution to the shared control synthesis problem there is a minimizing assignment of the NLP. Soundness means that each satisfying assignment of the variables corresponds to strategies σ_a and σ_{ha} as well as the perturbation δ as defined above; moreover, any optimal solution induces a perturbation deviating minimally from the human strategy σ_h. Completeness means that all possible solutions of the shared control synthesis problem can be encoded by this NLP; unsatisfiability means that no such solution exists and the problem is infeasible.
IV-D Additional specifications
We now explain how the NLP can be extended for further specifications. Assume that, in addition to φ = P_{≤λ}(◇T), another reachability property φ' = P_{≤λ'}(◇T') with T' ⊆ S is given. We add another set of probability variables p'_s for each state s ∈ S; (2) is copied for p'_{s_I} and λ', (3) is defined for all states s ∈ T', and (8) is copied for all p'_s, thereby computing the probability of reaching T' under σ_{ha} for all states.
To handle an expected cost property ψ = EC_{≤κ}(◇G) for G ⊆ S, we use variables x_s that are assigned the expected cost of reaching G from s, for all s ∈ S. We add the following equations:
x_{s_I} ≤ κ                                                                 (9)
x_s = 0                                          for all s ∈ G              (10)
x_s = Σ_{α∈A(s)} σ^{ha}_{s,α} · (C(s, α) + Σ_{s'∈S} P(s, α, s') · x_{s'})
                                                 for all s ∈ S \ G          (11)
First, the expected cost of reaching G has to be at most κ at the initial state (9). Goal states are assigned cost zero (10); otherwise, infinite cost would be collected at absorbing states. Finally, the expected cost for all other states is computed by (11), where, according to the blended strategy, the cost of each action is added to the expected cost of the successors. An important insight is that if all specifications are expected cost properties, the program is no longer nonlinear but a linear program (LP), as there is no multiplication of variables.
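Once a strategy is fixed, (10)–(11) form a linear equation system in the x variables; the sketch below solves it directly for a fixed (e.g., blended) strategy. Representation and names are illustrative.

```python
import numpy as np

def expected_cost(P, sigma, cost, goal, states):
    """Expected cost to reach `goal` under a fixed strategy, by solving
    x_s = sum_a sigma(s)(a) * (C(s,a) + sum_s' P(s,a,s') * x_s'),
    with x_s = 0 for goal states.  Assumes the goal is reached with
    probability one under sigma, so the system has a unique solution.
    """
    idx = {s: i for i, s in enumerate(states)}
    A = np.eye(len(states))
    rhs = np.zeros(len(states))
    for s in states:
        if s in goal:
            continue  # (10): goal states carry expected cost zero
        for a, w in sigma[s].items():
            rhs[idx[s]] += w * cost[(s, a)]
            for t, q in P[(s, a)].items():
                A[idx[s], idx[t]] -= w * q
    return np.linalg.solve(A, rhs)

# Self-loop with probability 0.5 and cost 1 per step: x = 1 + 0.5*x = 2
P = {('s0', 'go'): {'s0': 0.5, 'g': 0.5}, ('g', 'stay'): {'g': 1.0}}
x = expected_cost(P, {'s0': {'go': 1.0}, 'g': {'stay': 1.0}},
                  {('s0', 'go'): 1.0, ('g', 'stay'): 0.0},
                  {'g'}, ['s0', 'g'])  # x[0] == 2.0
```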
IV-E Generalized blending
If the problem is not feasible for the given blending function, the autonomy protocol can optionally try to compute a new blending function for which the altered problem is feasible. We call this procedure generalized blending. The idea is that computing this function gives the designer of the protocol insight into where more confidence needs to be placed in the autonomy or, vice versa, where the human cannot be trusted to satisfy the given specifications.
Computing this new function is achieved by nearly the same NLP as for a fixed blending function, while adding variables b_s for each state s ∈ S, defining the new blending function by b(s) = b_s. We substitute Equation (7) by
σ^{ha}_{s,α} = b_s · σ_h(s)(α) + (1 − b_s) · σ^a_{s,α}   for all s ∈ S, α ∈ A(s)   (12)
A satisfying assignment for the resulting nonlinear program induces a suitable blending function in addition to the strategies. If this problem is also infeasible, there is no strategy that satisfies the given specifications for MDP M: if there is no solution for the NLP given by Equations (1)–(12), there is no strategy σ with σ ⊨ φ_1 ∧ … ∧ φ_n. As there are no restrictions on the blending function, this corollary trivially holds: consider, for instance, b with b(s) = 0 for each s ∈ S. This function disregards the human strategy, which may then be perturbed into any other strategy σ ∈ Σ^M. Reconsider the MDP from Example III-B with the safety specification and the randomized human strategy which takes each action uniformly distributed; as we saw, this strategy violates the specification. We choose it as the human strategy σ_h and the MDP as the robot MDP M. For a blending function putting high confidence in the human, e.g., one with b(s) close to one for all s ∈ S, the problem is infeasible. In Table I, we display results putting medium, low, or no confidence in the human at the two decision states. We list the assignments for the resulting strategies σ_a and σ_{ha} as well as the probability of reaching the bad state under the blended strategy σ_{ha}. The results were obtained using the NLP solver IPOPT [4].
We observe that, for decreasing confidence in the human’s decisions, the autonomous strategy σ_a assigns higher probabilities to the “bad” actions. That means that—if there is higher confidence in the autonomy—solutions farther away from the optimum are good enough. The maximal deviation from the human strategy is reported as well. Generalized blending, maximizing over the confidence in the human’s decisions at all states, computes the highest possible confidence in the human’s decisions for which the problem is still feasible under the given human strategy.
V Computationally Tractable Approach
The nonlinear programming approach presented in the previous section is a rigorous method to solve the shared control synthesis problem and serves as a mathematically concise definition of the problem. However, NLPs are known to have severe restrictions in terms of scalability and to suffer from numerical instabilities. The crucial point for an efficient solution is to circumvent the expensive computation of optimal randomized strategies and to reduce the number of variables. We propose a heuristic solution which enables the use of linear programming (LP) while ensuring soundness.
We utilize a technique referred to as model repair. Intuitively, an erroneous model is changed such that it satisfies certain specifications. In particular, given a Markov chain D and a specification φ that is violated by D, a repair of D is an automated method that transforms it into a new MC D' such that φ is satisfied for D'. Transforming refers to changing probabilities or costs while respecting certain side constraints, such as keeping the original graph structure.
In [3], the first approach to automatically repair an MC model was presented as an NLP. Simulation-based algorithms were investigated in [6]. A heuristic but very scalable technique called local repair was proposed in [17]. This approach greedily changes the probabilities or costs of the original MC until a property is satisfied. An upper bound on the changes of probabilities or costs can be specified; correctness and completeness hold in the sense that if a repair within this bound exists, it will be obtained.
Take now the MC D = M^{σ_h}, which is induced by the robot MDP M and the human strategy σ_h. We perform model repair such that the repaired MC D' satisfies the specifications φ_1, …, φ_n. The question is now how the blended strategy can be extracted from the repaired MC D'. More precisely, we need a strategy σ_{ha} inducing exactly D', i.e., M^{σ_{ha}} = D', when applied to MDP M.
First, we need to make sure that the repaired MC D' is consistent with the original MDP M, such that a strategy σ_{ha} with M^{σ_{ha}} = D' actually exists. Therefore, we define the maximal and minimal possible transition probabilities P_max(s, s') and P_min(s, s') that can occur in any induced MC of MDP M:
P_max(s, s') = max_{α∈A(s)} P(s, α, s')                                     (13)

for all s, s' ∈ S; P_min(s, s') is defined analogously. Now, the repair is performed such that in the resulting MC D', with transition probabilities P', for all s, s' ∈ S it holds that

P_min(s, s') ≤ P'(s, s') ≤ P_max(s, s')                                     (14)
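One way to extract an inducing strategy from the repaired MC is to solve, per state, the linear system D'(s, ·) = Σ_α σ(s)(α) · P(s, α, ·) together with the normalization Σ_α σ(s)(α) = 1, e.g., by least squares. The sketch below assumes the repair was consistent with the MDP, so such a strategy exists; otherwise the residual, or values outside [0, 1], would reveal the inconsistency. All names are illustrative.

```python
import numpy as np

def extract_strategy(P, mc, states, actions):
    """Recover sigma with mc(s, .) = sum_a sigma(s)(a) * P(s, a, .)
    from a repaired MC via per-state least squares."""
    sigma = {}
    for s in states:
        acts = [a for a in actions if (s, a) in P]
        M = np.zeros((len(states) + 1, len(acts)))
        rhs = np.zeros(len(states) + 1)
        for j, a in enumerate(acts):
            for t, q in P[(s, a)].items():
                M[states.index(t), j] = q   # column: distribution of a
        for t, q in mc.get(s, {}).items():
            rhs[states.index(t)] = q        # target: repaired MC row
        M[-1, :] = 1.0                      # normalization row
        rhs[-1] = 1.0
        w, *_ = np.linalg.lstsq(M, rhs, rcond=None)
        sigma[s] = {a: float(w[j]) for j, a in enumerate(acts)}
    return sigma

# The repaired chain splits state 0's mass 0.3/0.7 over the two actions:
P = {(0, 'a'): {1: 1.0}, (0, 'b'): {2: 1.0},
     (1, 'c'): {1: 1.0}, (2, 'c'): {2: 1.0}}
mc = {0: {1: 0.3, 2: 0.7}, 1: {1: 1.0}, 2: {2: 1.0}}
sigma = extract_strategy(P, mc, [0, 1, 2], ['a', 'b', 'c'])
```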
While obtaining D', model checking needs to be performed intermediately to check whether the specifications are satisfied; once they are, the algorithm terminates. In fact, for each state s ∈ S, the probability of satisfaction is computed. We assign variables p̂_s for all s ∈ S with exactly this probability:

p̂_s = Pr^s_{D'}(◇T)   for all s ∈ S                                        (15)
Now recall the NLP from the previous section, in particular Equation (8), which is the only nonlinear equation of the program. We replace each variable p_s by the concrete model checking result p̂_s for each s ∈ S:

p̂_s = Σ_{α∈A(s)} σ^{ha}_{s,α} · Σ_{s'∈S} P(s, α, s') · p̂_{s'}   for all s ∈ S \ T   (16)
As (16) is affine in the variables σ^{ha}_{s,α}, the program resulting from replacing (8) by (16) is a linear program (LP). Moreover, (2) and (3) can be removed, reducing the number of constraints and variables. The LP gives a feasible solution to the shared control synthesis problem. [Correctness] The LP is sound in the sense that each minimizing assignment induces a solution to the shared control problem. The correctness holds by construction, as the specifications are satisfied for the blended strategy, which is derived from the repaired MC. However, the minimal deviation from the human strategy as in Equation (1) depends on the previously computed probabilities for the blended strategy; therefore, we actually compute an upper bound on the optimal solution. Let δ* be the perturbation with the minimal deviation possible for a given problem and δ̂ the perturbation obtained by the LP resulting from replacing (8) by (16); for their infinity norms it holds that ||δ*||_∞ ≤ ||δ̂||_∞. As mentioned before, the local repair method can employ a bound ε on the maximal change of probabilities or costs in the model. If a repair exists for a given ε, the resulting deviation is then bounded by this ε.
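With the probability variables fixed, (16) together with the normalization constraint pins down the blended strategy at each state; in a toy instance with one decision state and two actions, this is a small affine system. All numbers below are hypothetical.

```python
import numpy as np

# Hypothetical data: model checking the repaired MC gave reachability
# value p_hat = 0.6 at the decision state, and q[alpha] denotes
# sum_{s'} P(s, alpha, s') * p_hat_{s'} for each action.
q = {'a': 0.3, 'b': 0.8}
sigma_h = {'a': 0.9, 'b': 0.1}   # human strategy at this state
p_hat = 0.6

# (16) and normalization: q_a*x_a + q_b*x_b = p_hat,  x_a + x_b = 1
A = np.array([[q['a'], q['b']],
              [1.0,    1.0   ]])
rhs = np.array([p_hat, 1.0])
x_a, x_b = np.linalg.solve(A, rhs)   # x_a = 0.4, x_b = 0.6

# objective (1): infinity norm of the perturbation away from sigma_h
xi = max(abs(x_a - sigma_h['a']), abs(x_b - sigma_h['b']))   # 0.5
```

In larger instances the states are coupled only through the fixed p̂ values, which is exactly why the resulting program is linear in the strategy variables.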
VI Case study and experiments
Defining a formal synthesis approach to the shared control scenario requires a precomputed estimation of a human user’s intentions. As explained in the previous sections, we account for inherent uncertainties by using a randomized strategy over the possible actions to take. We discuss how such strategies may be obtained and report on benchmark results.

VI-A Experimental setting
Our setting is the wheelchair scenario from Example IV inside an interactive Python environment. The size of the grid is variable and an arbitrary number of stationary and randomly moving obstacles (the vacuum cleaner) can be defined. An agent (the wheelchair) is moved according to predefined (randomized) strategies or interactively by a human user.
From this scenario, an MDP with states corresponding to the positions of the agent and the obstacles is generated. Actions induce position changes of the agent. The safety specification ensures that, with a certain high probability λ, the agent reaches a target cell without crashing into an obstacle; formally, P_{≥λ}(¬crash U target). We use the probabilistic model checker PRISM [15] for verification, in the form of either a worst-case analysis over all possible strategies or concretely for a specific strategy. The whole toolchain integrates the simulation environment with the approaches described in the previous sections. We use the NLP solver IPOPT [4] and the LP solver Gurobi [12]. To perform model repair for strategies (see Section V), we implemented the greedy method from [17] in our framework, augmented by side constraints ensuring well-defined strategies.
VI-B Data collection
We asked five participants to perform tests in the environment with the goal of moving the agent to a target cell while never being in the same cell as the moving obstacle. From the data obtained from each participant, an individual randomized human strategy for this participant can be obtained via maximum-entropy inverse reinforcement learning (MEIRL) [22]. Inverse reinforcement learning has, for instance, also been used in [14] to collect data about human behavior in a shared control scenario (though without any formal guarantees) and in [18] to distinguish human intents with respect to different tasks. In our setting, each sample is one particular command of the participant, where we have to assume that each command is actually issued with the intent of satisfying the specification of safely reaching a target cell. For the resulting strategy, the probability of a possible deviation from the actual intent can be bounded with respect to the number of samples using Hoeffding’s inequality; see [21] for details. Conversely, we can determine the number of samples needed to get a reasonable approximation of typical behavior.
The concrete probability of a deviation depends on the number of samples n and the desired upper bound ε on the deviation between the true probability of satisfying the specification and the average obtained from the sampled data. In order to ensure the upper bound ε with high probability, the required number of samples grows logarithmically in the inverse of the admissible error probability and quadratically in 1/ε.
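The referenced bound is not spelled out above; in its standard two-sided form, Hoeffding’s inequality gives P(|p̄ − p| ≥ ε) ≤ 2·exp(−2nε²), which yields the sample-size computation below. This is the textbook form, not necessarily the exact constant used in [21].

```python
import math

def hoeffding_samples(eps, gamma):
    """Smallest n such that 2*exp(-2*n*eps**2) <= gamma, i.e., the
    empirical average of n i.i.d. samples deviates from the true
    probability by more than eps with probability at most gamma."""
    return math.ceil(math.log(2.0 / gamma) / (2.0 * eps ** 2))

# To be within 0.1 of the true satisfaction probability with
# probability at least 0.95:
n = hoeffding_samples(0.1, 0.05)  # 185
```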
VI-C Experiments
The work flow of the experiments is depicted in Figure 4. First, we discuss sample data for one particular participant using a grid with one moving obstacle, which induces an MDP. For the synthesis, we employ the model repair procedure as explained in Section V, because the approach based on NLP is only feasible for very small examples. We design the blending function as follows: at states where the human strategy induces a high probability of crashing, we put low confidence in the human, and vice versa. Using this function, the autonomous strategy is created and passed (together with the function) back to the environment. Note that the blended strategy is ensured to satisfy the specification; see Lemma V. Now, we let the same participant as before do test runs, but this time we blend the human commands with the (randomized) commands of the autonomous strategy; the actual action of the agent is then determined stochastically. We obtain the following results. Our safety specification is instantiated such that the target must be safely reached with at least a certain high probability. The probability induced by the human strategy lies below this threshold, violating the specification. With the aforementioned blending function we compute an autonomous strategy whose induced probability exceeds the threshold, and blending the two strategies yields a probability that satisfies the specification. When testing the synthesized autonomy protocol for the individual participant, we observe that their choices are mostly corrected if intentionally bad decisions are made. Also, simulating the blended strategy leads to the expected result that the agent does not crash in the vast majority of cases.
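The state-wise blending used above can be sketched as a convex combination of the two randomized strategies. The function and variable names below are illustrative, not the paper's notation; `confidence[s]` plays the role of the blending function at state s.

```python
def blend(sigma_h, sigma_a, confidence):
    """Blend two randomized strategies, each given as a dict
    state -> {action: probability}. confidence[s] in [0, 1] is the
    weight put on the human's command at state s; low confidence is
    assigned where the human strategy risks crashing."""
    blended = {}
    for s in sigma_h:
        b = confidence[s]
        actions = set(sigma_h[s]) | set(sigma_a[s])
        blended[s] = {a: b * sigma_h[s].get(a, 0.0)
                         + (1.0 - b) * sigma_a[s].get(a, 0.0)
                      for a in actions}
    return blended
```

Since each entry is a convex combination of two probability distributions, the result is again a well-defined randomized strategy, from which the agent's actual action is sampled.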
To make the behavior of the strategies more accessible, consider Figure 5. For each of the human, autonomous, and blended strategies, we indicate for each cell of the grid the worst-case probability of safely reaching the target. This probability depends on the current position of the obstacle, which is again probabilistic. The darker the color, the higher the probability; black indicates a probability of 1 of reaching the target. We observe that the human's decisions are rather risky even near the target, while for the blended strategy, once the agent is near the target, there is a very high probability of reaching it safely. This representation also shows that with our approach the blended strategy improves upon the human strategy while not changing it too much. Specifically, the maximal deviation from the human strategy remains small; it is measured by the infinity norm as in Equation 1.
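The deviation measure can be sketched as follows, assuming the infinity norm is taken over all state-action probability differences between the human and the blended strategy (the exact definition in Equation 1 may differ in detail):

```python
def max_deviation(sigma_h, sigma_ha):
    """Largest absolute difference in any action probability at any
    state between two randomized strategies, each given as a dict
    state -> {action: probability}."""
    dev = 0.0
    for s in sigma_h:
        actions = set(sigma_h[s]) | set(sigma_ha[s])
        for a in actions:
            dev = max(dev, abs(sigma_h[s].get(a, 0.0)
                               - sigma_ha[s].get(a, 0.0)))
    return dev
```

A small value of this measure indicates that the blended strategy stays close to the participant's own behavior while repairing its unsafe choices.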
To finally assess the scalability of our approach, consider Table II. We generated MDPs for several grid sizes, numbers of obstacles, and human strategies. We list the number of reachable MDP states (states) and the number of transitions (trans.). We report the time the synthesis process took (synth.), which is essentially the time for solving the LP, and the total time including model checking with PRISM (total), both measured in seconds. To give an indication of the quality of the synthesis, we list the deviation from the human strategy. A memory-out is indicated by "–MO–". All experiments were conducted on a 2.3 GHz machine with 8 GB of RAM. Note that MDPs resulting from grid structures are very strongly connected, resulting in a large number of transitions; thus, the encoding in the PRISM language [15] is very large, rendering it a hard problem. We observe that while the procedure is very efficient for models having a few thousand states and hundreds of thousands of transitions, its scalability is ultimately limited due to memory issues. In the future, we will utilize efficient symbolic data structures internal to PRISM. Moreover, we observe that for larger benchmarks the computation time is governed by the time for solving the LP.
grid | obst. | states | trans. | synth. | total
–MO– | –MO– | –MO–
VII Conclusion
We introduced a formal approach to synthesize autonomy protocols in a shared control setting with guarantees on quantitative safety and performance specifications. The practical usability of our approach was shown by means of databased experiments. Future work will concern experiments in robotic scenarios and further improvement of the scalability.
References

[1] Pieter Abbeel and Andrew Y. Ng. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the Twenty-First International Conference on Machine Learning, page 1. ACM, 2004.
[2] Christel Baier and Joost-Pieter Katoen. Principles of Model Checking. The MIT Press, 2008.
 [3] Ezio Bartocci, Radu Grosu, Panagiotis Katsaros, CR Ramakrishnan, and Scott A Smolka. Model repair for probabilistic systems. In TACAS, volume 6605 of LNCS, pages 326–340. Springer, 2011.
[4] Lorenz T. Biegler and Victor M. Zavala. Large-scale nonlinear programming using IPOPT: An integrating framework for enterprise-wide dynamic optimization. Computers & Chemical Engineering, 33(3):575–582, 2009.
[5] Taolue Chen, Yuan Feng, David S. Rosenblum, and Guoxin Su. Perturbation analysis in verification of discrete-time Markov chains. In CONCUR, volume 8704 of LNCS, pages 218–233. Springer, 2014.
 [6] Taolue Chen, Ernst Moritz Hahn, Tingting Han, Marta Kwiatkowska, Hongyang Qu, and Lijun Zhang. Model repair for Markov decision processes. In TASE, pages 85–92. IEEE CS, 2013.
 [7] Anca D. Dragan and Siddhartha S. Srinivasa. Formalizing assistive teleoperation. In Robotics: Science and Systems, 2012.
[8] Anca D. Dragan and Siddhartha S. Srinivasa. A policy-blending formalism for shared control. I. J. Robotic Res., 32(7):790–805, 2013.
[9] Kousha Etessami, Marta Z. Kwiatkowska, Moshe Y. Vardi, and Mihalis Yannakakis. Multi-objective model checking of Markov decision processes. Logical Methods in Computer Science, 4(4), 2008.
 [10] Jie Fu and Ufuk Topcu. Synthesis of shared autonomy policies with temporal logic specifications. IEEE Trans. Automation Science and Engineering, 13(1):7–17, 2016.
[11] F. Galán, M. Nuttin, E. Lew, P. W. Ferrez, G. Vanacker, J. Philips, and J. del R. Millán. A brain-actuated wheelchair: Asynchronous and non-invasive brain-computer interfaces for continuous control of robots. Clinical Neurophysiology, 119(9):2159–2169, 2008.
 [12] Gurobi Optimization, Inc. Gurobi optimizer reference manual. http://www.gurobi.com, 2013.
[13] Iñaki Iturrate, Jason Omedes, and Luis Montesano. Shared control of a robot using EEG-based feedback signals. In Proceedings of the 2nd Workshop on Machine Learning for Interactive Systems: Bridging the Gap Between Perception, Action and Communication, MLIS ’13, pages 45–50, New York, NY, USA, 2013. ACM.
 [14] Shervin Javdani, J Andrew Bagnell, and Siddhartha Srinivasa. Shared autonomy via hindsight optimization. In Proceedings of Robotics: Science and Systems, 2015.
[15] Marta Kwiatkowska, Gethin Norman, and David Parker. PRISM 4.0: Verification of probabilistic real-time systems. In CAV, volume 6806 of LNCS, pages 585–591. Springer, 2011.
[16] Andrew Y. Ng, Stuart J. Russell, et al. Algorithms for inverse reinforcement learning. In ICML, pages 663–670, 2000.
[17] Shashank Pathak, Erika Ábrahám, Nils Jansen, Armando Tacchella, and Joost-Pieter Katoen. A greedy approach for the efficient repair of stochastic models. In NFM, volume 9058 of LNCS, pages 295–309. Springer, 2015.
 [18] Constantin A Rothkopf and Dana H Ballard. Modular inverse reinforcement learning for visuomotor behavior. Biological cybernetics, 107(4):477–490, 2013.
 [19] Pete Trautman. Assistive planning in complex, dynamic environments: a probabilistic approach. CoRR, abs/1506.06784, 2015.
 [20] Pete Trautman. A unified approach to 3 basic challenges in shared autonomy. CoRR, abs/1508.01545, 2015.
[21] Brian D. Ziebart. Modeling purposeful adaptive behavior with the principle of maximum causal entropy. PhD thesis, Carnegie Mellon University, 2010.
[22] Brian D. Ziebart, Andrew L. Maas, J. Andrew Bagnell, and Anind K. Dey. Maximum entropy inverse reinforcement learning. In AAAI, pages 1433–1438, 2008.