Reality as a simulation of reality: robot illusions, fundamental limits, and a physical demonstration

09/09/2019 ∙ by Dylan A. Shell, et al.

We consider problems in which robots conspire to present a view of the world that differs from reality. The inquiry is motivated by the problem of validating robot behavior physically despite there being a discrepancy between the robots we have at hand and those we wish to study, or the environment for testing that is available versus that which is desired, or other potential mismatches in this vein. After formulating the concept of a convincing illusion, essentially a notion of system simulation that takes place in the real world, we examine the implications of this type of simulability in terms of infrastructure requirements. Time is one important resource: some robots may be able to simulate some others but, perhaps, only at a rate that is slower than real-time. This difference gives a way of relating the simulating and the simulated systems in a form that is relative. We establish several theorems, including one with the flavor of an impossibility result, and provide examples throughout. Finally, we present data from a simple multi-robot experiment based on this theory, with a robot navigating amid an unbounded field of obstacles.



I Motivation and Overview

Robotics papers usually include evidence of algorithms or controllers that have been executed or evaluated on some kind of system, typically comprising either physical robots or a substitute. But what constitutes a robot demonstration, exactly? One division is generally drawn between software simulation and real robots. This is, at best, a rather rough distinction, for there is a spectrum of simulators spanning a wide range of fidelities. Actually, the same might be said for physical robots: data and conclusions from robots using, say, a sophisticated external motion capture system, or cameras with visual markers, might be representative of robots operating in the field with GPS. Or, on the other hand, depending on what you’re trying to do, they might not.

What is certain is that there are more choices, between full software simulation and full physical implementation, than are generally recognized or garner attention (see Figure 1). Inasmuch as this is critical for robotics as a scientific enterprise, it is perhaps curious that there has been little formal treatment of representativeness or verisimilitude beyond the two extremes of complete hardware and complete software. This paper’s raison d’être is to initiate a close, systematic examination of these other options.

Fig. 1: An illustration of different modes of fakery matched with appropriate intellectual positions: the orthodox view (left) and two forms of philosophical skepticism (center and right). The left column represents an end-to-end physical robot experiment with every part being real. The middle column, bearing the bust of David Hume, involves real sensors but also reason to doubt the veracity of the world they report. The rightmost column, headed by Bishop Berkeley, has a real controller but all other elements are mere software—it represents a robot experiment conducted in simulation (for the commonplace use of that term) where information is injected directly into the robot’s cortex. Hume’s column is the case studied in the paper: robots perceive a world mediated by sensors that are grounded in the real world, but it is a world of “nothing but sophistry and illusion.”

We want to understand how one physical system may be used to mimic the behavior of another. By system, we are considering a setting where observations are made (via sensors) and used to choose actions that are effected (via actuators) and this unfolds over time. We begin with a simplified discrete-time setting (Definition 1) where we can contemplate exact emulation (Definition 2), rather than considering approximate or imprecise imitation. The central features which distinguish the approach from other formalisms of emulation between robot systems (see Section II) are the possibility of variable time expansion (somewhat akin to Milner’s weak bisimulation [21]) and a narrow focus on mimicry only up to the perceptual capabilities of the system under emulation.

We then formulate some particular questions, such as: “What are the resources involved, how do we quantify resource requirements, and how do we relate them?” (Definition 3, Theorem 4), “How do we compose or nest such systems?” (Theorem 2), “What happens to these things when systems are modified?” (Theorem 3), etc. Even this simple setting is replete with possibilities, some of which are both exciting and enticing (Section VII).

In terms of immediate utility for the practitioner, the present paper shows how to conduct a novel sort of emulation with real hardware where sensors, rather than being faked out of whole cloth—as is usually done with computational or mathematical models that are highly idealized coarse approximations—provide real signals. As the instances we study herein show, there may be considerable freedom in choosing different ways to emulate one system with another, with implications for future robotic laboratory infrastructure.

II Related work

II-A Animal studies: The inspiration for the present work

For decades, biologists have sought to chart the perceptual limits of organisms and to understand how informational mismatches affect behavior [39, 9]. Recent years have seen virtual and augmented reality technologies being used in this quest with considerable enthusiasm. The animals studied range from small mammals [12] down to insects [36], being studied both while walking [35] and flying [10, 11, 18]. The journal Current Zoology recently devoted a special issue to the topic [40]. As a concrete example, Takalo et al. constructed a laboratory apparatus comprising a spherical projection surface and a track ball that enables the detailed study of the walking behavior of the cockroach (Periplaneta americana) by providing it with synthetic visual stimuli, ultimately to give a systems-level understanding of the organism [35].

II-B Practical simulation in software

Software simulations are an inescapable part of the current robotics research landscape, with the community devoting much time and attention to related questions, including through the biennial SIMPAR conference. The software traces out some element of a robot’s execution in a virtual, rather than physical, world, generating artificial sensor readings (or sometimes state information), and evolving the robot system forward in time.

Central to most discussions about software simulation are considerations of fidelity: How closely does the simulator mimic the real world? High-fidelity simulation software like Gazebo [14] has been developed to account for many of the complications experienced by the complex robots native to many research labs. But fidelity may be traded for other features, as some efforts strike a “useful balance between fidelity and abstraction” [38]. Other simulators exist, designed for specific robot types [13, 6, 30, 27, 7, 32], optimization/control schemes [37], and application domains [2, 23, 31].

This work is partly a generalization of the traditional notion of robot simulation, but with elements of the simulation conducted physically rather than virtually. Closely related work includes endeavors that alter aspects of the physical world using mixed or augmented reality techniques [1, 33, 4, 3]. The distinguishing feature here is that modifications of the world are made by robots and for robots, not human developers or operators, and not via additional display technologies.

II-C Simulation as theoretical concept

Relating systems by the fact that they can simulate each other, for some definition of simulation, is a recurring theoretical theme. The symmetric notion, where two systems are each able to match the other, yields the concept of bisimulation, which is an equivalence relation. Bisimilarity was identified independently in modeling concurrent systems [20] and in modal logic [26]; it also has a game theoretic interpretation [34].

Closer to home in robotics, invariants among sensori-computation circuits of Donald [8], and the dominance relation between robot systems introduced by O’Kane and LaValle [25], bear parallels to the notion of illusion we introduce here, particularly in the use of one system, or re-arrangements of the resources contained therein, to emulate certain properties of another. In this paper, the emphasis on perceptual equivalence for the robots participating in the illusion is fresh.

III Preliminary definitions

III-A Systems

We wish to talk about relationships between pairs of systems of robots. First, then, we need to define the notion of a system. Because henceforward we shall consider systems consisting of possibly many robots, we jump directly into definitions that consider (potentially) multiple robots. Superscripts in parentheses denote robot indices; subscripts are time indices.

Definition 1.

A deterministic multi-robot transition system is a 7-tuple , in which

  1. is a positive integer identifying the number of robots,

  2. denotes a state space, composed of individual state spaces for each robot,

  3. denotes an action space, composed of individual action spaces for each robot,

  4. is a state transition function, defined in terms of transition functions for each robot, so that

  5. denotes an observation space, composed of individual observation spaces,

  6. is an observation function, defined in terms of observation functions for each robot, so that ,

  7. is the system’s initial state.

Such a system evolves, in a series of discrete time steps, subject to the following pair of equations:

Fig. 2: An example of the sort of system in Example 1, with . At each time step , each robot is at some point along the roadway moving with velocity , and measures the distances to the adjacent robots.

A few simple examples, to be revisited later, illustrate the idea.

Example 1.

Consider a caravan of autonomous vehicles —that is, robots— moving down a long single-lane roadway. Suppose each robot can control its own velocity, subject to some upper and lower bounds, and can also measure the distance to the other robots immediately in front of and immediately behind itself. See Figure 2. We might describe this scenario as a deterministic multi-robot transition system


for which we’ll give the state transition and observation functions shortly. Here elements of the state space encode the position, along the one-dimensional roadway, of each of the robots. At each time step , the action of robot denotes the velocity of that robot at that time. Thus, we may define . We assume that . Each observation is a pair of integers indicating the distance to the closest other robot, if any, in each direction:

To refer to the individual measurements in a single observation, we use the symbols and for the distances behind and ahead, so that . Finally, the initial state is some known but arbitrary state.

Notice that (1) is, in fact, defining an infinite family of systems, parameterized by the number of vehicles in the system and the ranges of allowable velocities.

This is, of course, a heavily idealized model of caravaning autonomous vehicles, crafted as an elementary illustration of Definition 1. Richer models might, for example, expand to model multi-lane roadways or the robots’ lateral positions within the lanes, enrich and to model the dynamics of some physical system more faithfully, or modify and to model, say, a lidar sensor with greater fidelity.
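For concreteness, the caravan of Example 1 can be rendered in code. This is a minimal sketch, with assumed names (`step`, `observe`) and an integer state space standing in for the paper's abstract transition and observation functions:

```python
# A minimal sketch of Example 1's caravan as a deterministic multi-robot
# transition system. The function names and the integer positions are our
# assumptions; the paper leaves the exact symbols abstract.

def step(x, u):
    """Transition: each robot adds its chosen velocity to its position."""
    return tuple(xi + ui for xi, ui in zip(x, u))

def observe(x):
    """Observation: each robot measures the gap to the nearest robot
    behind and ahead of it (None when no such robot exists)."""
    obs = []
    for i, xi in enumerate(x):
        behind = [xi - xj for j, xj in enumerate(x) if j != i and xj < xi]
        ahead = [xj - xi for j, xj in enumerate(x) if j != i and xj > xi]
        obs.append((min(behind) if behind else None,
                    min(ahead) if ahead else None))
    return tuple(obs)

# One step of the discrete-time evolution described after Definition 1:
x0 = (0, 3, 10)    # initial positions along the roadway
u0 = (1, 1, -2)    # velocities chosen at time 0
x1 = step(x0, u0)  # -> (1, 4, 8)
y1 = observe(x1)   # middle robot sees gaps (3, 4)
```

The system is deterministic: repeating the same action sequence from the same initial state reproduces the same trajectory, which is what makes exact emulation (rather than approximate imitation) a sensible goal.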

Example 2.

Consider a system in which many small disk-shaped differential drive robots move in a shared, bounded, planar workspace, with each robot aware of the relative positions of the other robots within some small sensor range. Refer to Figure 3[left]. One might realize this kind of system using, for example, Khepera [22], r-one [19], or GRITSbot [29] robots. We can model such a system by choosing the number of robots , the rectangular workspace , the maximum wheel velocity , and the sensor range . We then define


in which the states in , the actions in denote the left and right wheel velocities for each robot, the state transition function encodes the well-known kinematics for differential drive robots, the observations in are lists of between and planar positions, the observation function for each robot returns a list of the relative positions of any other robots within distance of robot , and the initial state is a known but arbitrary state.
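A sketch of the per-robot pieces of Example 2, assuming the standard differential-drive (unicycle) model; the parameter names (wheel radius `r`, axle length `L`, time step `dt`) are illustrative, not taken from the paper:

```python
# Per-robot transition and observation for Example 2, under the standard
# differential-drive kinematics. Parameter values are assumptions.
import math

def diff_drive_step(pose, wheels, r=0.02, L=0.1, dt=0.1):
    """Advance one robot's pose (x, y, theta) given (v_left, v_right)."""
    x, y, theta = pose
    vl, vr = wheels
    v = r * (vl + vr) / 2.0   # forward speed
    w = r * (vr - vl) / L     # angular speed
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + w * dt)

def observe_neighbors(poses, i, sensor_range):
    """Observation for robot i: relative positions of robots in range."""
    xi, yi, _ = poses[i]
    out = []
    for j, (xj, yj, _) in enumerate(poses):
        if j != i and math.hypot(xj - xi, yj - yi) <= sensor_range:
            out.append((xj - xi, yj - yi))
    return out
```

Equal wheel velocities produce straight-line motion; unequal velocities turn the robot, matching the kinematics the example's transition function encodes.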

Example 3.

Definition 1 is also suitable for describing single-robot systems as a particular case with . For example, a velocity-controlled robot moving in a very large field of nearly-identical static obstacles, with a sensor to detect those obstacles when they are nearby, might be modelled as


with , , and . The observation space and may be defined to return the locations of the center points of each obstacle. See Figure 3[right].

Fig. 3: [left] A team of simple robots in a bounded environment, as in Example 2. [right] A single robot moving in an unbounded field of obstacles, as in Example 3.

III-B Policies

In the model, a robot operates by choosing actions to execute, a concept detailed via a policy. The essential question in formalizing policies is to determine what information is used by the robot in considering its action. Now, to define the policy concept, we adopt the style of LaValle’s book [17].

We begin, first, with something simple that will turn out to be inadequate for our needs. If robot , at time step , has sufficient information that it can determine its state, i.e., it is a fully observable problem, then its policy might be defined as a function of that state:


More likely, the robot will only have access to its history of actions and observations to select its action:


In what follows, one robot system will seek to present some view of the world to match a description as will be seen by some other, secondary system. This primary system must know some aspects of that other system to fool it effectively. That is, the primary system must be aware of the ‘fourth wall’ and know some of the expectations and qualities on the other side of it. Throughout, we use a notational convention: we distinguish the primary system (initially best thought of as the physical system) by placing a hat over its variables; all variables for the secondary system are bare. Now, returning to our formalization of the policy concept, we must generalize the notation so far so that it incorporates information about both the primary system and a secondary one, partitioned as follows


Note that the hatted variables in the domain are labelled from to , while the naked variables extend to . This models the fact that the primary and secondary systems may operate at different time scales. Immediately, one sees other variations that are possible, such as instances when uses only the last element () of the secondary system’s state. Or, when the primary robots may communicate, the superscripts may be dropped when we consider the multi-robot system globally. For simplicity, we restrict our attention in this paper only to the basic case. In what follows, the term robot policy refers to a function of the form in (6).
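The two-timescale policy of (6) can be rendered schematically. Everything here is illustrative scaffolding: a policy is just a function of the primary (hatted) history up to the current physical step and the secondary history up to the current simulated step:

```python
# A schematic rendering of the policy form in (6). The primary (hatted)
# robot conditions on its own action-observation history plus the secondary
# system's history, which may be shorter because the two systems run on
# different time scales. All names are illustrative assumptions.

def make_policy(decide):
    """Wrap a decision rule into a policy over the two histories."""
    def policy(hat_history, secondary_history):
        # hat_history:       [(u_hat_0, y_hat_0), ..., (u_hat_k, y_hat_k)]
        # secondary_history: [(u_0, y_0), ..., (u_t, y_t)]
        return decide(hat_history, secondary_history)
    return policy

# A trivial instance: ignore all history and echo the secondary system's
# latest action (sensible only when the two action spaces coincide).
echo = make_policy(lambda hh, sh: sh[-1][0])
```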

III-C Illusions

Definition 2.

For deterministic multi-robot transition systems , and , and integer , we say that is an -illusion of if there exist

  1. robot policies in ,

  2. a strictly increasing function , and

  3. an infinite series of functions ,

for any robot policies in , such that for all and all , we have

Further, if is an -illusion of , then a tuple of robot policies, mapping functions, and a time scaling function that ratifies the definition of illusion is called a witness to that illusion.

The preceding definition warrants some dissection.

  • We understand the system to be the secondary one, i.e., the one that we intend to emulate. The system is the physical system whose execution will be orchestrated to appear, in the perception of some of its robots, to operate in the same manner as . (Occasionally human illusionists opt for a certain type of stereotypical headwear (). Likewise, our convention uses notation with hats () to refer to systems whose robots are performing an illusion. The parallel is unintentional but perhaps nonetheless a useful aid to understanding.)

  • The positive integer parameter is the number of robots in that are recipients of the illusion, whom we dub the participant robots. To simplify the notation, we will assume without loss of generality that the first robots in , according to their indices, are the participants. (One might also expect for always, as it seems that the number of participant robots cannot exceed the number of robots in the system; in fact, this need not be so, cf. Example LABEL:ex:loads_of_bumper.)

  • The robot policies described in condition (i) govern the movements of the robots in that system.

  • The function from condition (ii) establishes the relationship between the time scales of the two systems, so that defines the physical time step in corresponding to time step in .

  • The functions from condition (iii) indicate, for each time step of the execution in , which robots of play the roles of each of the participant robots in .

Pulling these elements together, the constraint marked (2) requires, at each time step in , that every participant robot is mapped, via the function for that time step, to a robot in that experiences the same observation in that system as the mapped robot should experience in . A few examples follow.

Example 4.

Recall the autonomous caravan systems introduced in Example 1. For any such system , we can form a -illusion from any system of the form . This holds regardless of the number of robots in and of the range of actions available to each robot in .

One way to construct such an illusion is to select a policy in which robot 1 moves at a constant speed . The other two robots, knowing the desired observation from , position themselves on opposite sides of robot 1, moving as fast as possible at each stage in toward positions where and . To satisfy the remaining conditions of Definition 2, define to return the time when robots 2 and 3 in have reached their target positions, and the sequence of mapping functions as a constant series of functions, under which for all . See Figure 4.

Fig. 4: An illustration of Example 4. A system of three vehicles reproduces the observations expected in a system with potentially many more robots.
Example 5.

Recall the system introduced in Example 3. Suppose there exists an upper bound on the number of obstacles visible from —that is, within distance of— any position that the robot might reach. Then , from Example 2, is a -illusion for , provided that it has at least robots, its workspace is large enough to contain a circle of radius , and the sensing range in is no smaller than the sensing range in .

One way to achieve this illusion is to select robot 1 in to act as the recipient of the observations as required by (2). This robot remains motionless at the center of the physical workspace . At each stage in , the desired observation is a list of positions at which robot 1 should perceive obstacles. We choose a policy that directs some of the remaining robots to those positions relative to robot 1, and the others to positions beyond its sensing range. See Figure 5. Many different policies, with varying degrees of time efficiency, can achieve this.

Fig. 5: A -illusion of using .
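Operationally, constraint (2) can be checked mechanically along a finite trace. The sketch below is ours: it assumes pre-recorded observation sequences for both systems, a time-scaling map, and a per-step role assignment, and simply tests that each participant's observation is reproduced exactly:

```python
# Checking the illusion condition (2) along a finite trace. We assume
# y[t][i] records the secondary system's observations, y_hat[s][j] the
# primary's; tau maps secondary steps to primary steps, and sigma[t] maps
# participant indices to primary robot indices. All names are assumptions.

def is_convincing(y, y_hat, tau, sigma, k):
    """True iff every participant's observation is reproduced exactly."""
    for t in range(len(y)):
        for i in range(k):  # participants are robots 0..k-1
            if y[t][i] != y_hat[tau(t)][sigma[t](i)]:
                return False
    return True
```

The check is exact equality of observations, reflecting the paper's insistence on exact, rather than approximate, emulation up to the participants' perceptual capabilities.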

Next, we consider the execution time in the primary system as a resource cost in which we are interested.

Definition 3.

If is an -illusion of with witness , then the illusion is an -illusion if the sequence

is bounded above by . The constant , which we can take to be an integer owing to the definition of , is called the slowdown of the illusion.

In broad terms, we may then consider , the inverse slowdown, to be the time efficiency of an illusion.
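Empirically, the slowdown of Definition 3 can be read off a recorded prefix of the time-scaling function: it is the largest number of primary steps spent per secondary step. This operational reading is ours; the definition itself is a bound over the whole infinite sequence:

```python
# Slowdown of an illusion, computed over a finite recorded prefix of the
# time-scaling function tau (given here as a list of its values).

def slowdown(tau_values):
    """Max gap tau(t+1) - tau(t) over the recorded prefix."""
    return max(b - a for a, b in zip(tau_values, tau_values[1:]))

# e.g. three physical steps per simulated step gives slowdown 3:
assert slowdown([0, 3, 6, 9]) == 3
```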

Example 6.

Recall Example 4. That illusion has slowdown .

IV Basic Properties of Illusions

Definition 2 provides a foundation for understanding the notion of one system presenting an illusion of another. Next, we present some results that follow from that definition. As an initial sanity check, we show that a system does indeed present a faithful and efficient illusion of itself.

Theorem 1 (identity).

A deterministic multi-robot transition system is an (,)-illusion of .


We observe that, if and are taken as identity functions, then (2) holds when . ∎

Considering the preceding theorem, one might wonder whether a stronger statement ought to be made, to the effect that every can provide an -illusion of itself for any . That statement is absent because it is false. Suppose with ; then there are robots that may show up under . Additional properties of are needed to ensure that those robots can be made invisible.

With additional assumptions on the dynamics of , i.e., if the system can be made to either loiter or effect state changes more slowly, then an (,)-illusion with is also possible.

Rather more interesting is the nesting of systems: one system presenting an illusion to another that is itself presenting an illusion to a third.

Theorem 2 (composition).

If , is an -illusion of , and is an -illusion of , then is an -illusion of .


Assume is a witness for ’s illusion for , and is a witness for ’s illusion for . To show to be an -illusion of :

  1. take policies in because, since they must suffice for any policies in , they must suffice for in particular;

  2. take the function , which is increasing, from to , being the composition of two such functions; and

  3. the infinite series of functions , with .

The definition of means that, for all , , for some . Since is increasing, . But, since , the series telescopes: . ∎

Note that, in (iii), function composition requires that be an -illusion of in order for the types to agree. If were only an -illusion of with , then extra robots would be needed to create an illusion for . This arises because we do not speak of some subset of robots in one system sufficing to provide an illusion of another system: all the primary robots need to participate to ensure the illusion succeeds, even if participating only means moving so as to remain unobserved; otherwise the illusion would be ruined.

Illusions hold up to the set of observations made in the secondary system. One might expect that but, in fact, may be larger or smaller, though the pair cannot be disjoint. It is not the range but the image which matters:

Definition 4.

The perceptual occurrence of deterministic system , is the subset of , denoted , that is produced under via states reachable by some robot policies .

In Definition 2, requirement (2) implies that if is an illusion of , then .

Now we might inquire as to the implications for illusions under alteration of the robots’ sensors. We model potential degradation, or preimage coarsening, of sensors via a function on the observation space, where non-injective transformations conflate things that were formerly distinguishable.

Theorem 3 (coarser observations).

If is an illusion of , then, for any function , we have that is an illusion of . (This theorem holds for a slightly broader, albeit more obscure, class of functions: one may take the disjoint union as the domain, , so long as there is agreement on the function restrictions up to perceptual occurrence in the secondary system, i.e., , .)


The original witness ratifies the new illusion, since

in which the left equality needs to hold over only. ∎

It may seem, intuitively, that if ’s sensors are weakened, then that should only make illusionability more feasible. But for an illusion to be passable, the definition requires that it appear identical to , which thus prohibits the robot’s sensors from operating with implausibly high fidelity. We note that, though beyond the scope of this work, if one may alter the secondary robot system, then the story changes. One could apply computationally, degrading the sensor’s signals ex post facto, by introducing a small software shim in the position indicated with the in Figure 1.
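The coarsening of Theorem 3 is just function composition, and the theorem says the original witness survives it. A sketch, with an illustrative quantizer standing in for the (abstract) non-injective map:

```python
# Sensor coarsening per Theorem 3: composing a (possibly non-injective)
# map g with the observation function h yields a degraded sensor, and the
# same witness still ratifies the illusion. Names are illustrative.

def coarsen(h, g):
    """Return the degraded observation function: g composed with h."""
    return lambda x: g(h(x))

h = lambda x: x          # idealized sensor: exact position
g = lambda y: round(y)   # quantizer: conflates nearby readings

h_coarse = coarsen(h, g)
# States distinguishable under h become identical under the composition:
# h(2.2) != h(2.4), but h_coarse(2.2) == h_coarse(2.4) == 2
```

Since equal observations remain equal after applying any function to both sides, condition (2) is preserved, which is the whole content of the one-line proof.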

V The limits of illusion

Why have two definitions (Definitions 2 and 3) to separate -illusions from (, )-illusions? The next result establishes that pairs of systems exist where the primary system is sufficiently powerful to conjure an illusion of the second, but the gap in relative speeds has no limit. Put another way, for any execution in the one, the other can create a faithful illusion, but no bound exists on the illusion’s slowdown (i.e., there is no finite such that it is an -illusion). The result is that it is impossible for the primary system to present any illusion of the secondary system satisfying Definition 3.

Theorem 4 (Illusions with no bounded ).

There exist deterministic multi-robot transition systems and where the latter is an -illusion of the former, but for which no exists such that it is an -illusion.

Proof roadmap

We give constructions for both and , then show that is indeed a -illusion of (Lemma 1); and also, that any desired bound placed on the slowdown will be surpassed (Lemma 2).

Fig. 6: A visual representation of the two systems in Constructions 1 and 2. The green robot at the top of the figure is , the red robot below is . Both measure the environment with , yielding a sensor whose preimage information is diagrammed in blue.
Construction 1 ().

We define the following deterministic multi-robot transition system

where , and, dubbed squeeze,

This describes a robot that lives on the positive -axis and which moves along in discrete steps, each with size units. This is shown as the green robot in the top diagram in Figure 6. The robot is equipped with a stylized range sensor that measures a quantized distance to an obstacle at the origin (the blue information in the diagram). The sensor’s precision increases (geometrically) with increasing , with readings outside the stripes of increasing precision returning a generic reading, . (The sensor’s behavior here is essentially arbitrary for the construction; the symbol emphasizes its insignificance.)

Construction 2 ().

Next, consider deterministic multi-robot transition system

where and are as in the preceding construction.

This robot also lives on the positive -axis and moves in steps. It has rather more options for its movement: it moves in either direction with steps that are negative powers of two. This is shown as the red robot in the bottom part of Figure 6, where the arrows show ‘hops’ of length , these being a sample of some actions available to the robot.

Lemma 1.

is a -illusion of .


Function is determined because the systems have only one robot each. Since the systems share the same state space and observation function, the approach is to have the robot in navigate toward the position that the robot in would appear in. Given the other system’s previous state and the action it wishes executed , the hatted robot computes the target position . It then compares this with its current position (computed by integrating forward from ). If the positions are equal, which can happen at integral positions, there is nothing that needs doing and the function causes to continue. Otherwise, the comparison indicates whether the movement will be in the positive direction or the reverse—which involves selecting the appropriate sign. Next, enumerate until a step size is found that is sufficiently small to ensure the robot will not overshoot the target position. The hatted robot then takes this action. If the resulting position is still more than units away from , this last step is repeated: first computing the largest step size that doesn’t jump over the target, then taking that step. This process converges on , and terminates when close enough.

Generally, the hatted robot takes multiple steps to get into a position close enough to appear in the right region under . These multiple steps are plateaus in . With more precision being needed the further the robots are from the origin, the number of steps in the plateaus will depend on the coordinate. Nevertheless, for any position the robot wishes to occupy, there are a finite number of steps that the robot needs to take, as the target region is an interval with distinct endpoints and, therefore, contains some finite binary fraction. ∎
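The stepping scheme in this proof can be sketched directly: from the current position, repeatedly take the largest step of size 2^(-k) that does not overshoot the target, stopping once within tolerance. Exact rational arithmetic makes the convergence claim visible; the function names and tolerance argument are ours:

```python
# The greedy power-of-two stepping from Lemma 1's proof: largest
# non-overshooting hop of size 2**(-k), repeated until within tolerance.
# Exact rationals avoid floating-point drift. Names are illustrative.
from fractions import Fraction

def approach(x, target, tol):
    """Return (final position, number of steps taken)."""
    x, target, tol = Fraction(x), Fraction(target), Fraction(tol)
    steps = 0
    while abs(target - x) > tol:
        sign = 1 if target > x else -1
        k = 0
        while Fraction(1, 2 ** k) > abs(target - x):
            k += 1  # shrink the hop until it cannot overshoot
        x += sign * Fraction(1, 2 ** k)
        steps += 1
    return x, steps
```

When the gap is a finite binary fraction the loop lands on the target exactly, mirroring the proof's observation that the target interval always contains such a fraction.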

Lemma 2.

For any -illusion of by , and any finite , the constant policy for the robot in implies that some exists where


Suppose a -illusion of by is given. Now consider the constant policy with the robot in moving to the right. For every time , the robot in wishes to have reached state . Consider a time for some . At that point, the robot in must be in , that is, the preimage corresponding to the observation to be seen by the robot in . But that means that , or . This means that, at time , if the states of robots in the respective systems are written in binary form, they will certainly agree up to the first digits after the point.

At time step , the binary representation of the state is , where the bits to the left of the point represent . At the next time step, the state is . At step , agrees on the first digits. The robot in must move to a that will have to agree on at least the first digits—but those bits have all flipped. The motion model of system permits it to add or subtract numbers that, when expressed in binary, comprise only a single bit to the right of the point. This is the only way it permits state to change, no matter the mechanism employed by the illusion. Either addition or subtraction of such numbers can trigger the alteration of a chain of bits through the carry mechanism (either a string of s for addition, or a string of s for subtraction). Because we start with s and s alternating in the first digits, an amortized analysis shows that even those steps which seem to trigger long bit changes must have been paid for by earlier steps that set them up. (See, for example, Section 17.1 of Cormen et al. [5], for details of this particular amortized analysis.) The most efficient means to flip the bits of takes at least steps. As time evolves, increases and the robot in moves steadily to the right, but the number of steps needed by the robot to maintain a plausible illusion between times and is not constant: it is at least , i.e., . Taking thus ensures that the condition is met. ∎
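The amortized argument invoked here is the classic binary-counter analysis: n increments flip at most 2n bits in total, so occasional long carry chains are paid for by many cheap steps. A small self-contained demonstration (our own, not from the paper):

```python
# The binary-counter amortized analysis (Cormen et al., Sec. 17.1):
# incrementing a counter n times flips at most 2n bits in total, even
# though a single increment can flip many bits via a carry chain.

def increment_cost(bits):
    """Apply +1 to a little-endian bit list; return bits flipped."""
    flips = 0
    i = 0
    while i < len(bits) and bits[i] == 1:
        bits[i] = 0   # carry: a run of 1s turns to 0s
        flips += 1
        i += 1
    if i == len(bits):
        bits.append(1)
    else:
        bits[i] = 1
    flips += 1
    return flips

bits, total = [0] * 8, 0
for _ in range(100):
    total += increment_cost(bits)
# total flips over n increments never exceeds 2n:
assert total <= 200
```

The lemma's lower bound runs the same bookkeeping in reverse: forcing a specific alternating bit pattern to flip cannot be done in fewer single-bit moves than the amortized accounting allows.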


The preceding two Lemmas prove Theorem 4. ∎

VI Physical demonstration in the Robotarium

As a proof-of-concept, we implemented the illusion described in Example 5 both in simulation and on a physical robot testbed. Simulations were conducted using an implementation in Python; physical experiments were conducted in the Robotarium [28]. Figure 7 shows a snapshot of the execution. Refer also to the supplemental video.

Fig. 7: A collection of 10 robots performing the illusion of Example 5. Robot 1 (center) remains motionless. Eight other robots play the roles of eight nearby obstacles. A tenth robot (far right) remains out of view.

Note that Example 5 calls for the complicit robots to assume certain positions, but does not prescribe which robots should take which roles. We implemented three distinct strategies:

  1. A naïve matching strategy, in which robots are assigned to roles from left to right, in order of their indices.

  2. The Hungarian algorithm [15, 24] for optimal task assignment, wherein some robots are assigned to obstacle roles and the remaining robots travel to the nearest location outside of the sensor range. The matching is selected to minimize the total travel time.

  3. An enhancement of the Hungarian strategy with a heuristic that directs the offstage robots to the locations of the nearest obstacles that are not yet visible.
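The optimization at the heart of strategy (ii) can be sketched in a few lines. The Hungarian algorithm [15, 24] finds a minimum-cost assignment in polynomial time; for the handful of robots used here, a brute-force search over permutations computes the same optimal matching and makes the objective explicit. The positions and cost function below are hypothetical stand-ins, not values from our experiments.

```python
from itertools import permutations
import math

def assignment_cost(robots, roles, perm):
    """Total Euclidean travel distance if robot i takes role perm[i]."""
    return sum(math.dist(robots[i], roles[perm[i]]) for i in range(len(robots)))

def optimal_assignment(robots, roles):
    """Return, for each robot, the index of its assigned role, minimizing total travel.

    Brute force over all permutations; the Hungarian algorithm computes the
    same answer in O(n^3) and is what one would use for larger teams.
    """
    best = min(permutations(range(len(roles))),
               key=lambda p: assignment_cost(robots, roles, p))
    return list(best)

# Three robots, three obstacle roles; each robot ends up at its nearest role.
robots = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]
roles = [(0.0, 2.1), (0.1, 0.0), (2.0, 0.1)]
print(optimal_assignment(robots, roles))  # → [1, 2, 0]
```

Minimizing total travel distance is a proxy for travel time when all robots move at the same speed; the heuristic of strategy (iii) changes only the target locations fed into this matching, not the matching itself.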

One might expect the time efficiency of the illusion to depend both on the number of robots employed in the physical system and on the policy that system uses to carry out the illusion. To test this hypothesis, we performed a series of simulations of the policies described above. We conducted 10 trials, each using a distinct randomly-generated path for the robot in . For each, we executed each of the three illusions described above and measured the amount of real time in needed to execute the policy in .
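The measurement procedure just described amounts to a simple timing harness: generate a random path, run each policy on it, and record wall-clock time. The sketch below uses hypothetical names (`random_path`, `time_policy`, `measure`) standing in for our implementation, with a random walk as a placeholder path generator.

```python
import random
import time

def random_path(steps, step_size=0.1):
    """A random walk standing in for a randomly-generated robot path."""
    x, y = 0.0, 0.0
    path = [(x, y)]
    for _ in range(steps):
        x += random.uniform(-step_size, step_size)
        y += random.uniform(-step_size, step_size)
        path.append((x, y))
    return path

def time_policy(policy, path):
    """Wall-clock time to execute one illusion policy on one path."""
    start = time.perf_counter()
    policy(path)
    return time.perf_counter() - start

def measure(policies, trials=10, steps=100):
    """Run every policy on the same random path in each trial; collect timings."""
    results = {name: [] for name in policies}
    for _ in range(trials):
        path = random_path(steps)
        for name, policy in policies.items():
            results[name].append(time_policy(policy, path))
    return results
```

Running all policies on the same path within each trial keeps the comparison paired, so differences between the curves reflect the policies rather than path-to-path variation.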

Several notable trends appear in the results, which are shown in Figure 8. Most plainly, the relative efficiency of the three algorithms matches what one might expect: better use of more information leads to a more time-efficient illusion. For the two methods based on Hungarian matching, opposite trends appear as the number of robots increases: the basic Hungarian approach loses efficiency as robots are added, presumably due to interference from avoiding collisions between the robots. In contrast, the heuristic that positions robots near locations where new obstacles are likely to appear is better able to take advantage of additional robots waiting ‘in the wings’ to take on roles when needed, leading to improvements in efficiency as the number of robots increases.

[Plot: execution time versus number of robots (8–26) for the naïve, Hungarian, and Hungarian + heuristic strategies.]
Fig. 8: Simulation results showing the impact of the number of robots and the policy on time efficiency.

VII Outlook and conclusions: So now what?

There can be an immense variety of very different means of realizing the same illusion. The single lesson that emerges most clearly from our demonstration implementation (both the more thorough simulation trials and the physical instance on the Robotarium, where a time cut-off is imposed) is that distinct approaches may have time efficiencies that differ considerably. Even the more efficient curve in Figure 8 has a slowdown factor of about 9, which is likely an impediment when producing an illusion of robots that one has direct access to. But consider an illusion for the system in Example 3, where the field of obstacles is unbounded: it simply can't be achieved physically. Moreover, if the -illusion has both the participant and the obstacles moving, it is possible to present an illusion for a robot that is faster than any we own. Judging the value of the idea by an early implementation is probably unwise, though an order-of-magnitude gap is not always fatal (compare, for instance, software simulation of VLSI circuits versus hardware).

Several research directions remain, some more pressing than others. We lead with those we deem most important:

  • Extensions to address uncertainty and non-determinism would be most valuable. Some basic questions are still unresolved: is the illusion permitted to structure interactions so that only some of the outcomes arise, or must all be possible? If a probabilistic perspective is adopted, do illusions have to present events with representative statistics?

  • A richer theory of efficient illusions is needed to support reasoning about resource trade-offs. For instance, Theorem 3 says nothing about efficiency. Can sensor preimage coarsening reduce the slowdown? How can one understand the trade-offs between actuator capabilities and the illusions that can be produced?

  • A weaker condition for the ability to produce illusions might impose a notion of distance (or at least some topology) on the observation space, so that almost-illusions or probably-approximately-illusions can be formalized. If cannot be produced via , we might settle for less: the that is ‘closest’ to .

  • Though this paper has not addressed it head-on, some modeling considerations can be subtle. For example, whether velocity, or other aspects tied to physical time, are part of the state space, , is a tricky question. This is partly anticipated in [16].

  • How best to model a system producing two illusions simultaneously? This would allow one to develop a notion of multiprogramming for timeshared physical robot resources, like the Robotarium. Scheduling need not occur at the level of whole experiments; instead, robots become more like virtual memory, where finer-grained concurrency is possible.

  • Can one consider, systematically, what is gained by having greater influence over the robot? The software shim mentioned at the very end of Section III is but one instance. Another alluring possibility, if we can permit per-robot delays in receipt of sensor signals, or even caching of them, is to weaken the requirement of temporal linearity. Doing so could lead to a sort of ‘out-of-order emulation’ for robots and a concomitant acceleration of the execution.


  • [1] P. Amstutz and A. H. Fagg (2002-05) Real time visualization of robot state with mobile virtual reality. In Proceedings of the IEEE International Conference on Robotics and Automation, Washington, DC., USA. Cited by: §II-B.
  • [2] S. Carpin, M. Lewis, J. Wang, S. Balakirsky, and C. Scrapper (2007) USARSim: a robot simulator for research and education. In Proceedings of the IEEE International Conference on Robotics and Automation, pp. 1400–1405. Cited by: §II-B.
  • [3] I. Y. Chen, B. Macdonald, and B. Wünsche (2009-05) Mixed reality simulation for mobile robots. In Proceedings of the IEEE International Conference on Robotics and Automation, Kobe, Japan. Cited by: §II-B.
  • [4] T. H. J. Collett and B. Macdonald (2006-05) Augmented reality visualisation for player. In Proceedings of the IEEE International Conference on Robotics and Automation, Orlando, FL., USA. Cited by: §II-B.
  • [5] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein (2001) Introduction to Algorithms. Second edition, MIT Press. Cited by: §V.
  • [6] J. Craighead, R. Murphy, J. Burke, and B. Goldiez (2007) A survey of commercial & open source unmanned vehicle simulators. In Proceedings of the IEEE International Conference on Robotics and Automation, pp. 852–857. Cited by: §II-B.
  • [7] O. M. Curet, I. K. AlAli, M. A. MacIver, and N. A. Patankar (2010) A versatile implicit iterative approach for fully resolved simulation of self-propulsion. Computer Methods in Applied Mechanics and Engineering 199 (37-40), pp. 2417–2424. Cited by: §II-B.
  • [8] B. R. Donald (1995-01) On Information Invariants in Robotics. Artificial Intelligence — Special Volume on Computational Research on Interaction and Agency, Part 1 72 (1–2), pp. 217–304. Cited by: §II-C.
  • [9] D. B. Dusenbery (1992) Sensory Ecology: How Organisms Acquire and Respond to Information. W. H. Freeman. Cited by: §II-A.
  • [10] S. N. Fry, P. Müller, H. Baumann, A. D. Straw, M. Bichsel, and D. Robert (2004-05) Context-dependent stimulus presentation to freely moving animals in 3D. Journal of Neuroscience Methods 135 (1), pp. 149–157. External Links: Document Cited by: §II-A.
  • [11] S. N. Fry, N. Rohrseitz, A. D. Straw, and M. H. Dickinson (2008-06) TrackFly: virtual reality for a behavioral system analysis in free-flying fruit flies. Journal of Neuroscience Methods 171 (1), pp. 110–117. External Links: Document Cited by: §II-A.
  • [12] C. Hölscher, A. Schnee, H. Dahmen, L. Setia, and H. A. Mallot (2005-01) Rats are able to navigate in virtual environments. Journal of Experimental Biology 208, pp. 561–569. External Links: Document Cited by: §II-A.
  • [13] N. Jakobi (1997) Evolutionary Robotics and the Radical Envelope-of-Noise Hypothesis. Adaptive Behavior 6 (2), pp. 325–368. Cited by: §II-B.
  • [14] N. P. Koenig and A. Howard (2004) Design and use paradigms for gazebo, an open-source multi-robot simulator. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Vol. 4, pp. 2149–2154. Cited by: §II-B.
  • [15] H. W. Kuhn (1955) The Hungarian Method for the Assignment Problem. Naval Research Logistic Quarterly 2, pp. 83–97. Cited by: item (ii).
  • [16] S. M. LaValle and M. B. Egerstedt (2007) On Time: Clocks, Chronometers, and Open-Loop Control. In Proceedings of the IEEE International Conference on Decision and Control, pp. 1916–1922. Cited by: 4th item.
  • [17] S. M. LaValle (2006) Planning algorithms. Cambridge University Press, Cambridge, U.K.. Note: Available at Cited by: §III-B.
  • [18] T. Luu, A. Cheung, D. Ball, and M. V. Srinivasan (2011-06) Honeybee flight: a novel ‘streamlining’ response. Journal of Experimental Biology 211, pp. 2215–2225. External Links: Document Cited by: §II-A.
  • [19] J. McLurkin, J. Rykowski, M. John, Q. Kaseman, and A. J. Lynch (2013) Using multi-robot systems for engineering education: Teaching and outreach with large numbers of an advanced, low-cost robot. IEEE Transactions on Education 56 (1), pp. 24–33. Cited by: Example 2.
  • [20] R. Milner (1980) A Calculus of Communicating Systems. Lecture Notes in Computer Science 92, Springer. Cited by: §II-C.
  • [21] R. Milner (1982) Four combinators for concurrency. In Proceedings of the first ACM SIGACT-SIGOPS symposium on Principles of distributed computing, pp. 104–110. Cited by: §I.
  • [22] F. Mondada, E. Franzi, and A. Guignard (1999) The development of khepera. In Proceedings of the First International Khepera Workshop, pp. 7–14. Cited by: Example 2.
  • [23] F. Mondada, G. C. Pettinaro, A. Guignard, I. W. Kwee, D. Floreano, J. Deneubourg, S. Nolfi, L. M. Gambardella, and M. Dorigo (2004) SWARM-bot: a new distributed robotic concept. Autonomous robots 17 (2-3), pp. 193–221. Cited by: §II-B.
  • [24] J. Munkres (1957-03) Algorithms for the Assignment and Transportation Problems. Journal of the Soc. for Industrial and Applied Math. 5 (1), pp. 32–38. Cited by: item (ii).
  • [25] J. M. O’Kane and S. M. LaValle (2008-01) On Comparing the Power of Robots. International Journal of Robotics Research 27 (1), pp. 5–23. External Links: Document Cited by: §II-C.
  • [26] D. Park (1981) Concurrency and automata on infinite sequences. In Theoretical Computer Science, pp. 167–183. Cited by: §II-C.
  • [27] D. Pekarek and T. D. Murphey (2011) A backwards error analysis approach for simulation and control of nonsmooth mechanical systems. In Proceedings of the IEEE International Conference on Decision and Control, pp. 6942–6949. Cited by: §II-B.
  • [28] D. Pickem, P. Glotfelter, L. Wang, M. Mote, A. Ames, E. Feron, and M. Egerstedt (2017) The Robotarium: A remotely accessible swarm robotics research testbed. In Proceedings of the IEEE International Conference on Robotics and Automation, Cited by: §VI.
  • [29] D. Pickem, M. Lee, and M. Egerstedt (2015) The GRITSBot in its natural habitat: A multi-robot testbed. In Proceedings of the IEEE International Conference on Robotics and Automation, Cited by: Example 2.
  • [30] C. Pinciroli, V. Trianni, R. O’Grady, G. Pini, A. Brutschy, M. Brambilla, N. Mathews, E. Ferrante, G. Di Caro, F. Ducatelle, et al. (2011) ARGoS: a modular, multi-engine simulator for heterogeneous swarm robotics. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5027–5034. Cited by: §II-B.
  • [31] A. Rodríguez, L. Basanez, J. E. Colgate, and E. L. Faulring (2010) A framework for the simulation and haptic display of dynamic systems subject to holonomic constraints. International Journal of Robotics Research 29 (4), pp. 336–352. Cited by: §II-B.
  • [32] M. C. Stanley and J. E. Colgate (1992) Computer simulation of interacting dynamic mechanical systems using distributed memory parallel processors. Advances in Robotics 42, pp. 55–61. Cited by: §II-B.
  • [33] M. Stilman, P. Michel, J. Chestnutt, K. Nishiwaki, S. Kagami, and J. Kuffner (2005) Augmented reality for robot development and experimentation. Technical report Technical Report CMU-RI-TR-05-55, Robotics Institute, Carnegie Mellon University, Pittsburgh, PA. Cited by: §II-B.
  • [34] C. Stirling (1995) Local model checking games. In Proceedings of the International Conference on Concurrency Theory (CONCUR’95), pp. 1–11. Cited by: §II-C.
  • [35] J. Takalo, A. Piironen, A. Honkanen, M. Lempeä, M. Aikio, T. Tuukkanen, and M. Vähäsöyrinki (2012-03) A fast and flexible panoramic virtual reality system for behavioural and electrophysiological experiments. Scientific Reports 2 (324). External Links: Document Cited by: §II-A.
  • [36] G. K. Taylor, M. Bacic, R. J. Bomphrey, A. C. Carruthers, J. Gillies, S. M. Walker, and A. L. R. Thomas (2008-01) New experimental approaches to the biology of flight control systems. Journal of Experimental Biology 221, pp. 258–266. External Links: Document Cited by: §II-A.
  • [37] E. Todorov, T. Erez, and Y. Tassa (2012) MuJoCo: a physics engine for model-based control. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026–5033. Cited by: §II-B.
  • [38] R. Vaughan (2008) Massively multi-robot simulation in stage. Swarm intelligence 2 (2-4), pp. 189–208. Cited by: §II-B.
  • [39] T. H. Waterman (1989) Animal navigation. W. H. Freeman. Cited by: §II-A.
  • [40] K. Witte, S. Gierszewski, and L. Chouinard-Thuly (2017-02) (Guest Editorial) Virtual is the new reality. Current Zoology—Special Issue: Computer Animations and Virtual Reality in Animal Behavior Research 63, pp. 1–4. External Links: Document Cited by: §II-A.