Expecting the Unexpected: Developing Autonomous-System Design Principles for Reacting to Unpredicted Events and Conditions

01/16/2020 · by Assaf Marron, et al. · Weizmann Institute of Science · IBM

When developing autonomous systems, engineers and other stakeholders make great efforts to prepare the system for all foreseeable events and conditions. However, such systems are still bound to encounter events and conditions that were not considered at design time. For reasons of safety, cost, or ethics, it is often highly desirable that these new situations be handled correctly upon first encounter. In this paper we first justify our position that there will always exist unpredicted events and conditions, driven, among other factors, by: new inventions in the real world; the diversity of world-wide system deployments and uses; and the non-negligible probability that multiple seemingly unlikely events, which may be neglected at design time, will not only occur, but occur together. We then argue that, despite this unpredictability, handling such events and conditions is indeed possible. Hence, we offer and exemplify design principles that, when applied in advance, can enable systems to deal, in the future, with unpredicted circumstances. We conclude with a discussion of how this work, and a broader theoretical study of the unexpected, can contribute toward a foundation of engineering principles for developing trustworthy next-generation autonomous systems.

1. Introduction

Can an engineer of an autonomous system, like a self-driving car or an in-hospital delivery-assistant robot, predict all conditions and events that it may encounter?

Clearly the answer is negative, and there are many reasons: (1) The system will encounter new objects (and systems) that will be invented and created only after the system at hand is built and deployed; moreover, the users or owners of the deployed system may not apply relevant updates, if any are available at all. (2) For reasons of cost and time, developers sometimes choose not to consider certain less likely events, relying on other mitigating factors (like spontaneous help from nearby humans) to handle them. (3) Successful autonomous systems will be distributed world-wide, and at least some of them are likely to be deployed in environments that the developers were not familiar with and did not consider. (4) By virtue of their mobility and functional versatility, autonomous systems will operate in rich environments characterized by many variables and actors (both humans and systems); even the most sophisticated testing and verification tools available today and in the near future cannot analyze, during system development, the exponentially many combinations of concurrent or related actions and variable values. And (5) any system is likely to be the target of malicious attacks, physical and cyber, by criminals, hackers, and foreign bodies trying either to acquire some benefit or profit, or to satisfy some internal emotional desire; such attacks are likely to seek out vulnerabilities in the system, which by definition are those aspects that the developers did not prepare for and did not expect.

The question, then, is whether it is realistic to expect autonomous systems of the future to handle such unexpected events. After all, they were unexpected! Here, we claim, the answer is positive. This is not out of the question, since humans do it all the time, sometimes successfully and sometimes less so (malicious attacks being a special case). Humans are even required to react in certain ways: when some unexpected event occurs and a human’s reaction (or inaction) is brought before a judge in a court of law, say because someone was hurt or something important was damaged, the judge often considers what a reasonable person would have done in such a case.

We believe that systems, like humans, will also be required to react properly upon first encounter with unexpected situations. Once next-generation autonomous systems are deployed, the world cannot restrict or constrain the introduction of innovations, and cannot postpone the occurrence of unusual combinations of events, until all autonomous systems are updated with instructions for dealing correctly with these now-known innovations or conditions. And the risks involved may well exceed what society can handle with insurance alone, with regulation of people’s behavior, or by dismissing them as a low-probability cost that is worth the benefit of the new system.

Thus, while developers must always study the system and its target environments extensively in preparation for the future, there remains the question of how to equip a system for the unexpected encounters that will inevitably remain. In (Harel et al., 2019), Harel, Sifakis and Marron argue that a new foundation is needed for establishing engineering practices, solutions and tools for addressing unique problems that emerge with the advent of next-generation autonomous systems. The problem of dealing with unpredicted—or even unpredictable—events and conditions is one such problem.

In Sections 2, 3, 4, and 5 we propose several engineering practices that can help toward successful handling of the always-impending occurrence of unexpected events and conditions. These practices and tips are organized into (admittedly overlapping) groups: reactive high-level behaviors and skills, knowledge that the system must have at its disposal, and considerations emanating from the system being part of a larger humans-and-machines ecosystem. Then, in Section 6, we propose that the topic of dealing with the unexpected deserves a broader and deeper theoretical underpinning for a practical engineering foundation, and we review related research that can support such a theoretical approach.

2. Reactive Behaviors and Skills

2.1. High-level Behavioral Reaction Specification

Throughout life in nature we observe behavioral patterns; much has been written, for example, about fight-or-flight responses. We propose to build into the system concrete behavioral rules that use abstract concepts, like “when sensing danger, get away”, “when under attack, find shelter”, “when you cannot understand what is happening, slow down”. Separate definitions can classify sensory-data indicators as danger, attack, or ‘not understood’, and designate various actuator capabilities of the particular system as implementing, e.g., ‘getting away’, ‘finding shelter’, or ‘slowing down’. For example, for an autonomous vehicle (AV) on a highway, slowing down may indeed mean reducing speed, whereas for a robot or vehicle making deliveries on a factory floor or plant yard (FFAV), ‘slowing down’ may be implemented as an immediate complete stop. For getting away from danger on a city street, an AV may make a U-turn, while an FFAV may simply switch gear to reverse. Rules like these were probably in the minds of the drivers in Figure 1 who first saw the tsunami wave and immediately turned back.
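To make the decoupling concrete, here is a minimal sketch (all names, conditions, and actuator commands are hypothetical, not from the paper) of how abstract rules such as “when sensing danger, get away” could be written once, while each platform supplies its own bindings from abstract actions to concrete actuator commands:

```python
# Sketch: high-level behavioral rules decoupled from platform specifics.
# All class, condition, and action names are illustrative assumptions.
from typing import Callable, Dict

# Abstract rules: map an abstract condition to an abstract action.
ABSTRACT_RULES = {
    "danger": "get_away",
    "under_attack": "find_shelter",
    "not_understood": "slow_down",
}

class Platform:
    """Binds abstract actions to concrete actuator commands for one platform."""
    def __init__(self, name: str, bindings: Dict[str, Callable[[], None]]):
        self.name = name
        self.bindings = bindings

    def react(self, condition: str) -> None:
        action = ABSTRACT_RULES.get(condition)
        if action and action in self.bindings:
            self.bindings[action]()

# Highway AV: "slow down" means reducing speed; "get away" means a U-turn.
highway_av = Platform("AV", {
    "slow_down": lambda: print("AV: reduce speed to 30 km/h"),
    "get_away": lambda: print("AV: plan and execute U-turn"),
    "find_shelter": lambda: print("AV: pull over at nearest safe spot"),
})

# Factory-floor vehicle: "slow down" means an immediate full stop.
ffav = Platform("FFAV", {
    "slow_down": lambda: print("FFAV: immediate full stop"),
    "get_away": lambda: print("FFAV: switch to reverse gear"),
    "find_shelter": lambda: print("FFAV: retreat to charging bay"),
})

# The same abstract classification drives different concrete reactions.
for system in (highway_av, ffav):
    system.react("not_understood")
```

The same abstract rule base thus yields platform-appropriate reactions: the highway AV reduces speed where the FFAV stops completely.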

As part of these concrete yet high-level behaviors, the system may have not only ultimate goals and constraints, which are common concepts in system design, but also responsibilities. For example, an autonomous taxi has goals (e.g., to transport passengers between two points) and requirements (e.g., to comply with various traffic and safety rules). However, if we also program into the system the more general responsibility for passenger safety, additional behaviors may emerge upon certain unexpected events. Such behavior could mimic, say, what happens when a person drives a friend in a foreign city and a major failure or traffic jam occurs: one would expect the driver to help the passenger find an alternative means of transportation rather than abandon them.

2.2. Probing

When encountering an unfamiliar object or situation, a system can actively explore what it means. E.g., assume that an FFAV’s narrow route in the plant yard is blocked by a large unknown object. The FFAV should be able to discover whether the object is just a large empty cardboard box that was blown there by the wind. Standard cameras and basic actuators, like an arm or even a slight and safe push by the FFAV’s body, combined with internet image lookup, should be enough to establish this. The robot can then decide whether to push the box aside, pass around it, or call for help. An important design implication here is that for probing certain unexpected objects and situations, the system may need to be equipped in advance with additional sensors and actuators, or have dynamic access to other such facilities (like security cameras or other robots), beyond the minimum required for completing the mission in a typical environment.
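A hedged sketch of such a probing routine follows; the sensor, lookup, and push helpers are hypothetical stand-ins, not an actual robot API:

```python
# Sketch of a probing routine for an unknown obstacle (all APIs hypothetical).
from dataclasses import dataclass

@dataclass
class ProbeResult:
    recognized: bool
    label: str
    movable: bool

def probe_obstacle(camera_image, image_lookup, gentle_push) -> ProbeResult:
    """Try to identify an unknown object and test whether it can be moved."""
    label, confidence = image_lookup(camera_image)        # e.g., internet image search
    if confidence < 0.6:
        label = "unknown"
    movable = gentle_push(max_force_newtons=5.0)           # safe, low-force contact test
    return ProbeResult(recognized=(label != "unknown"), label=label, movable=movable)

def decide(result: ProbeResult) -> str:
    if result.movable:
        return "push_aside"
    if result.recognized:
        return "plan_route_around"
    return "call_for_help"

# Example with stubbed sensors/actuators:
stub_lookup = lambda img: ("cardboard_box", 0.9)
stub_push = lambda max_force_newtons: True
print(decide(probe_obstacle(None, stub_lookup, stub_push)))  # -> push_aside
```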

2.3. [Self-]Reflection

The system should be able to look at itself, recognize its own state and history, and use this information in its decision making. Unlike a poor fly trying again and again to exit through a glass window, a system should be able to notice that it has been in the current state before, or that a certain action did not yield the desired results, and react accordingly. For example, from the video (Tedrake, 2015) it appears that the MIT robot in the 2015 DARPA competition begins to shake, loses its balance, and falls when it tries to step out of a vehicle while the vehicle is still moving. In addition to events indicating arrival at the destination and that it is time to step out of the car, having the robot sense that it is still moving forward, or that it is beginning to shake, might have helped prevent the damaging fall (and if such reflection already exists in the robot’s design, perhaps deeper reflection might help).
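A minimal sketch of such reflection, assuming a simple discrete encoding of states and actions (both hypothetical), is a log that notices when the same action keeps failing in the same state:

```python
# Sketch: a self-reflection log that detects repeated failed attempts
# (state encoding and thresholds are illustrative assumptions).
from collections import Counter

class ReflectionLog:
    def __init__(self, max_repeats: int = 3):
        self.attempts = Counter()
        self.max_repeats = max_repeats

    def record(self, state: str, action: str, succeeded: bool) -> None:
        if not succeeded:
            self.attempts[(state, action)] += 1

    def should_try_something_else(self, state: str, action: str) -> bool:
        """True if this action has already failed repeatedly in this state."""
        return self.attempts[(state, action)] >= self.max_repeats

log = ReflectionLog()
for _ in range(3):
    log.record(state="at_closed_window", action="fly_forward", succeeded=False)
print(log.should_try_something_else("at_closed_window", "fly_forward"))  # True
```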

2.4. Physical and Logical Look-Ahead

The farther the system can look ahead in space and time, the less unexpected the future will be. A high-mounted 360-degree camera can provide a better view for an AV than a standard driver’s view. Access to plant security cameras can provide an FFAV with an up-to-date, real-time map of all stationary and moving objects relevant to its path and mission. Run-time simulation can provide predictions, e.g., whether the observed stationary car that just turned on its lights will start moving, or whether the system’s battery charge will suffice for completing the particular mission under way.
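As one tiny example of logical look-ahead, the battery question can be posed as a run-time check; the consumption figures below are illustrative assumptions, not measured values:

```python
# Sketch of a logical look-ahead check: estimate remaining energy use
# before committing to a mission leg (all parameters are illustrative).
def battery_sufficient(charge_wh: float,
                       remaining_km: float,
                       wh_per_km: float,
                       reserve_fraction: float = 0.2) -> bool:
    """Predict whether the current charge covers the rest of the mission,
    keeping a safety reserve for unexpected detours."""
    needed = remaining_km * wh_per_km
    return charge_wh * (1.0 - reserve_fraction) >= needed

# 1.2 kWh left, 18 km to go, ~50 Wh/km consumption:
print(battery_sufficient(charge_wh=1200, remaining_km=18, wh_per_km=50))  # True
print(battery_sufficient(charge_wh=1200, remaining_km=25, wh_per_km=50))  # False
```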

2.5. Preparing Alternative Solutions

Autonomous systems should be designed with multiple alternative solutions for completing their tasks. For example, an autonomous taxi system that sends an AV to carry a tourist in a foreign city should be able to negotiate traffic jams and route diversions through alternative routes. Moreover, should the system realize that it cannot complete its mission, it should not abandon the passenger, but should be able to automatically transfer him or her to another mode of transportation—such as taking them to a train station or calling another taxi on their behalf (possibly with the help of a control center).
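One simple way to realize this, sketched below with hypothetical plan names, is to keep an ordered chain of alternatives and fall through it until one succeeds, escalating to a human only at the end:

```python
# Sketch of an ordered chain of alternative solutions (names hypothetical).
from typing import Callable, List, Tuple

def complete_mission(alternatives: List[Tuple[str, Callable[[], bool]]]) -> str:
    """Try each alternative in order until one succeeds."""
    for name, attempt in alternatives:
        if attempt():
            return name
    return "escalate_to_human_operator"

plan = [
    ("drive_primary_route",   lambda: False),  # blocked by a traffic jam
    ("drive_alternate_route", lambda: False),  # also blocked
    ("drop_at_train_station", lambda: True),   # hand the passenger over to rail
    ("call_another_taxi",     lambda: True),
]
print(complete_mission(plan))  # -> drop_at_train_station
```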

3. Knowledge and Skill Acquisition

3.1. Knowledge of the System’s Own Capabilities

The availability of the system’s sensors and actuators should be declared generally and be accessible to multiple components, rather than being invoked only by mission-specific code. In this way, scenarios for providing camera views to other programmed functions or to a human assistant, or for operating the motors on demand to handle unexpected conditions, could emerge from automated planning when facing new situations. Such knowledge can also be drawn upon when the autonomous system at hand is called upon to help other systems, carrying out functions beyond its original purpose.
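A minimal sketch of such a generally declared capability registry (all capability names and tags are hypothetical) could look as follows, letting any planner discover and invoke facilities it did not know about at design time:

```python
# Sketch of a system-wide capability registry: sensors and actuators are
# declared once and discoverable by any component or planner (names illustrative).
from typing import Callable, Dict, List

class CapabilityRegistry:
    def __init__(self):
        self._capabilities: Dict[str, Callable] = {}
        self._tags: Dict[str, List[str]] = {}

    def register(self, name: str, fn: Callable, tags: List[str]) -> None:
        self._capabilities[name] = fn
        self._tags[name] = tags

    def find(self, tag: str) -> List[str]:
        """Let planners or helpers discover capabilities by what they provide."""
        return [n for n, t in self._tags.items() if tag in t]

    def invoke(self, name: str, *args, **kwargs):
        return self._capabilities[name](*args, **kwargs)

registry = CapabilityRegistry()
registry.register("front_camera_snapshot", lambda: "jpeg-bytes", tags=["vision", "share"])
registry.register("drive_motor_stop", lambda: "stopped", tags=["motion", "safety"])

# A planner facing an unexpected condition can discover and use what exists:
print(registry.find("vision"))              # ['front_camera_snapshot']
print(registry.invoke("drive_motor_stop"))  # 'stopped'
```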

3.2. Access to General World Knowledge

For carrying out many of the other functions listed here, the system must have common knowledge about the world: from a physics engine that understands gravity, speed, friction, impact and more, to predictions of object behaviors (e.g., stationary boxes on the floor do not start moving on their own; people, on the other hand, do; the speed of a walking human is at most X, and an electric scooter can reach speed Y within a few seconds). This knowledge can be local to the autonomous system, or it can be accessed on other systems or servers.
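For illustration, a fragment of such world knowledge might be stored as motion priors per object type and queried at run time; the numbers below are rough placeholders, not authoritative values:

```python
# Sketch of a small world-knowledge store with motion priors for object types
# (all numbers are rough illustrative assumptions, not measured values).
MOTION_PRIORS = {
    "cardboard_box": {"self_propelled": False, "max_speed_mps": 0.0},
    "pedestrian":    {"self_propelled": True,  "max_speed_mps": 2.5},
    "e_scooter":     {"self_propelled": True,  "max_speed_mps": 7.0},
}

def may_enter_path(obj_type: str, distance_m: float, time_horizon_s: float) -> bool:
    """Could this object plausibly reach our path within the planning horizon?"""
    prior = MOTION_PRIORS.get(obj_type)
    if prior is None:
        return True  # unknown objects are treated conservatively
    return prior["self_propelled"] and prior["max_speed_mps"] * time_horizon_s >= distance_m

print(may_enter_path("cardboard_box", distance_m=3.0, time_horizon_s=2.0))  # False
print(may_enter_path("e_scooter",     distance_m=3.0, time_horizon_s=2.0))  # True
```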

3.3. Automated Run-time Knowledge Acquisition

Systems can obtain valuable information in real time. One can only wonder whether some of the drivers turning back in Figure 1 did so not because of what they saw, but because they had just heard a tsunami warning siren or announcement. By collecting weather and road-condition information, an AV can thus ‘predict’ that its planned route is impassable, and an FFAV checking the locations of humans in a plant may discover that there is no one to receive the current delivery, and perhaps reschedule the entire trip. This aspect is, of course, closely related to the look-ahead of Section 2.4.

3.4. Learning and Adaptivity

Having systems learn from their own successes, failures, and other aspects of experience is a key theme in modern engineering, and it is crucial also in handling unexpected events. We propose that learning and adaptivity be applied more broadly. E.g., if an AV, or a fleet of AVs, also learns and shares knowledge about the behavior of objects seen in the environment, such as strange agricultural machinery in a field, it is possible that an AV would better handle the very first situation in which one of these objects happens to cross its path.

4. Considering the System as a Social Entity

Humans often deal with unexpected circumstances by relying on various forms of interaction with others. In this section we discuss design considerations and proactive behavior on the part of the system that mimic such social reliance and collaboration, and that can both reduce unexpected circumstances and facilitate handling them when they do occur.

4.1. Understanding the System’s Role

“No man is an island,” wrote John Donne. The same holds for systems, especially autonomous ones. Relating to the earlier discussion of responsibilities, it is important to articulate roles and responsibilities. Who is responsible for getting an executive from an airport to an important meeting in a foreign city, safely and on time? The taxi AV? The taxi company? The executive himself or herself? The company for which the executive works and which arranged the trip? And then, if there is a problem with the AV and one looks for alternatives, whose responsibility is it to provide another vehicle, to arrange another mode of transport, to decide to send someone else to the meeting, or to arrange a teleconference instead? Determining these roles in advance can be very useful in handling the unexpected.

4.2. Mimicking Others

Some of the drivers in Figure 1 probably did not see the tsunami wave themselves and did not hear about it on the radio, but reacted to the other drivers’ sudden reversal and flight. Systems should be able to apply such rules as well, when appropriate. E.g., when an AV sees two lanes going in its direction, one with a long line of queued cars and the other completely empty, it should probably join the queue; it can assume that the empty lane is blocked and should not be used.
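A toy sketch of this heuristic (the thresholds are arbitrary assumptions) illustrates how “mimic the crowd” can override the usual shortest-queue choice:

```python
# Sketch of a simple "mimic others" rule: prefer the lane where traffic queues,
# assuming the empty one is blocked (thresholds are illustrative).
def choose_lane(queued_cars_per_lane: dict) -> str:
    """If one lane has a long queue and a parallel lane is empty,
    join the queue rather than the suspiciously empty lane."""
    busiest = max(queued_cars_per_lane, key=queued_cars_per_lane.get)
    emptiest = min(queued_cars_per_lane, key=queued_cars_per_lane.get)
    if queued_cars_per_lane[busiest] >= 10 and queued_cars_per_lane[emptiest] == 0:
        return busiest   # mimic the other drivers' collective judgment
    return emptiest      # otherwise, the usual shortest-queue choice

print(choose_lane({"left": 14, "right": 0}))  # -> left
print(choose_lane({"left": 3,  "right": 1}))  # -> right
```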

4.3. Asking for Help and Support

Prompting a human for advice or action at important decision points, as well as for carrying out tasks that the system cannot do itself, is a standard and common practice in system design, so it is only natural that autonomous systems facing unexpected circumstances should do the same. However, this latter context requires special attention and perhaps a different design. E.g., who should be contacted: a human or a machine? Should the contacted agent be in a remote central control function or physically close to the system? What skills and capabilities must the helping agent (human or machine) have for making the decision or taking action? What information should the autonomous system communicate? What interfaces are needed at the autonomous system to enable such help? And so on.

4.4. Enabling Passive Acceptance of Help

An autonomous system may not always be aware that it is in need of help, but it should nevertheless facilitate such help. A passer-by should be able to communicate to an AV that there is some alarming danger ahead; or, if the system is stuck in a place with no outside communication, it should be able to convey (through a computer display, local data communications, or even a fixed printed sticker) to humans and mobile devices enough information about its identity (and, if possible, its current state) so that other agents can act on their own initiative, or transmit the information onward to the owners of the system from another location.

4.5. Communicating Overall Plans and Present Intentions

Turn signals and brake lights are simple facilities in ordinary vehicles that indicate the driver’s intentions and thereby prevent many accidents and reduce overall stress. Autonomous systems can communicate their intentions much more broadly, to great effect. An AV can communicate its planned route, helping with road-congestion prediction; it can also indicate from far away that it plans to stop at a crosswalk, allowing pedestrians to step forward sooner. An industrial robot can communicate its intended steps over the next few seconds or minutes, allowing humans to work more safely close to it. Etc.
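As a sketch, such an intention broadcast could be a small, regularly published message; the schema below is a hypothetical example, not a proposed standard:

```python
# Sketch of a broadcast message describing current plan and near-term intention
# (message schema is an illustrative assumption, not a standard).
import json
from dataclasses import dataclass, asdict
from typing import List

@dataclass
class IntentionMessage:
    system_id: str
    planned_route: List[str]   # waypoints or road-segment identifiers
    next_action: str           # e.g., "stop_at_crosswalk"
    next_action_eta_s: float   # seconds until the action is taken

msg = IntentionMessage(
    system_id="AV-0042",
    planned_route=["segment-12", "segment-17", "segment-31"],
    next_action="stop_at_crosswalk",
    next_action_eta_s=6.5,
)
print(json.dumps(asdict(msg)))  # broadcast over V2X, a local network, or a display
```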

4.6. Recording and Sharing Static and Dynamic Knowledge

Providing information can dramatically enhance trust in the system and/or in the organization controlling it, and can contribute to mitigating and handling unexpected conditions. One aspect is sharing static information about how the system is built and how it is programmed to behave. Another is collecting and sharing dynamic information, both about the system’s own behavior and about its environment. The latter can be passive, as in sharing security-camera recordings, or proactive, as in reporting current conditions, changes relative to known maps, hazards, or even humans and other devices that the system has encountered and that may need help.

4.7. Negotiations

Dealing with the unexpected is often difficult because of constraints and specific instructions imposed at design time. However, many constraints in life are bendable. Human drivers often communicate with others to resolve issues like being in the wrong lane; people standing in a queue can negotiate to let someone in a hurry proceed; and multiple people in a room can find ways to make everyone comfortable with the temperature by combining the opening or closing of windows, A/C settings, wearing jackets, and/or rearranging the seating. When encountering unexpected events and conditions, autonomous systems should be able to participate in such negotiations with a variety of entities in their environment: communicating their desires and what they can and cannot change, understanding the (changing) needs of others, and adjusting their goals and plans.

5. Additional Development Methods and Practices

In addition to straightforward studies of the system and its environment, we have found the following practices useful in discovering imminent events and conditions: (1) brainstorming with colleagues and domain experts; (2) considering the system’s operation in related but more challenging environments, e.g., instead of the intended quiet modern suburban neighborhood, consider the given vehicle in a busy, narrow alley of a medieval town packed with tourists, or, instead of the targeted hospital environment, consider a factory floor or plant yard, and then reapply relevant aspects of the new conditions back to the original environment; (3) challenging developers and colleagues to create failure situations for the system; (4) studying famous failures in related systems and environments; (5) building models of the environment and the system, running simulations, and examining the simulation runs for emergent properties that can suggest vulnerabilities; then, extending this rather standard analysis by noting aspects that are particularly synthetic and oversimplified in the simulation setup, and exploring these aspects for unexpected events and conditions. E.g., if one notices very low variability in pedestrian behaviors in an AV simulation, one can ask what the system would do if it encountered a performing street dancer or juggler, or a drunken brawl outside a bar (regardless of whether the current simulator can represent these or not).

6. Toward an Engineering Foundation and “Science of the Unexpected”

We believe that preparing systems to deal with the unexpected should be one of the topics covered by the Autonomics engineering foundation discussed in (Harel et al., 2019). Going beyond the specific techniques listed in the preceding sections, the foundation should cover the topic with systematic engineering methodologies for all phases of development, reusable specification libraries, domain-specific ontologies, simulation, testing and verification tools, and more.

In addition, we believe that a theoretical apparatus for the domain will be needed. It should facilitate a rigorous and broad study of the unexpected, ensure fresh examination and evolution of solutions as systems and the real world evolve, and enable critical assessment of the quality (e.g., completeness, ambiguity) of all associated practices and artifacts.

To evolve such theoretical instrumentation, different perspectives, models or methodologies may be adopted. Below we briefly discuss two such perspectives, and acknowledge that there is still much more to be investigated.

One possible perspective is to incorporate concepts from digital-twin approaches (El Saddik, 2018) or, more fundamentally, to use a representational model as part of the deep-structure view of an Information System (IS) proposed by Wand and Weber (Wand and Weber, 1995). According to this view, an IS is a digital means for representing some domain or some part of the real world. For example, a camera complemented by an automatic object-detection algorithm and data persistence should faithfully represent the elements anticipated to exist in the real-world environment in which the system is deployed.

This view has also given rise to the adoption of various ontological theories, such as Bunge’s (Bunge, 1978), for determining the capacity of any IS to faithfully represent its real-world environment. In this way, an ontological taxonomy can determine, for example, the validity of any object instantiated in a system, i.e., whether it has meaning or semantics in the domain, regardless of its concrete classification in the IS. Each such object can instead belong to some corresponding ontological concept, such as a thing or an event in the real world. This approach can assist in formal definitions of the unexpected, such as:

Definition 6.1 (Unexpected). We say that an object or an event occurrence that is associated with a particular concept in a given ontology is unexpected in the context of a given system if either (a) the ontological concept (i.e., class) does not have a representation in the system, or (b) the system cannot effectively associate the object with the (existing) system representation of the ontological concept, dynamically, at run time.

According to the ontological view and the above definition, we can now also conclude that any IS, being a human-made artifact, is always susceptible to an ontological deficit, whereby it may create or encounter objects that lack a corresponding interpretation in the system. Recognizing this deficit in real time, and perhaps transferring the object for handling by others, can be part of a design for dealing with the unexpected.
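A hedged sketch of detecting, at run time, the two cases of Definition 6.1 follows; the ontology contents and the classifier are illustrative assumptions:

```python
# Sketch of run-time detection of the two cases in Definition 6.1
# (ontology contents and classifier are illustrative assumptions).
from typing import Callable, Optional, Set

ONTOLOGY: Set[str] = {"vehicle", "pedestrian", "traffic_sign"}  # concepts the system represents

def is_unexpected(observation,
                  classify: Callable[[object], Optional[str]]) -> bool:
    """True if the observation falls outside the system's representation:
    (a) its concept is not modeled at all, or
    (b) the system cannot associate it with any modeled concept at run time."""
    concept = classify(observation)     # may return None when association fails
    if concept is None:
        return True                     # case (b): cannot associate
    return concept not in ONTOLOGY      # case (a): concept not represented

# Stub classifier: recognizes pedestrians, labels drones, fails on blurry input.
def stub_classify(obs):
    return {"person": "pedestrian", "drone": "uav", "blur": None}.get(obs)

print(is_unexpected("person", stub_classify))  # False: represented and associated
print(is_unexpected("drone",  stub_classify))  # True, case (a): 'uav' not in ontology
print(is_unexpected("blur",   stub_classify))  # True, case (b): association failed
```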

Different systems may also differ in how they implement the classification mechanism. For example, Parsons and Wand (Parsons and Wand, 2000) have criticized the assumption of inherent classification, under which an object can be recorded only if it is a member of some class. The same implementation also mandates specifying each class via the collection of its objects (i.e., classification-by-containment). An example of a mechanism that conforms to this approach is the traditional relational database (Codd, 1970). In object-oriented programming, multiple inheritance and interfaces address some of these issues, but the underlying classification remains.

A system that does not conform to the assumption of inherent classification enables the recording of objects independent of their class membership(s). Further, in (Parsons and Wand, 2000) the authors suggest one such alternative, a classification-by-property approach. This approach entails articulating objects and classes independently of each other, and specifying both via their properties. For example, the class dangerous-species may be defined as any thing that possesses any one of the properties {eye-size = big, teeth-size = big, crawling-ability = true}. Similarly, the object snake1 may be defined as a thing that possesses the properties {location = location1, crawling-ability = true}.

Decoupling the specifications of classes and objects is essential for dealing with the existence of the unknown, i.e., objects in the domain that have no meaning in the system. The importance of a specification such as the one above is not only in allowing the ad-hoc determination of snake1 as being an instance of the class dangerous-species; it is also in allowing an independent representation of an object like snake1 regardless of whether its class is known. An unexpected object produced in, or encountered by, a system that conforms to the assumption of inherent classification is at risk of being either incorrectly classified or dropped altogether; both may lead to undesired outcomes.
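A minimal sketch of classification-by-property, directly instantiating the dangerous-species and snake1 example above (the matching rule is our simplified reading of the approach):

```python
# Sketch of classification-by-property: classes and objects are specified
# independently, each via properties (example adapted from the text).
def matches(obj_props: dict, class_props: dict) -> bool:
    """An object belongs to a class if it possesses any one of the class's properties."""
    return any(obj_props.get(k) == v for k, v in class_props.items())

# Class defined by properties, with no enumeration of its members:
dangerous_species = {"eye_size": "big", "teeth_size": "big", "crawling_ability": True}

# Object recorded independently of any class membership:
snake1 = {"location": "location1", "crawling_ability": True}

print(matches(snake1, dangerous_species))  # True: ad-hoc classification at run time

# An object with no matching class is still recorded, rather than dropped:
unknown_thing = {"location": "location2", "color": "grey"}
print(matches(unknown_thing, dangerous_species))  # False, but the object persists
```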

Another perspective is a sociotechnical one, where a human encounters unexpected events or conditions and a system augments the person’s capabilities in dealing with the situation (Orlikowski and Iacono, 2001) while mitigating biases. For example, a security officer in a large store may be monitoring the store via multiple screens. According to the representativeness heuristic introduced by Tversky and Kahneman (Tversky and Kahneman, 1974), the officer, as a human being, may focus on certain kinds of people based, e.g., on appearance, as opposed to focusing on actual behavior.

Finally, studies in neurology, psychology, and philosophy have discussed at length many issues of reconciling reality, perception, and expectations (see, e.g., (Press et al., 2019) and references therein). Advancing insights about the human mind may well feed into our engineering goals.

Acknowledgements.
This work has been supported in part by a grant from the Israel Science Foundation. The authors thank (names to be added in the camera-ready version) for valuable discussions and suggestions.

References