Joint Inference of States, Robot Knowledge, and Human (False-)Beliefs

04/25/2020 · Tao Yuan et al.

Aiming to understand how human (false-)belief, a core socio-cognitive ability, affects human interactions with robots, this paper proposes a graphical model that unifies the representation of object states, robot knowledge, and human (false-)beliefs. Specifically, a parse graph (pg) is learned from single-view spatiotemporal parsing by aggregating various object states over time; the learned representation is accumulated as the robot's knowledge. An inference algorithm is derived to fuse the individual pg from every robot across multiple views into a joint pg, which affords more effective reasoning and inference and overcomes errors originating from a single view. In the experiments, through joint inference over pgs, the system correctly recognizes human (false-)beliefs in various settings and achieves better cross-view accuracy on a challenging small-object tracking dataset.


I Introduction

The seminal Sally-Anne study [1] has spawned a vast literature in developmental psychology on Theory of Mind (ToM), in particular on humans' socio-cognitive ability to understand false-belief: the understanding that another person's belief about the world may differ from reality. A cartoon version of the Sally-Anne test is shown on the left of Fig. 1: Sally puts her marble in the box and leaves. While Sally is out, Anne moves the marble from the box to a basket. The test asks a human participant where Sally will look for her marble when she returns. According to Sally's false-belief, the marble is still inside the box, even though it is actually inside the basket. To answer this question correctly, an agent must understand and disentangle the object state (the observation from the current frame), the (accumulated) knowledge, the beliefs of other agents, the ground truth of the world, and, importantly, the concept of false-belief.

Prior studies suggest that children begin to develop the capability to understand false-belief around the age of four [2]. The abilities to ascribe mental beliefs to other minds, to differentiate belief from physical reality, and even to recognize false-belief and perform psychological reasoning are a significant milestone in the acquisition of ToM [3, 4]. Such evidence, which has emerged from developmental psychology over the past few decades, calls for integrating these socio-cognitive aspects into modern social robots [5].

Fig. 1: Left: Illustration of the classic Sally-Anne test [1]. Middle and Right: Two different types of false-belief scenarios in our dataset: belief test and helping test.

In fact, false-belief is not rare in daily life. Two examples are depicted in the middle and on the right of Fig. 1: (i) Where does Bob think his cup1 is after Charlie puts cup2 (visually identical to cup1) on the table while Dave takes cup1 away? (ii) Which milk box should Alice give to Bob if she wants to help: the one closer to Bob but empty, or the one farther from Bob but full? Although such false-belief tasks are prime examples of social and cognitive intelligence, current state-of-the-art intelligent systems still struggle to acquire this capability in the wild with noisy visual input (see Related Work for discussion).

One fundamental challenge is the lack of a proper representation for modeling false-belief from visual input; such a representation has to handle heterogeneous information about a system's current states, its accumulated knowledge, the agents' beliefs, and the ground truth of the world. Without a unified representation, information across these domains cannot be easily interpreted, and cross-domain reasoning about events is infeasible.

Largely due to this difficulty, prior work that takes noisy sensory input can only solve sub-problems of understanding false-belief. For instance, sensor fusion techniques are mainly used to obtain better state estimation by filtering the measurements from multiple sensors [6]. Similarly, Multiple View Tracking (MVT) in computer vision is designed to combine observations across camera views to better track an object. Visual cognitive reasoning (e.g., predicting human intention and attention [7, 8, 9, 10]) only models human mental states. These three lines of work are all crucial ingredients but have been developed independently; a unified cross-domain representation is still largely missing.

To endow a robot system with the ability to understand the concept of false-belief from noisy visual inputs, this paper proposes a graphical model represented by a parse graph (pg) [11] that serves as the unified representation of a robot's knowledge structure, the fused knowledge across all robots, and the (false-)beliefs of human agents. A pg is learned from the spatiotemporal transitions of humans and objects in the scene perceived by a robot. A joint pg is induced by merging and fusing the individual pgs from each robot to overcome errors originating from a single view. In particular, our system enables the following three capabilities with increasing depth of cognition:

  1. Tracking small objects with occlusions across different views. Human-made objects in indoor environments (e.g., cups) are oftentimes small and similar in appearance. Tracking such objects is challenging due to occlusions caused by frequent human interactions. The proposed method addresses this challenging multi-view multi-object tracking problem by properly maintaining cross-view object states in the unified representation.

  2. Inferring human beliefs. The state of an object normally does not change unless a human interacts with it; this observation shares a similar spirit with the notion of object permanence in human cognition [12]. By identifying the interactions between humans and objects, our system also supports high-level cognitive capabilities, e.g., knowing which object each person has interacted with and whether a person knows that the state of an object has changed.

  3. Assisting agents by recognizing false-belief. Given the above object tracking and cognitive reasoning about human beliefs, the proposed algorithm can further infer whether and why a person holds a false-belief, thereby better assisting that person in a specific context.

I-A Related Work

Robot ToM, aiming at understanding human beliefs and intents, has received increasing research attention in human-robot interaction and collaboration [13, 14]. Several false-belief tasks akin to the classic Sally-Anne test have been designed. For instance, Warnier et al. [15] introduced a belief management algorithm, and the resulting reasoning capability subsequently enabled a robot to pass the Sally-Anne test successfully [16]. More sophisticated human-robot collaboration is achieved by maintaining a human partner's mental state [17]. More formally, Dynamic Epistemic Logic has been introduced to represent and reason about belief and false-belief [18, 19]. These successes are, however, limited to symbolic belief representations requiring handcrafted variables and structures, which makes logic-based reasoning approaches brittle in practice when handling noise and errors. To address this deficiency, this paper utilizes a unified representation by pg, a probabilistic graphical model that has been successfully applied to various robotics tasks, e.g., [20, 21, 22]; it accumulates observations over time to form a knowledge graph and robustly handles noisy visual input.

Multi-view Visual Analysis is widely applied to 3D reconstruction [23], object detection [24, 25], cross-view tracking [26, 27], and joint parsing [28]. Built on top of these modules, Multiple Object Tracking (MOT) usually adopts tracking-by-detection techniques [29, 30, 31]. This line of work primarily focuses on combining different camera views to obtain more comprehensive tracking, but it lacks an understanding of human (false-)belief.

Visual Cognitive Reasoning is an emerging field in computer vision. Related work includes recovering incomplete trajectories [32], learning utility and affordance [33], and inferring human intention and attention [9, 10]. As for understanding (false-)belief, despite many psychological experiments and theoretical analyses [34, 35, 36, 37], very few attempts have been made to address (false-)belief from visual input, and prior work usually requires handcrafted constraints for specific problems. In contrast, this paper utilizes a unified representation across domains with heterogeneous information to model human mental states.

I-B Contribution

This paper makes three contributions:

  1. We adopt a unified graphical model, pg, to represent and maintain heterogeneous knowledge about object states, robot knowledge, and human beliefs.

  2. On top of the unified representation, we propose an inference algorithm that merges individual pgs from different domains, across time and views, into a joint pg, supporting human belief inference from multiple views to overcome the noise and errors originating from a single view.

  3. With the inferred pgs, our system keeps track of the state and location of each object, infers human beliefs, and further discovers false-beliefs to better assist humans.

I-C Overview

The remainder of the paper is organized as follows. Sections II and III describe the representation and the detailed probabilistic formulation, respectively. We demonstrate the efficacy of the proposed method in Section IV and conclude the paper with discussions in Section V.

Fig. 2: System overview. The robot pgs are obtained from each individual robot's view; the joint pg is obtained by fusing all robots' pgs; the belief pgs are inferred from the joint pg. All pgs are optimized simultaneously under the proposed joint parsing framework to enable queries about object states and human (false-)beliefs.

II Representation

In this work, we use the parse graph (pg), a unified graphical model [11], to represent (i) the location of each agent and object, (ii) the interactions between agents and objects, (iii) the beliefs of agents, and (iv) the attributes and states of objects; see Fig. 2 for an example. Specifically, three different types of pgs are utilized:

  • Robot pg, shown as blue circles, maintains the knowledge structure of an individual robot, extracted from its visual observation (an image). It also contains attributes that are grounded to the observed agents and objects.

  • Belief pg, shown as red diamonds, represents the human knowledge inferred by each robot. Each robot maintains a belief parse graph for every agent it has observed.

  • Joint pg fuses all the information and views across a set of distributed robots.

Notations and Definitions

The input of our system can be represented as synchronized video sequences of length $T$ captured from $K$ robots. Formally, the scene at time $t$ is expressed as

$S_t = \langle \mathcal{O}_t, \mathcal{A}_t \rangle, \quad \mathcal{O}_t = \{o_t^i\}_{i=1}^{M}, \quad \mathcal{A}_t = \{a_t^j\}_{j=1}^{N},$   (1)

where $\mathcal{O}_t$ and $\mathcal{A}_t$ denote the set of all tracked objects ($M$ objects in total) and the set of all tracked agents ($N$ agents in total) at time $t$, respectively.

Object $o_t^i$ is represented by a tuple of bounding-box location $b_t^i$, appearance feature $f_t^i$, state $s_t^i$, attributes $c_t^i$, and a holding index $h_t^i$:

$o_t^i = \langle b_t^i, f_t^i, s_t^i, c_t^i, h_t^i \rangle,$   (2)

where $h_t^i$ is an index function: $h_t^i = j$ indicates that the object is held by agent $a^j$ at time $t$, and $h_t^i = 0$ means it is not held by any agent at time $t$.

Agent $a_t^j$ is represented by its body key-point positions $p_t^j$ and appearance feature $f_t^j$:

$a_t^j = \langle p_t^j, f_t^j \rangle.$   (3)

Robot Parse Graph of the $k$-th robot is formally expressed as

$pg_t^k = \{\, o_t^i, a_t^j \,:\, o_t^i, a_t^j \in \Omega_t^k \,\},$   (4)

where $\Omega_t^k$ is the area that the $k$-th robot can observe at time $t$.

Belief Parse Graph is formally expressed as

$pg_t^{k,j} = \{\, o_{\tau}^i \,:\, o_{\tau}^i \in pg_{\tau}^{k} \,\}, \quad \tau = \tau_t^{k,j},$   (5)

where $pg_t^{k,j}$ represents the inferred belief of agent $a^j$ under robot $k$'s view, and $\tau_t^{k,j}$ is the last time that the robot observed human $a^j$. We assume that the agent only keeps in mind the objects s/he last observed in this area, which satisfies the Principle of Inertia: an agent's belief is preserved over time unless the agent receives information to the contrary.

Joint Parse Graph keeps track of all the information across the set of distributed robots, formally expressed as

$pg_t = \textstyle\bigcup_{k=1}^{K} pg_t^k.$   (6)
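To make the notation concrete, the following minimal data-structure sketch mirrors the entities and parse graphs defined above; all class and field names are illustrative choices, not the authors' implementation.

```python
# Minimal data-structure sketch of the entities and parse graphs; names are
# illustrative, not the authors' code.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class ObjectNode:
    box: Tuple[float, float, float, float]   # bounding-box location b
    feature: List[float]                     # appearance feature f
    state: str                               # object state s (e.g., "on_table")
    attributes: Dict[str, str]               # e.g., {"color": "red", "material": "paper"}
    held_by: int = 0                         # index function h: 0 = not held, j = held by agent j

@dataclass
class AgentNode:
    keypoints: List[Tuple[float, float]]     # body key-point positions p
    feature: List[float]                     # appearance feature f

@dataclass
class ParseGraph:
    """Robot pg: the entities visible in one robot's view at time t."""
    time: int
    objects: Dict[int, ObjectNode] = field(default_factory=dict)
    agents: Dict[int, AgentNode] = field(default_factory=dict)

# A belief pg for agent j under robot k's view keeps the objects as they were the
# last time robot k saw agent j (Principle of Inertia); a joint pg fuses the robot
# pgs from all views at the same time step.
```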

Objective

The objective of the system is to jointly infer all the parse graphs so that it can (i) track all the agents and objects across scenes at any time by fusing the information collected by a distributed system, and (ii) infer human (false-)beliefs to provide assistance.

III Probabilistic Formulation

We formulate the joint parsing problem as a maximum a posteriori (MAP) inference problem

$pg^* = \arg\max_{pg}\, p(pg \mid I) = \arg\max_{pg}\, p(pg)\, p(I \mid pg),$   (7)

where $p(pg)$ is the prior, $p(I \mid pg)$ is the likelihood, and $I$ denotes the multi-view visual observations.

III-A Prior

The prior term models the compatibility of the robot pgs with the joint pg, and the compatibility of the joint pg over time. Formally, we decompose the prior as

$p(pg) = \prod_{t} p(pg_t \mid pg_{t-1}) \prod_{t}\prod_{k} p(pg_t^k \mid pg_t),$   (8)

where the first term $p(pg_t \mid pg_{t-1})$ is the transition probability of the joint pg over time, further decomposed as

$p(pg_t \mid pg_{t-1}) \propto \exp\{-\mathcal{E}(pg_t \mid pg_{t-1})\},$   (9)
$\mathcal{E}(pg_t \mid pg_{t-1}) = E_{mot} + E_{st}.$   (10)

The second term $p(pg_t^k \mid pg_t)$ models the compatibility of the individual pgs with the joint pg. Its energy can be decomposed into three energy terms

$\mathcal{E}(pg_t^k \mid pg_t) = E_{app} + E_{spa} + E_{att}.$   (11)

Below, we detail the energy terms in Eqs. 10 and 11.

Motion Consistency

The term $E_{mot}$ measures the motion consistency of objects and agents over time, defined as

$E_{mot} = \sum_{i} \mathbb{1}\big[\, d(b_t^i, b_{t-1}^i) > \tau_v \,\big],$   (12)

where $d(\cdot,\cdot)$ is the distance between two bounding boxes or human poses, $\tau_v$ is the speed threshold, and $\mathbb{1}[\cdot]$ is the indicator function. If an object is held by an agent, we use the agent's location to calculate the distance for the object.
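As a concrete illustration, the sketch below computes an indicator-style motion-consistency energy over tracked entity centers between consecutive frames; the dictionary layout and the exact form of the distance and threshold are assumptions consistent with the description above.

```python
import numpy as np

def motion_energy(prev_centers, curr_centers, speed_threshold):
    """Count tracked entities whose displacement between consecutive frames
    exceeds the speed threshold (a sketch of the indicator-based energy).
    Dictionary keys are entity ids; the threshold value is an assumption."""
    energy = 0.0
    for entity_id, curr in curr_centers.items():
        prev = prev_centers.get(entity_id)
        if prev is not None:
            displacement = np.linalg.norm(np.asarray(curr) - np.asarray(prev))
            energy += float(displacement > speed_threshold)
    return energy
```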

State Transition Consistency

The term $E_{st}$ is the state-transition energy, defined as

$E_{st} = -\sum_{i} \log p(s_t^i \mid s_{t-1}^i),$   (13)

where the state-transition probability $p(s_t^i \mid s_{t-1}^i)$ is learned from the training data.
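The following sketch shows one straightforward way to estimate state-transition probabilities from annotated training sequences and to turn them into an energy; the counting-based estimator and the smoothing constant are assumptions, since the paper does not detail how the probabilities are learned.

```python
from collections import defaultdict
import math

def learn_transition_probabilities(state_sequences):
    """Estimate p(s_t | s_{t-1}) by counting consecutive state pairs in the
    annotated training sequences (a sketch; smoothing choices are assumptions)."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in state_sequences:
        for prev, curr in zip(seq, seq[1:]):
            counts[prev][curr] += 1
    return {prev: {curr: n / sum(nexts.values()) for curr, n in nexts.items()}
            for prev, nexts in counts.items()}

def state_transition_energy(prev_state, curr_state, transitions, eps=1e-6):
    """Negative log-probability of the observed state transition."""
    return -math.log(transitions.get(prev_state, {}).get(curr_state, 0.0) + eps)
```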

(a) Put down a cup
(b) Pick up a cup
(c) Move a cup
(d) Carry a cup to another room
(e) Swap two cups
Fig. 3: Examples of human-object interactions in the cross-view subset of the proposed dataset. Each scenario contains at least one kind of false-belief test or helping test recorded with four robot camera views.

Appearance Consistency

The term $E_{app}$ measures appearance consistency. In the robot pgs, the appearance feature vector $f$ is extracted by a deep person re-identification network [38]. In the joint pg, the feature vector of an entity is calculated by mean pooling the features of the same entity from all robot pgs:

$f_t^i = \frac{1}{K} \sum_{k=1}^{K} f_t^{i,k}.$   (14)
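A minimal sketch of this mean-pooling step, assuming each view contributes one re-identification feature vector per entity:

```python
import numpy as np

def fuse_appearance_features(features_per_view):
    """Joint-pg appearance feature of one entity: the mean of its re-id feature
    vectors extracted from every robot view (Eq. 14)."""
    return np.mean(np.stack(features_per_view, axis=0), axis=0)
```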

Spatial Consistency

Each object and agent in a robot's 2D view should also have a corresponding 3D location in the real-world coordinate system, and this correspondence should remain consistent when points are projected from the robot's image plane back to the real-world coordinates. Thus, spatial consistency is defined as

$E_{spa} = \sum_{i} \big\| X_t^i - \Phi_k(b_t^{i,k}) \big\|,$   (15)

where $X_t^i$ is the 3D position in the real-world coordinate system, and $\Phi_k$ is the transformation function that projects points from the robot's 2D view to the 3D real-world coordinate system.
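The sketch below illustrates one way to realize the projection and the resulting spatial energy, using a per-camera ground-plane homography as the transformation function; both the homography-based projection and the plain Euclidean distance are assumptions, since the paper's calibration details are not given here.

```python
import numpy as np

def project_to_world(point_2d, homography):
    """Map an image point to ground-plane world coordinates with a per-camera
    homography (one possible choice for the projection function)."""
    p = homography @ np.array([point_2d[0], point_2d[1], 1.0])
    return p[:2] / p[2]

def spatial_energy(point_2d, homography, world_position):
    """Distance between the back-projected observation and the entity's
    position stored in the joint pg."""
    projected = project_to_world(point_2d, homography)
    return float(np.linalg.norm(projected - np.asarray(world_position)[:2]))
```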

Attribute Consistency

Attributes of each entity should remain the same across time and viewpoints. This attribute consistency is defined by the term

$E_{att} = \sum_{i} \sum_{k} \mathbb{1}\big[\, c_t^{i,k} \neq c_t^i \,\big].$   (16)

III-B Likelihood

The likelihood term models how well each robot can ground the knowledge in its pg to the visual data it captures. Formally, the likelihood is defined as

$p(I \mid pg) = \prod_{t}\prod_{k} p(I_t^k \mid pg_t^k).$   (17)

The energy of the term $p(I_t^k \mid pg_t^k)$ can be further decomposed as

$p(I_t^k \mid pg_t^k) \propto \exp\{-\mathcal{E}(I_t^k \mid pg_t^k)\},$   (18)
$\mathcal{E}(I_t^k \mid pg_t^k) = E_{det} + E_{cls},$   (19)

where $E_{det}$ can be calculated from the scores of object detection and human pose estimation, and $E_{cls}$ can be obtained from the object attribute classification scores.

III-C Inference

Given the above probabilistic formulation, we infer the best pgs by a MAP estimate. This is solved in two steps: (i) each robot individually processes its visual input, and the outputs (e.g., object detections and human pose estimations) are aggregated as proposals for the second step; (ii) given the proposals, the MAP estimate is transformed into an assignment problem, solvable with the Kuhn-Munkres algorithm [39, 40] in polynomial time.
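As an illustration of step (ii), the sketch below solves the proposal-to-entity assignment with SciPy's implementation of the Kuhn-Munkres (Hungarian) algorithm; how the individual energy terms are weighted into the cost matrix is not specified here and is left as an assumption.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_proposals(cost_matrix):
    """Solve the proposal-to-entity assignment with the Kuhn-Munkres algorithm.
    Entry (i, j) would combine the motion, appearance, spatial, and attribute
    energies of pairing proposal i with joint-pg entity j (the weighting of
    those energies is an assumption)."""
    rows, cols = linear_sum_assignment(np.asarray(cost_matrix))
    return list(zip(rows.tolist(), cols.tolist()))

# Toy example: three detections matched to three tracked entities.
print(match_proposals([[0.2, 0.9, 0.8],
                       [0.7, 0.1, 0.9],
                       [0.6, 0.8, 0.3]]))  # -> [(0, 0), (1, 1), (2, 2)]
```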

Based on Eq. 5, each robot can generate the belief parse graph for every agent it observes once its robot parse graphs have been obtained.

IV Experiment

We evaluate the proposed method in two setups: cross-view object tracking and human (false-)belief understanding. The first experiment evaluates the accuracy of object localization using the proposed inference algorithm, focusing on the robot parse graphs and the joint parse graph. The second experiment evaluates the inference of the belief parse graphs, i.e., human beliefs regarding object states (e.g., locations), in both single-view and multi-view settings.

IV-A Dataset

The dataset includes two subsets, a multi-view subset and a single-view subset. Ground-truth tracks of objects and agents, together with object states and attributes, are annotated for evaluation purposes.

  • The single-view subset includes 5 different false-belief scenarios. Each scenario contains at least one kind of false-belief test or helping test. In this subset, objects are not limited to cups.

  • The multi-view subset consists of 8 scenes, each shot with 4 robot camera views. Each scenario contains at least one kind of false-belief test. The objects in each scene are, however, limited to cups: 12-16 different cups made of 3 different materials (plastic, paper, and ceramic) and 4 colors (red, blue, white, and black). In each scene, three agents interact with the cups by performing the actions depicted in Fig. 3.

IV-B Implementation Details

Below, we detail the implementations of the system.

  • Object detection: we use a RetinaNet model [41] pre-trained on the MS COCO dataset [42]. We keep all bounding boxes with a score higher than a threshold; these serve as the object detection proposals (see the sketch after this list).

  • Human pose estimation: we apply AlphaPose [43].

  • Object attribute classification: a VGG16 network [44] was trained to classify the color and the material of the objects.

  • Appearance feature: a deep person re-identification model [38] was fine-tuned on the training set.

  • In the single-view setting, due to the lack of multiple views, we locate the object an agent plans to interact with by simply finding the object closest to the direction the agent points at, according to the key points on the arm.
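For the detection step, the sketch below uses an off-the-shelf COCO-pretrained RetinaNet from torchvision as a stand-in for the paper's detector and filters its outputs by a score threshold; the 0.5 threshold is an assumption, not the value used in the paper.

```python
# Proposal-generation sketch: torchvision's COCO-pretrained RetinaNet stands in
# for the paper's detector; the score threshold below is an assumption.
import torch
import torchvision

detector = torchvision.models.detection.retinanet_resnet50_fpn(pretrained=True).eval()

def detection_proposals(image_tensor, score_threshold=0.5):
    """Keep bounding boxes whose detection score exceeds the threshold; these
    boxes become the object proposals for the joint-parsing assignment step."""
    with torch.no_grad():
        output = detector([image_tensor])[0]  # dict with 'boxes', 'scores', 'labels'
    keep = output["scores"] > score_threshold
    return output["boxes"][keep], output["labels"][keep], output["scores"][keep]
```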


Fig. 4: A sample result in Experiment 1 (synchronized views of Room 1 and Room 2). The two rows show examples of synchronized visual inputs from the two rooms; two red cups, marked by green and yellow bounding boxes, are highlighted. First, an agent puts a red cup in Room 1; later, another red cup is placed in Room 1, and the first cup is taken away and put in Room 2. Our system robustly performs this complex multi-room, multi-view tracking and reasons about the agents' beliefs: at the end, the tracking system knows that the cup that first appeared in Room 1 is in fact in Room 2, while inferring that the human still thinks the cup is in Room 1.
# interactions             0      1      2      3      Overall
Parsing w/o humans acc.    0.98   0.82   0.78   0.75   0.82
Joint parsing acc.         0.98   0.86   0.85   0.82   0.88
TABLE I: Accuracy of cross-view object tracking

IV-C Experiment 1: Cross-view Object Localization

To test the overall cross-view tracking performance, 2,000 queries are randomly sampled from the ground-truth tracks. Each query can be formally described as

$q = \langle k, b, t \rangle \rightarrow \langle k', b', t' \rangle,$   (20)

where the tuple $\langle k, b, t \rangle$ indicates the object shown in robot $k$'s view, located in bounding box $b$ at time $t$. Such a query form is very flexible. For instance, if we ask about the location of that object at time $t'$, the system should return an answer in the form of $\langle k', b', t' \rangle$, meaning that the system predicts the object is shown in robot $k'$'s view, in bounding box $b'$, at time $t'$.

The system generates the answer in two steps: it first localizes the queried object by searching the robot parse graph at time $t$ for the object with the smallest distance to the bounding box $b$, and it then returns that object's location at time $t'$ from the joint pg. The accuracy of a model $m$ is calculated as

$\text{acc}(m) = \frac{1}{N} \sum_{n=1}^{N} \mathbb{1}\big[\, \text{IoU}(b_n^m, b_n^{gt}) > \tau \,\big],$   (21)

where $N$ is the number of queries, $b_n^{gt}$ is the ground-truth bounding box, and $b_n^m$ is the inferred bounding box returned by model $m$. We calculate the Intersection over Union (IoU) between the answer and the ground-truth bounding box; an answer is correct if and only if it predicts the right view and the IoU exceeds the threshold $\tau$.
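A minimal sketch of this evaluation metric: an answer is counted as correct only if it names the right view and its box overlaps the ground truth by more than the IoU threshold, which is passed in rather than assumed.

```python
def iou(box_a, box_b):
    """Intersection over Union for two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def query_accuracy(answers, ground_truth, iou_threshold):
    """Fraction of answers that name the correct view and exceed the IoU
    threshold; answers and ground_truth are lists of (view, box) pairs."""
    correct = sum(1 for (view, box), (gt_view, gt_box) in zip(answers, ground_truth)
                  if view == gt_view and iou(box, gt_box) > iou_threshold)
    return correct / len(answers)
```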

Table I shows an ablative study toggling the joint-parsing component that models human interactions, i.e., whether the model parses and tracks objects by reasoning about their interactions with agents. "# interactions" denotes how many times an object was interacted with by agents. The results show that our system achieves an overall accuracy of 88%. Even without parsing humans, the system can still reason about object locations by maintaining the other consistencies, such as spatial and appearance consistency; however, its performance drops significantly when an object is moved to a different room. Figure 4 shows qualitative results.


                        True Belief   False-Belief   Overall
Joint parsing acc.      0.94          0.93           0.94
Random guessing acc.    0.45          0.53           0.46
TABLE II: Accuracy of belief queries on the single-view subset

IV-D Experiment 2: (False-)Belief Inference

In this experiment, we evaluate the performance of belief and false-belief inference, i.e., whether an agent's belief pg matches the true object states. The evaluation is conducted in both single-view and multi-view scenarios.

Multi-view

We collected 200 queries with ground-truth annotations that focus on the Sally-Anne false-belief task. A query is defined as

$q = \langle k_o, b_o, t;\ k_a, b_a, t \rangle,$   (22)

where the first three terms specify the object shown in robot $k_o$'s view, located in bounding box $b_o$ at time $t$, and the remaining three terms specify an agent shown in robot $k_a$'s view, located at $b_a$ at time $t$. The question is: where does the agent think the object is at time $t$?

Our system generates the answer in three steps: (i) search for the object and the agent in the robot parse graphs, (ii) retrieve the agent's belief parse graphs at time $t$ to find the object's location in the human's belief, and (iii) find the object in the robot parse graph that has the same attributes as the queried object and the smallest distance to the believed location. The system finally returns that object's location as the answer.
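The sketch below mirrors steps (ii) and (iii) of this procedure, reusing the illustrative containers from the earlier data-structure sketch; attribute matching by exact equality and squared-distance ranking are simplifying assumptions.

```python
def answer_belief_query(robot_pg, belief_pg, obj_id):
    """Look up where the agent last saw the object (its belief pg), then return
    the entity in the current robot pg whose attributes match and whose box is
    closest to that remembered location (a sketch over illustrative containers)."""
    believed = belief_pg.objects[obj_id]                    # step (ii)
    candidates = [o for o in robot_pg.objects.values()      # step (iii)
                  if o.attributes == believed.attributes]
    if not candidates:
        return believed  # no matching entity currently visible
    return min(candidates,
               key=lambda o: (o.box[0] - believed.box[0]) ** 2
                           + (o.box[1] - believed.box[1]) ** 2)
```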

Since there is no publicly available code for this task, we compare our inference algorithm against a random baseline model as a reference for future benchmarks; the baseline simply returns an object with the same attributes as the query object at time $t$. The results show that our system answers these queries with substantially higher accuracy than the baseline.

Fig. 5: Two sample results of the Sally-Anne false-belief task in Experiment 2. (Top) With false-belief. (Bottom) Without false-belief. The first person puts a cup noodle into the microwave and leaves; the second person then takes that cup noodle out of the microwave and later puts his own cup noodle into the microwave before leaving. When the first person returns to the room, our system can answer where that person thinks the cup noodle is. It also correctly predicts that the person in the bottom row will not hold a false-belief, owing to the different color attributes of the cups.
Fig. 6: Two sample results of the helping task in Experiment 2. (Top) The second person enters the room and empties a box. Question: which box should the first person give to the third person if the first person wants to help? The answer returned by the system correctly infers that the third person has a false-belief, and that the first person should hand over another (full) box rather than the one the third person is reaching for. (Bottom) Since the person observes the entire process, there is no false-belief; in this case, the person reaches for the box in order to throw it away.

Single-view

We collected a total of 100 queries covering two types of belief-inference tasks: the Sally-Anne false-belief task and the helping task, as shown in Fig. 1. The queries take two forms,

$q_1 = \langle b_o, b_a, t \rangle \quad \text{and} \quad q_2 = \langle b_a, t \rangle,$   (23)

corresponding to two types of questions: (i) where does the agent think the object is at time $t$? (ii) Which object should be given to the agent at time $t$ in order to help? For the first type of question, i.e., the Sally-Anne false-belief task, the system should return the object bounding box as the answer, as in the multi-view setting. For the second type, i.e., the helping task, the system first infers whether the agent holds a false-belief. If not, the system returns the object the person wants to interact with based on their current pose; otherwise, the system returns another suitable object closest to them. Qualitative results are shown in Figs. 5 and 6, and quantitative results are provided in Table II.
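The helping decision described above can be sketched as follows; the argument layout and the substitution rule (same attributes, state matching what the agent expects) are illustrative assumptions rather than the authors' exact procedure.

```python
def answer_helping_query(reached_for, believed_state, candidates):
    """Helping-task decision sketch: 'reached_for' is the object the agent's
    pose indicates, 'believed_state' is the state the agent expects it to be in,
    and 'candidates' are the other visible objects sorted by distance to the
    agent. All argument layouts are illustrative."""
    if reached_for.state == believed_state:
        return reached_for            # no false-belief: respect the agent's own choice
    for other in candidates:          # false-belief: offer the nearest suitable substitute
        if other.state == believed_state and other.attributes == reached_for.attributes:
            return other
    return reached_for                # fall back if no suitable substitute exists
```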

V Conclusion and Discussions

In this paper, we describe the idea of using pg as a unified representation for tracking object states, accumulating robot knowledge, and reasoning about human (false-)beliefs. Based on the spatiotemporal information observed from multiple camera views of one or more robots, robot pgs and belief pgs are induced and merged into a joint pg to overcome the possible errors originating from a single view. With this representation, a joint inference algorithm is proposed that can track small, occluded objects across different views and infer human (false-)beliefs. In the experiments, we first demonstrate that joint inference over the merged pg produces better tracking accuracy. We further evaluate the inference of human true- and false-beliefs regarding objects' locations by jointly parsing the pgs. The high accuracy demonstrates that our system is capable of modeling and understanding human (false-)beliefs, with the potential for the kind of helping capability demonstrated in developmental psychology.

ToM and the Sally-Anne test are interesting and difficult problems in social robotics. For a service robot to interact with humans in an intuitive manner, it must be able to maintain a model of the belief states of the agents it interacts with. We hope the proposed graphical-model-based method demonstrates a different perspective from prior methods in terms of flexibility and generalization. In the future, a more interactive and active setup would be more practical and compelling. For instance, by integrating activity recognition modules, our system should be able to perceive, recognize, and extract richer semantic information from the observed visual input, thereby enabling more subtle (false-)belief applications. Communication, gaze, and gestures are also crucial for expressing and perceiving intentions in collaborative interactions. By incorporating these essential ingredients and taking advantage of the flexibility and generalization of the model, our system should be able to move from the current passive queries to active responses that assist agents in real time.


References

  • [1] S. Baron-Cohen, A. M. Leslie, and U. Frith, “Does the autistic child have a “theory of mind”?,” Cognition, vol. 21, no. 1, pp. 37–46, 1985.
  • [2] A. Gopnik and J. W. Astington, “Children’s understanding of representational change and its relation to the understanding of false belief and the appearance-reality distinction,” Child development, pp. 26–37, 1988.
  • [3] O. N. Saracho, “Theory of mind: children’s understanding of mental states,” Early Child Development and Care, vol. 184, no. 6, pp. 949–961, 2014.
  • [4] H. Wimmer and J. Perner, “Beliefs about beliefs: Representation and constraining function of wrong beliefs in young children’s understanding of deception,” Cognition, vol. 13, no. 1, pp. 103–128, 1983.
  • [5] C. Breazeal, J. Gray, and M. Berlin, “An embodied cognition approach to mindreading skills for socially intelligent robots,” International Journal of Robotics Research (IJRR), vol. 28, no. 5, pp. 656–680, 2009.
  • [6] M. Liggins II, D. Hall, and J. Llinas, Handbook of multisensor data fusion: theory and practice. CRC press, 2017.
  • [7] H. Koppula and A. Saxena, “Learning spatio-temporal structure from rgb-d videos for human activity detection and anticipation,” in International Conference on Machine Learning (ICML), 2013.
  • [8] S. Qi, S. Huang, P. Wei, and S.-C. Zhu, “Predicting human activities using stochastic grammar,” in International Conference on Computer Vision (ICCV), 2017.
  • [9] L. Fan, Y. Chen, P. Wei, W. Wang, and S.-C. Zhu, “Inferring shared attention in social scene videos,” in Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • [10] P. Wei, Y. Liu, T. Shu, N. Zheng, and S.-C. Zhu, “Where and why are they looking? jointly inferring human attention and intentions in complex tasks,” in Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • [11] S.-C. Zhu, D. Mumford, et al., “A stochastic grammar of images,” Foundations and Trends® in Computer Graphics and Vision, vol. 2, no. 4, pp. 259–362, 2007.
  • [12] R. Baillargeon, E. S. Spelke, and S. Wasserman, “Object permanence in five-month-old infants,” Cognition, vol. 20, no. 3, pp. 191–208, 1985.
  • [13] B. Scassellati, “Theory of mind for a humanoid robot,” Autonomous Robots, vol. 12, no. 1, pp. 13–24, 2002.
  • [14] A. Thomaz, G. Hoffman, M. Cakmak, et al., “Computational human-robot interaction,” Foundations and Trends® in Robotics, vol. 4, no. 2-3, pp. 105–223, 2016.
  • [15] M. Warnier, J. Guitton, S. Lemaignan, and R. Alami, “When the robot puts itself in your shoes. managing and exploiting human and robot beliefs,” in International Symposium on Robot and Human Interactive Communication (RO-MAN), 2012.
  • [16] G. Milliez, M. Warnier, A. Clodic, and R. Alami, “A framework for endowing an interactive robot with reasoning capabilities about perspective-taking and belief management,” in International Symposium on Robot and Human Interactive Communication (RO-MAN), 2014.
  • [17] S. Devin and R. Alami, “An implemented theory of mind to improve human-robot shared plans execution,” in ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2016.
  • [18] T. Bolander, “Seeing is believing: Formalising false-belief tasks in dynamic epistemic logic,” in Jaakko Hintikka on Knowledge and Game-Theoretical Semantics, pp. 207–236, Springer, 2018.
  • [19] E. Lorini and F. Romero, “Decision procedures for epistemic logic exploiting belief bases,” in International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2019.
  • [20] M. Edmonds, F. Gao, X. Xie, H. Liu, S. Qi, Y. Zhu, B. Rothrock, and S.-C. Zhu, “Feeling the force: Integrating force and pose for fluent discovery through imitation learning to open medicine bottles,” in International Conference on Intelligent Robots and Systems (IROS), 2017.
  • [21] H. Liu, Y. Zhang, W. Si, X. Xie, Y. Zhu, and S.-C. Zhu, “Interactive robot knowledge patching using augmented reality,” in International Conference on Robotics and Automation (ICRA), 2018.
  • [22] M. Edmonds, F. Gao, H. Liu, X. Xie, S. Qi, B. Rothrock, Y. Zhu, Y. N. Wu, H. Lu, and S.-C. Zhu, “A tale of two explanations: Enhancing human trust by explaining robot behavior,” Science Robotics, vol. 4, no. 37, 2019.
  • [23] M. Hofmann, D. Wolf, and G. Rigoll, “Hypergraphs for joint multi-view reconstruction and multi-object tracking,” in Conference on Computer Vision and Pattern Recognition (CVPR), 2013.
  • [24] J. Liebelt and C. Schmid, “Multi-view object class detection with a 3d geometric model,” in Conference on Computer Vision and Pattern Recognition (CVPR), 2010.
  • [25] A. Utasi and C. Benedek, “A 3-d marked point process model for multi-view people detection,” in Conference on Computer Vision and Pattern Recognition (CVPR), 2011.
  • [26] J. Berclaz, F. Fleuret, E. Turetken, and P. Fua, “Multiple object tracking using k-shortest paths optimization,” Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol. 33, no. 9, pp. 1806–1819, 2011.
  • [27] Y. Xu, X. Liu, Y. Liu, and S.-C. Zhu, “Multi-view people tracking via hierarchical trajectory composition,” in Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
  • [28] H. Qi, Y. Xu, T. Yuan, T. Wu, and S.-C. Zhu, “Scene-centric joint parsing of cross-view videos,” in AAAI Conference on Artificial Intelligence (AAAI), 2018.
  • [29] L. Wen, W. Li, J. Yan, Z. Lei, D. Yi, and S. Z. Li, “Multiple target tracking based on undirected hierarchical relation hypergraph,” in Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
  • [30] A. Dehghan, Y. Tian, P. H. Torr, and M. Shah, “Target identity-aware network flow for online multiple target tracking,” in Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
  • [31] X. Dong, J. Shen, D. Yu, W. Wang, J. Liu, and H. Huang, “Occlusion-aware real-time object tracking,” IEEE Transactions on Multimedia, vol. 19, no. 4, pp. 763–771, 2017.
  • [32] W. Liang, Y. Zhu, and S.-C. Zhu, “Tracking occluded objects and recovering incomplete trajectories by reasoning about containment relations and human actions,” in AAAI Conference on Artificial Intelligence (AAAI), 2018.
  • [33] Y. Zhu, C. Jiang, Y. Zhao, D. Terzopoulos, and S.-C. Zhu, “Inferring forces and learning human utilities from videos,” in Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
  • [34] J. Call and M. Tomasello, “A nonverbal false belief task: The performance of children and great apes,” Child development, vol. 70, no. 2, pp. 381–395, 1999.
  • [35] C. L. Baker, R. Saxe, and J. B. Tenenbaum, “Action understanding as inverse planning,” Cognition, vol. 113, no. 3, pp. 329–349, 2009.
  • [36] T. Braüner, P. Blackburn, and I. Polyanskaya, “Second-order false-belief tasks: Analysis and formalization,” in International Workshop on Logic, Language, Information, and Computation, 2016.
  • [37] Y. Wu, J. A Haque, and L. Schulz, “Children can use others’ emotional expressions to infer their knowledge and predict their behaviors in classic false belief tasks,” in Annual Meeting of the Cognitive Science Society (CogSci), 2018.
  • [38] K. Zhou, Y. Yang, A. Cavallaro, and T. Xiang, “Omni-scale feature learning for person re-identification,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 3702–3712, 2019.
  • [39] H. W. Kuhn, “The hungarian method for the assignment problem,” Naval research logistics quarterly, vol. 2, no. 1-2, pp. 83–97, 1955.
  • [40] H. W. Kuhn, “Variants of the hungarian method for assignment problems,” Naval Research Logistics Quarterly, vol. 3, no. 4, pp. 253–258, 1956.
  • [41] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollar, “Focal loss for dense object detection,” in International Conference on Computer Vision (ICCV), 2017.
  • [42] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft coco: Common objects in context,” in European Conference on Computer Vision (ECCV), 2014.
  • [43] H. Fang, S. Xie, Y.-W. Tai, and C. Lu, “Rmpe: Regional multi-person pose estimation,” in International Conference on Computer Vision (ICCV), 2017.
  • [44] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in International Conference on Learning Representations (ICLR), 2015.