
Graph Neural Networks for Human-aware Social Navigation

Autonomous navigation is a key skill for assistive and service robots. To be successful, robots have to navigate avoiding going through the personal spaces of the people surrounding them. Complying with social rules such as not getting in the middle of human-to-human and human-to-object interactions is also important. This paper suggests using Graph Neural Networks to model how inconvenient the presence of a robot would be in a particular scenario according to learned human conventions so that it can be used by path planning algorithms. To do so, we propose two ways of modelling social interactions using graphs and benchmark them with different Graph Neural Networks using the SocNav1 dataset. We achieve close-to-human performance in the dataset and argue that, in addition to promising results, the main advantage of the approach is its scalability in terms of the number of social factors that can be considered and easily embedded in code, in comparison with model-based approaches. The code used to train and test the resulting graph neural network is available in a public repository.



I Introduction

Human-aware robot navigation deals with the challenge of endowing mobile social robots with the capability of considering the emotions and safety of nearby people while moving around their surroundings. There is a wide range of works studying human-aware navigation from a considerably diverse set of perspectives. Pioneering works such as [19] started taking into account the personal spaces of the people surrounding the robots, often referred to as proxemics. In addition to proxemics, human motion patterns were analysed in [11] to estimate whether humans are willing to interact. Semantic properties were also considered in [6]. Although not directly applied to navigation, the relationships between humans and objects were used in the context of ambient intelligence in [2]. Proxemics and object affordances were jointly considered in [28] for navigation purposes. Two extensive surveys on human-aware navigation can be found in [22] and [3].

Despite being built on well-studied psychological models, the previously mentioned approaches have limitations. Considering new factors programmatically (i.e., writing additional code) involves a potentially high number of coding hours, makes systems more complex and increases the chances of introducing bugs. Additionally, with every new aspect considered for navigation, the decisions made become less explainable, which is precisely one of the main advantages of model-based approaches over data-driven ones. Besides these model scalability and explainability issues, model-based approaches have the intrinsic and rather obvious limitation that they only account for what the model explicitly considers. Given that these models are manually written by humans, they cannot account for aspects their designers are not aware of.

Approaches leveraging machine learning have also been published. The parameters of a social force model (see [12]) are learned in [8] and [20] to navigate in human-populated environments. Inverse reinforcement learning is used in [21] and [26] to plan navigation routes based on a list of humans within a radius. Social norms are implemented using deep reinforcement learning in [5], again considering a set of humans. An approach modelling crowd-robot interaction and navigation control is presented in [4]. It features a two-module architecture where single interactions are modelled and then aggregated. Although its authors reported good qualitative results, the approach does not contemplate integrating additional information (e.g., relations between humans and objects, structure and size of the room). The work in [18] tackles the same problem using Gaussian Mixture Models. It has the advantage of requiring less training data, but the approach is also limited in terms of the input information used.

All the previous works and many others not mentioned have achieved outstanding results. Some model-based approaches such as [6] or [28] can leverage structured information to take into account space affordances. Still, the data considered to make such decisions are often handcrafted features based on an arbitrary subset of the data that a robot would be able to work with. There are many reasons motivating the search for a learning-based approach that does not require feature handcrafting or manual selection. The design of handcrafted features is time-consuming and often requires a deep understanding of the particular domain (see the discussion in [15]). Additionally, there is generally no guarantee that a particular hand-engineered set of features is close to being the best possible one. On the other hand, most end-to-end deep learning approaches have important limitations too: they require large amounts of data and computational resources that are often scarce and expensive, and they are hard to explain and manually fine-tune. Somewhere in the middle of the spectrum, we have proposals advocating not to choose between hand-engineered features and end-to-end learning. In particular, [1] proposes Graph Neural Networks (GNNs) as a means to perform learning that allows combining raw data with hand-engineered features and, most importantly, to learn from structured information. The relational inductive bias of GNNs is especially well-suited to learning about structured data and the relations between different types of entities, often requiring less training data than other approaches. In this line, we argue that using GNNs for human-aware navigation makes it possible to integrate new social cues in a straightforward fashion, simply by including more data in the graphs they are fed.

In this paper we use different GNN models to estimate social navigation compliance, i.e., given a robot pose and a scenario where humans and objects can be interacting, estimating to what extent a robot would be disturbing the humans if it was located in such a pose. GNNs are proposed because the information that social robots can work with is not just a map and a list of people, but a more sophisticated data structure where the entities represented have different relations among them. For example, social robots can have information about who a human is talking to, where people are looking at, who is friends with who, or who is the owner of an object in the scene. Regardless of how this information is acquired, it can be naturally represented using a graph, and GNNs are a particularly well-suited and scalable machine learning approach to work with these graphs.

II Graph Neural Networks

Graph Neural Networks (GNNs) are a family of machine learning approaches based on neural networks that take graph-structured data as input. They allow classifying and making regressions on graphs, nodes and edges, as well as predicting link existence when working with partially observable phenomena. With few exceptions (e.g., [31]), GNNs are composed of similar stacked blocks/layers operating on a graph whose structure remains static but whose node features are updated in every layer of the network (see Fig. 1).

Fig. 1: A basic GNN block/layer. A GNN is usually composed of several stacked GNN layers. Higher-level features are learnt in the deeper layers, so that the output of any of the nodes in the last layer can be used for classification or regression purposes.

As a consequence, the features associated to the nodes of the graph in each layer become more abstract and are influenced by a wider context as layers go deeper. The features in the nodes of the last layer are frequently used to perform the final classification or regression.

The first published efforts on applying neural networks to graphs date back to [25]. GNNs were further studied and formalised in [10] and [23]. However, it was with the appearance of Gated Graph Neural Networks (GG-NNs, [16]) and especially Graph Convolutional Networks (GCNs, [13]) that GNNs gained traction. The work presented in [1] reviewed and unified the notation used in the GNNs existing to date.

Graph Convolutional Networks (GCN) [13] are one of the most common GNN blocks. Because of its simplicity, we build on the GCN block to provide the reader with an intuition of how GNNs work in general. Following the notation proposed in [1], GCN blocks operate over a graph $G = (V, E)$, where $V = \{v_i\}_{i=1:N^v}$ is a set of nodes, $v_i$ being the feature vector of node $i$ and $N^v$ the number of vertices in the graph. $E = \{(s_k, d_k)\}_{k=1:N^e}$ is a set of edges, where $s_k$ and $d_k$ are the source and destination indices of edge $k$ and $N^e$ is the number of edges in the graph. Each GCN layer generates an updated representation $v_i'$ for each node $v_i$ using two functions:

$$\bar{v}_i = \rho\big(\{ v_{s_k} \mid d_k = i \}\big), \qquad v_i' = \phi\big(\bar{v}_i\big).$$

For every node $v_i$, the first function ($\rho$) aggregates the feature vectors of other nodes with an edge towards it and generates a temporary aggregated feature $\bar{v}_i$ which is used by the second function. In a second pass, the $\phi$ function is used to generate updated feature vectors from the aggregated ones using a neural network (usually a multi-layer perceptron, but the framework does not make any assumption in this respect). Such a learnable function is the same for all the nodes. By stacking several blocks where features are aggregated and updated, the feature vectors can carry information from nodes far away in the graph and convey higher-level features that can finally be used for classification or regression.
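As an intuition-level illustration, the aggregate-and-update scheme above can be sketched in a few lines of Python. This is a minimal sketch using mean aggregation and a single linear map with ReLU as the update, not the exact normalisation used by GCN (which also adds self-loops and degree-based scaling); all names are illustrative.

```python
import numpy as np

def gcn_layer(node_feats, edges, weight):
    """One aggregate-and-update pass.
    node_feats: (N, F) array; edges: list of (src, dst) index pairs;
    weight: (F, F_out) learned parameter matrix, shared by all nodes."""
    n, _ = node_feats.shape
    aggregated = np.zeros_like(node_feats)
    counts = np.zeros(n)
    # First pass (rho): aggregate the features of nodes with an edge towards
    # each node, here by averaging them.
    for src, dst in edges:
        aggregated[dst] += node_feats[src]
        counts[dst] += 1
    counts[counts == 0] = 1          # avoid division by zero for isolated nodes
    aggregated /= counts[:, None]
    # Second pass (phi): update every node with the same learnable function
    # (here, a linear map followed by a ReLU non-linearity).
    return np.maximum(aggregated @ weight, 0.0)

feats = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
edges = [(0, 2), (1, 2)]             # nodes 0 and 1 point towards node 2
out = gcn_layer(feats, edges, np.eye(2))
# node 2 aggregates the mean of the features of nodes 0 and 1
```

Stacking several such layers is what lets a node's feature vector be influenced by increasingly distant parts of the graph.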

Several means of improving GCNs have been proposed. Relational Graph Convolutional Networks (RGCNs, [24]) extend GCNs by considering different types of edges separately, applying the resulting model to vertex classification and link prediction. Graph Attention Networks (GATs, [29]) extend GCNs by adding self-attention mechanisms (see [27]), applying the resulting model to vertex classification. For a more detailed review of GNNs and the generalised framework, please refer to [1].

III Formalisation of the problem

The aim of this work is to analyse the scope of GNNs in the field of human-aware social navigation. Our study has been set up using the SocNav1 dataset (see [17]). It contains scenes with a robot in a room, a number of objects and a number of people that can potentially be interacting with other objects or people. Each scene is labelled with a score indicating to what extent the subjects who labelled the scenarios considered that the robot was disturbing the people in the scene. The dataset provides 16336 labelled samples to be used for training purposes, 556 scenarios as the development dataset and an additional 556 for final testing purposes.

As previously noted, GNNs are a flexible framework that allows working somewhere in the middle of end-to-end and feature engineered learning. Developers can use as many data features as desired and are free to structure the input graph data as they please. The only limitations are those of the particular GNN layer blocks used. In particular, while GCN and GAT do not support labelled edges, RGCN and GG-NN do. To account for this limitation, two representations were used in the experiments, depending on the GNN block to be tested: one without edge labels and one with them.

The first version of the scene-to-graph transformation used to represent the scenarios does not use labelled edges. It uses 6 node types (the features associated to each of the types are detailed later in this section):

  • robot: The dataset only includes one robot in each scene, so there is just one robot symbol in each of the graphs. However, GNNs do not have such restriction.

  • wall: A node for each of the segments defining the room. They are connected to the room node.

  • room: Used to represent the room where the robot is located. It is connected to the robot.

  • object: A node for each object in the scene.

  • human: A node for each human. Humans might be interacting with objects or other humans.

  • interaction: An interaction node is created for every human-to-human or human-to-object interaction.
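Under the node types listed above, the non-labelled scene-to-graph transformation can be sketched in plain Python. The connectivity beyond what the list states (here, linking human and object nodes to the room node) and all identifiers are illustrative assumptions, not the exact scheme of the dataset tooling.

```python
def scene_to_graph(humans, objects, walls, interactions):
    """Sketch of the non-labelled representation.
    Returns (nodes, edges): nodes is a list of (id, type) pairs and edges a
    list of (src_id, dst_id) pairs."""
    nodes = [("robot", "robot"), ("room", "room")]
    edges = [("room", "robot")]                 # the room node connects to the robot
    for w in walls:                             # one node per wall segment,
        nodes.append((w, "wall"))               # connected to the room node
        edges.append((w, "room"))
    for h in humans:                            # one node per human
        nodes.append((h, "human"))
        edges.append((h, "room"))               # assumed link to the room
    for o in objects:                           # one node per object
        nodes.append((o, "object"))
        edges.append((o, "room"))               # assumed link to the room
    # One interaction node per human-to-human or human-to-object interaction,
    # connected to both interacting entities.
    for i, (a, b) in enumerate(interactions):
        inter = f"interaction_{i}"
        nodes.append((inter, "interaction"))
        edges.extend([(a, inter), (b, inter)])
    return nodes, edges

nodes, edges = scene_to_graph(humans=["h1", "h2"], objects=["o1"],
                              walls=["w1"], interactions=[("h1", "h2")])
```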

Figure 2 depicts two areas of a scenario where four humans are shown in a room with several objects. Two of the humans are interacting with each other, another human is interacting with an object, and the remaining human is not engaging in interaction with any human or object. The structure of the resulting non-labelled graph is shown in Fig. 3(a).

(a) An area of a scenario where two humans interacting in a room are depicted.
(b) Heat map of the social compliance estimation for the area shown in Fig. 2(a) in the different positions in the environment.
(c) An area of the same scenario with two humans. The human on the left is not engaged in any interaction. The human on the right is interacting with the objects in front of her.
(d) Heat map of the social compliance estimation for the area shown in Fig. 2(c) in the different positions in the environment.
Fig. 2: Different areas of a scenario where social interactions are being held and their corresponding estimated heat map of “social inconvenience”.

The features used for human and object nodes are: the distance, the relative angle from the robot’s point of view, and the orientation, also from the robot’s point of view. For room symbols the features are: the distance to the closest human and the number of humans. For wall segments and interaction symbols, the features are the distance and orientation from the robot’s frame of reference. For wall segments, the position is the centre of the segment and the orientation is the tangent. For interactions, the position is the midpoint between the interacting symbols, and the orientation is the tangent of the line connecting them. Features related to distances are expressed in meters, whereas each angle is actually expressed as two different numbers, its sine and cosine. The final features of the nodes are built by concatenating the one-hot encoding that determines the type of the symbol and the features for the different node types. It is worth noting that by building feature vectors this way, their size increases with every new type. This limitation is currently being studied by the GNN scientific community.
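A hedged sketch of how such a feature vector could be assembled for a human node: the one-hot type encoding concatenated with the type-specific features, with each angle expressed through its sine and cosine. The exact ordering of the entries is an assumption.

```python
import math

# Node types of the non-labelled representation (order is illustrative).
NODE_TYPES = ["robot", "wall", "room", "object", "human", "interaction"]

def human_features(distance_m, angle_rad, orientation_rad):
    """One-hot type encoding followed by the human node's features:
    distance (meters), then each angle split into sine and cosine."""
    one_hot = [1.0 if t == "human" else 0.0 for t in NODE_TYPES]
    return one_hot + [
        distance_m,
        math.sin(angle_rad), math.cos(angle_rad),
        math.sin(orientation_rad), math.cos(orientation_rad),
    ]

v = human_features(2.0, math.pi / 2, 0.0)
```

Splitting an angle into its sine and cosine avoids the discontinuity at ±π that a single scalar angle would introduce.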

For the GNN blocks that can work with labelled edges, a slightly different version of the scene-to-graph transformation is used. The first difference is that in this version of the scenario-to-graph model there are no interaction nodes: the interacting elements are linked to each other directly. Robot, room, wall, human and object nodes are attributed with the same features as in the previous model. The second difference is related to the labelling of the edges. In this domain, the semantics of the edges can be inferred from the types of the nodes being connected. For example, wall and room nodes are always connected by the same kind of relation, “composes”. Similarly, human and object nodes are always connected by the relation “interacts_with_human”. The same holds the other way around: “composes” relations only occur between wall and room nodes, and “interacts_with_human” relations only occur between human and object nodes. Therefore, for simplicity, the label used for the edges is the concatenation of the types involved. The structure of the resulting labelled graph for the scenario depicted in Fig. 2 is shown in Fig. 3(b).
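The labelling convention can be sketched as a tiny helper that derives an edge label from the endpoint types. The underscore separator and the type-concatenation naming (rather than semantic names such as “composes”) are illustrative assumptions.

```python
# Hypothetical node-id to node-type map for a small labelled-edge graph.
NODE_TYPE = {"w1": "wall", "room": "room", "h1": "human", "o1": "object"}

def edge_label(src, dst):
    # The edge semantics follow from the endpoint types, so the label is
    # simply the concatenation of the two node types.
    return f"{NODE_TYPE[src]}_{NODE_TYPE[dst]}"

labels = {(s, d): edge_label(s, d)
          for s, d in [("w1", "room"), ("o1", "h1")]}
```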

(a) Graph without labelled edges.
(b) Graph with labelled edges.
Fig. 3: Examples of how the scene-to-graph transformations work, based on the scenario depicted in Fig. 2.

Because all nodes are connected to the robot, the GNNs were trained to perform the regression on the feature vector of the robot node in the last layer.
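The readout described above can be sketched on a toy star-shaped graph: every node is connected to the robot node, the graph is passed through stacked mean-aggregation layers (with self-loops), and the regression is read from the robot node's final feature vector through a linear head. The aggregation scheme and the head are illustrative assumptions, not the trained network.

```python
import numpy as np

def layer(feats, adj):
    # Mean-aggregate over self and neighbours (rows of adj select them).
    deg = adj.sum(axis=1, keepdims=True)
    return (adj @ feats) / deg

n = 4                                    # robot (index 0) + three other nodes
adj = np.eye(n)                          # self-loops
adj[0, 1:] = adj[1:, 0] = 1.0            # all nodes connected to the robot
feats = np.array([[0.0], [0.3], [0.6], [0.9]])
for _ in range(2):                       # two stacked layers
    feats = layer(feats, adj)
head = np.array([1.0])                   # illustrative linear readout weights
score = float(feats[0] @ head)           # regression taken from the robot node
```

Because the robot node is connected to everything, two layers suffice for its feature vector to be influenced by every node in this toy graph.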

Using the previously mentioned dataset and the two proposed scenario-to-graph transformations, different architectures using different GNN blocks were compared.

IV Experimental results

The four GNN blocks used in the experiments were:

  • Graph Convolutional Networks (GCN) [13].

  • Gated Graph Neural Networks (GG-NN) [16].

  • Relational Graph Convolutional Networks (RGCNs) [24].

  • Graph Attention Networks (GAT) [29], using 1 and 2 layers for the neural network implementing the attention mechanism (GAT and GAT2, respectively, in Table I).

When available, each of these GNN blocks was benchmarked using both Deep Graph Library (DGL) [30] and PyTorch-Geometric (PyG) [9]. In addition to using several layers of the same GNN building blocks, alternative architectures combining RGCN and GAT layers were also tested:

  1. A sequential combination of RGCN layers followed by GAT layers with the same number of hidden units (alternative 8 in table I).

  2. An interleaved combination of RGCN and GAT layers with the same number of hidden units (alternative 9 in table I).

  3. A sequential combination of RGCN layers followed by GAT layers with a linearly decreasing number of hidden units (alternative 10 in table I).

As a result, 10 framework-architecture combinations were benchmarked. Table I describes them and provides their corresponding performance on the development dataset. To benchmark the different architectures, 5000 training sessions were launched using the SocNav1 training dataset and evaluated using the SocNav1 development dataset. The hyperparameters were randomly sampled from the ranges of values shown in Table II.
Alt. #  Framework  Network architecture  Loss (MSE)
1       DGL        GCN                   0.02289
2       DGL        GAT                   0.01701
3       DGL        GAT2                  0.01740
4       PyG        GCN                   0.02243
5       PyG        GAT                   0.01731
6       PyG        RGCN                  0.01871
7       PyG        GG-NN                 0.02700
8       PyG        RGCNGAT 1             0.02089
9       PyG        RGCNGAT 2             0.02103
10      PyG        RGCNGAT 3             0.01995
TABLE I: A description of the different framework/architecture combinations and the experimental results obtained from their benchmark on the SocNav1 dataset.
Hyperparameter Min Max
epochs 1000
patience 5
batch size 100 1500
hidden units 50 320
attention heads 2 9
learning rate 1e-6 1e-4
weight decay 0.0 1e-6
layers 2 8
dropout 0.0 1e-6
alpha 0.1 0.3
TABLE II: Ranges of hyperparameter values sampled. The ’attention heads’ parameter is only applicable to Graph Attention Network blocks.
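The random search over the ranges of Table II can be sketched as below. The sampling distributions (e.g., log-uniform for the learning rate) are assumptions, as the text does not state how the values were drawn within their ranges.

```python
import random

# Integer-valued ranges from Table II (min, max), both ends inclusive.
INT_RANGES = {
    "batch_size": (100, 1500),
    "hidden_units": (50, 320),
    "attention_heads": (2, 9),   # only applicable to GAT blocks
    "layers": (2, 8),
}

def sample_hyperparameters(rng):
    hp = {k: rng.randint(lo, hi) for k, (lo, hi) in INT_RANGES.items()}
    hp["learning_rate"] = 10 ** rng.uniform(-6, -4)   # 1e-6 .. 1e-4
    hp["weight_decay"] = rng.uniform(0.0, 1e-6)
    hp["dropout"] = rng.uniform(0.0, 1e-6)
    hp["alpha"] = rng.uniform(0.1, 0.3)
    return hp

hp = sample_hyperparameters(random.Random(0))         # one of the 5000 draws
```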

The results obtained (see Table I) suggest that, for the dataset and the framework/architecture combinations benchmarked, DGL/GAT delivered the best results, with a loss of 0.01701 on the development dataset. The hyperparameters used by the DGL/GAT combination were: batch size: 273; number of hidden units: 129; number of attention heads: 2; number of attention heads in the last layer: 3; learning rate: 5e-05; weight decay regularisation: 1e-05; number of layers: 4; no dropout; alpha parameter of the leaky ReLU non-linearity: 0.2114. After selecting the best set of hyperparameters, the network was evaluated on a third, test dataset, obtaining an MSE of 0.03173. Figures 2(b) and 2(d) provide an intuition of the output of the network for the scenarios depicted in Figures 2(a) and 2(c), considering all the different positions of the robot in the environment.

It is worth noting that, due to the subjective nature of the labels in the dataset (human feelings are utterly subjective), there is some level of disagreement even among humans. To compare the performance of the network with human performance, we asked 5 subjects to label all the scenarios of the development dataset. The mean MSE obtained across the different subjects was 0.02929; the subjects achieved an MSE of 0.01878 if their decisions were averaged before computing the error. Overall, the results suggest that the network performs very close to human accuracy on the test set (0.03173 versus 0.02929). Figure 4 shows a histogram of the absolute error made by the GNN-based regression on the test set.
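The two ways of measuring human performance mentioned above (averaging the per-subject MSEs versus averaging the subjects' answers before computing a single MSE) can be illustrated with made-up numbers; averaging answers first cancels out individual disagreements, which is why it typically yields the lower of the two errors.

```python
def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

# Toy, made-up labels: a "ground truth" and two subjects whose individual
# errors point in opposite directions.
ground_truth = [0.2, 0.5, 0.8]
subjects = [
    [0.3, 0.4, 0.9],
    [0.1, 0.6, 0.7],
]

# Way 1: compute each subject's MSE, then average the MSEs.
mean_of_mses = sum(mse(s, ground_truth) for s in subjects) / len(subjects)

# Way 2: average the subjects' answers first, then compute a single MSE.
averaged = [sum(col) / len(col) for col in zip(*subjects)]
mse_of_average = mse(averaged, ground_truth)
```

In this toy example the subjects' disagreements cancel exactly, so the averaged answers match the ground truth while each individual subject has a non-zero error.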

Fig. 4: Histogram of the absolute error in the test dataset for the network performing best in the development dataset.

Most algorithms presented in Section I deal with modelling human intimate, personal, social and interaction spaces rather than social inconvenience, which is a more general notion. Keeping that in mind, the algorithm proposed in [28] was run against the test dataset and obtained an MSE of 0.12965. Its relatively poor performance can be explained by the fact that such algorithms do not take walls into account and that their actual goal is to model personal spaces rather than feelings in general.

Regarding the effect of the presence of humans, we can see from Fig. 2(d) that the learnt function is slightly skewed to the front of the humans, but not as much as modelled in other works such as [14] or [28]. One possible reason why the “personal space” is close to circular is that, in the dataset, humans appear to be standing still. How the personal space would look if humans were moving is yet to be studied, probably using more detailed and realistic datasets.

The results obtained using GNN blocks supporting edge labels were inferior to those obtained using GAT, which does not support edge labels. Two reasons might explain this phenomenon: a) as mentioned in Section III, the edge labels can be inferred from the types of the nodes they connect, so that information is to some extent redundant; b) the inductive bias of GATs is strong and appropriate for the problem at hand. This does not mean that the same results would be obtained in other problems where the labels of the edges cannot be inferred.

V Conclusions

To our knowledge, this paper presented the first graph neural network for human-aware navigation. The scene-to-graph transformation models and the graph neural network developed as a result of this work achieved a performance comparable to that of humans. Even though the results achieved are remarkable, the key point is that this approach makes it possible to include more relational information. This will allow including additional sources of information in the decisions without a big impact on development effort.

There is room for improvement, particularly regarding: a) personalisation (different people generally feel differently about robots), and b) movement (the inconvenience of the presence of a robot is probably influenced by the movement of the people and the robot). Still, we include interactions and walls, features which are seldom considered in other works. As far as we know, interactions have only been considered in [28] and [7].

The code to test the resulting GNN has been published in a public repository as open-source software, as well as the code implementing the scene-to-graph transformations and the code to train the suggested models.


  • [1] P. W. Battaglia, J. B. Hamrick, V. Bapst, A. Sanchez-Gonzalez, V. Zambaldi, M. Malinowski, A. Tacchetti, D. Raposo, A. Santoro, R. Faulkner, C. Gulcehre, F. Song, A. Ballard, J. Gilmer, G. Dahl, A. Vaswani, K. Allen, C. Nash, V. Langston, C. Dyer, N. Heess, D. Wierstra, P. Kohli, M. Botvinick, O. Vinyals, Y. Li, and R. Pascanu (2018) Relational inductive biases, deep learning, and graph networks. arXiv e-prints, pp. 1–40. External Links: Document, 1806.01261, ISBN 0885-6230, ISSN 0031-1820, Link Cited by: §I, §II, §II, §II.
  • [2] M. Bhatt and F. Dylla (2010) A Qualitative Model of Dynamic Scene Analysis and Interpretation in Ambient Intelligence Systems. International Journal of Robotics and Automation 24 (3), pp. 1–18. External Links: Document, ISSN 1925-7090 Cited by: §I.
  • [3] K. Charalampous, I. Kostavelis, and A. Gasteratos (2017) Recent trends in social aware robot navigation: A survey. Vol. 93, Elsevier B.V.. External Links: Document, ISSN 09218890 Cited by: §I.
  • [4] C. Chen, Y. Liu, S. Kreiss, and A. Alahi (2019) Crowd-Robot Interaction: Crowd-aware Robot Navigation with Attention-based Deep Reinforcement Learning. In International Conference on Robotics and Automation (ICRA), pp. 6015–6022. External Links: 1809.08835, Link Cited by: §I.
  • [5] Y. F. Chen, M. Everett, M. Liu, and J. P. How (2017) Socially aware motion planning with deep reinforcement learning. IEEE International Conference on Intelligent Robots and Systems 2017-Septe, pp. 1343–1350. External Links: Document, ISBN 9781538626825, ISSN 21530866 Cited by: §I.
  • [6] D. Cosley, J. Baxter, S. Lee, B. Alson, S. Nomura, P. Adams, C. Sarabu, and G. Gay (2009) A Tag in the Hand: Supporting Semantic, Social, and Spatial Navigation in Museums. Proceedings of the 27th International Conference on Human Factors in Computing Systems (CHI’09), pp. 1953–1962. External Links: Document, ISBN 9781605582467 Cited by: §I, §I.
  • [7] A. Cruz-maya (2019) Enabling Socially Competent navigation through incorporating HRI. arXiv e-prints, pp. 9–12. External Links: arXiv:1904.09116v1, ISBN 9781450399999 Cited by: §V.
  • [8] G. Ferrer, A. Garrell, and A. Sanfeliu (2013) Social-aware robot navigation in urban environments. 2013 European Conference on Mobile Robots, ECMR 2013 - Conference Proceedings, pp. 331–336. External Links: Document, ISBN 9781479902637 Cited by: §I.
  • [9] M. Fey and J. E. Lenssen (2019) Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, Cited by: §IV.
  • [10] M. Gori, G. Monfardini, and F. Scarselli (2005) A new Model for Learning in Graph domains. Proceedings of the International Joint Conference on Neural Networks 2, pp. 729–734. External Links: Document, ISBN 0780390482 Cited by: §II.
  • [11] S. T. Hansen, M. Svenstrup, H. J. Andersen, and T. Bak (2009) Adaptive human aware navigation based on motion pattern analysis. Proceedings - IEEE International Workshop on Robot and Human Interactive Communication, pp. 927–932. External Links: Document, ISBN 9781424450817 Cited by: §I.
  • [12] D. Helbing and P. Molnár (1995) Social force model for pedestrian dynamics. Physical Review E 51 (5), pp. 4282–4286. External Links: Document, ISSN 1063651X Cited by: §I.
  • [13] T. N. Kipf and M. Welling (2016) Semi-Supervised Classification with Graph Convolutional Networks. arXiv e-prints, pp. 1–14. External Links: 1609.02907, Link Cited by: §II, §II, 1st item.
  • [14] R. Kirby, R. Simmons, and J. Forlizzi (2009) Companion: a constraint-optimizing method for person-acceptable navigation. In RO-MAN 2009-The 18th IEEE International Symposium on Robot and Human Interactive Communication, pp. 607–612. Cited by: §IV.
  • [15] Y. Lecun, Y. Bengio, and G. Hinton (2015) Deep learning. Nature 521 (7553), pp. 436–444. External Links: Document, ISSN 14764687 Cited by: §I.
  • [16] Y. Li, D. Tarlow, M. Brockschmidt, and R. Zemel (2015) Gated Graph Sequence Neural Networks. arXiv e-prints (1), pp. 1–20. External Links: 1511.05493, Link Cited by: §II, 2nd item.
  • [17] L. J. Manso, P. Nunez, L. V. Calderita, D. R. Faria, and P. Bachiller (2019-09) SocNav1: A Dataset to Benchmark and Learn Social Navigation Conventions. arXiv e-prints, pp. arXiv:1909.02993. External Links: 1909.02993 Cited by: §III.
  • [18] G. S. Martins, R. P. Rocha, F. J. Pais, and P. Menezes (2019) ClusterNav: learning-based robust navigation operating in cluttered environments. In 2019 International Conference on Robotics and Automation (ICRA), pp. 9624–9630. Cited by: §I.
  • [19] E. Pacchierotti, H. I. Christensen, and P. Jensfelt (2005) Human-robot embodied interaction in hallway settings: A pilot user study. In IEEE International Workshop on Robot and Human Interactive Communication, Vol. 2005, pp. 164–171. External Links: Document, ISBN 0780392752 Cited by: §I.
  • [20] P. Patompak, S. Jeong, I. Nilkhamhang, and N. Y. Chong (2019) Learning Proxemics for Personalized Human?Robot Social Interaction. International Journal of Social Robotics. External Links: Document, ISSN 18754805 Cited by: §I.
  • [21] R. Ramon-Vigo, N. Perez-Higueras, F. Caballero, and L. Merino (2014) Transferring human navigation behaviors into a robot local planner. IEEE RO-MAN 2014 - 23rd IEEE International Symposium on Robot and Human Interactive Communication: Human-Robot Co-Existence: Adaptive Interfaces and Systems for Daily Life, Therapy, Assistance and Socially Engaging Interactions, pp. 774–779. External Links: Document, ISBN 9781479967636 Cited by: §I.
  • [22] J. Rios-Martinez, A. Spalanzani, and C. Laugier (2015) From Proxemics Theory to Socially-Aware Navigation: A Survey. International Journal of Social Robotics 7 (2), pp. 137–153. External Links: Document, ISSN 18754805 Cited by: §I.
  • [23] F. Scarselli, M. Gori, A. Tsoi, M. Hagenbuchner, and G. Monfardini (2009) The Graph Neural Network Model. IEEE Transactions on Neural Networks 20 (1), pp. 61–80. External Links: Document Cited by: §II.
  • [24] M. Schlichtkrull, T. N. Kipf, P. Bloem, R. van den Berg, I. Titov, and M. Welling (2018) Modeling Relational Data with Graph Convolutional Networks. Lecture Notes in Computer Science 10843 LNCS (1), pp. 593–607. External Links: Document, arXiv:1703.06103v4, ISBN 9783319934167, ISSN 16113349 Cited by: §II, 3rd item.
  • [25] A. Sperduti and A. Starita (1997) Supervised Neural Networks for the Classification of Structures. IEEE Transactions on Neural Networks 8 (3), pp. 1–22. Cited by: §II.
  • [26] D. Vasquez, B. Okal, and K. O. Arras (2014) Inverse Reinforcement Learning algorithms and features for robot navigation in crowds: An experimental comparison. IEEE International Conference on Intelligent Robots and Systems (Iros), pp. 1341–1346. External Links: Document, ISBN 9781479969340, ISSN 21530866 Cited by: §I.
  • [27] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. In Advances in neural information processing systems, pp. 5998–6008. Cited by: §II.
  • [28] A. Vega, L. J. Manso, D. G. Macharet, P. Bustos, and P. Núñez (2019) Socially aware robot navigation system in human-populated and interactive environments based on an adaptive spatial density function and space affordances. Pattern Recognition Letters 118, pp. 72–84. External Links: Document, ISSN 01678655 Cited by: §I, §I, §IV, §IV, §V.
  • [29] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio (2018) Graph Attention Networks. In Proceedings of the International Conference on Learning Representations 2018, pp. 1–11. External Links: arXiv:1710.10903v3, ISBN 1710.10903v3, Link Cited by: §II, 4th item.
  • [30] M. Wang, L. Yu, D. Zheng, Q. Gan, Y. Gai, Z. Ye, M. Li, J. Zhou, Q. Huang, C. Ma, Z. Huang, Q. Guo, H. Zhang, H. Lin, J. Zhao, J. Li, A. Smola, and Z. Zhang (2019) Deep Graph Library: Towards Efficient and Scalable Deep Learning on Graphs. ICLR 2019 Workshop on Representation Learning on Graphs and Manifolds (RLGM), pp. 1–7. Cited by: §IV.
  • [31] Z. Ying, J. You, C. Morris, X. Ren, W. Hamilton, and J. Leskovec (2018) Hierarchical graph representation learning with differentiable pooling. In Advances in Neural Information Processing Systems, pp. 4800–4810. Cited by: §II.