Optimization and Interpretability of Graph Attention Networks for Small Sparse Graph Structures in Automotive Applications

05/25/2023
by Marion Neumeier et al.

For automotive applications, the Graph Attention Network (GAT) is a prominently used architecture to incorporate relational information of a traffic scenario during feature embedding. As shown in this work, however, one of the most popular GAT realizations, namely GATv2, has potential pitfalls that hinder optimal parameter learning. Proper optimization is particularly problematic for small and sparse graph structures. To overcome these limitations, this work proposes architectural modifications of GATv2. Controlled experiments show that the proposed model adaptations improve prediction performance in a node-level regression task and make the model more robust to parameter initialization. This work aims at a better understanding of the attention mechanism and analyzes its interpretability with respect to identifying causal importance.
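For context, the sketch below shows the standard single-head GATv2 attention scoring (Brody et al., 2022) that the paper analyzes; it is a minimal PyTorch sketch under assumed conventions, not the paper's proposed modification. The class name GATv2Score, the dimensions, and the edge-list layout are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATv2Score(nn.Module):
    """Single-head GATv2 attention scores (Brody et al., 2022):
    e_ij = a^T LeakyReLU(W [h_i || h_j]),  alpha_ij = softmax over j in N(i).
    Illustrative sketch; not the modified architecture from the paper above.
    """

    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.W = nn.Linear(2 * in_dim, hid_dim, bias=False)  # shared transform of the node pair
        self.a = nn.Linear(hid_dim, 1, bias=False)           # attention vector a

    def forward(self, h: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # h: (N, in_dim) node features; edge_index: (2, E) with rows (source j, target i)
        src, dst = edge_index
        pair = torch.cat([h[dst], h[src]], dim=-1)               # (E, 2*in_dim), i.e. [h_i || h_j]
        e = self.a(F.leaky_relu(self.W(pair), 0.2)).squeeze(-1)  # (E,) unnormalized scores
        # Softmax over the incoming edges of each target node i.
        num = (e - e.max()).exp()                                # shift for numerical stability
        denom = torch.zeros(h.size(0), device=h.device).index_add_(0, dst, num)
        return num / denom[dst]                                  # (E,) attention coefficients
```

Note that on a small, sparse graph many target nodes have only one or two incoming edges, so each per-node softmax is taken over very few scores; this is the regime in which the paper reports optimization difficulties for GATv2.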

Related research

CoulGAT: An Experiment on Interpretability of Graph Attention Networks (12/18/2019)
We present an attention mechanism inspired by the definition of screened C...

Towards Interpretable Sparse Graph Representation Learning with Laplacian Pooling (05/28/2019)
Recent work in graph neural networks (GNNs) has led to improvements in ...

Spiking GATs: Learning Graph Attentions via Spiking Neural Network (09/05/2022)
Graph Attention Networks (GATs) have been intensively studied and widely...

Causally-guided Regularization of Graph Attention Improves Generalizability (10/20/2022)
However, the inferred attentions are vulnerable to spurious correlations...

Relational Graph Attention Networks (04/11/2019)
We investigate Relational Graph Attention Networks, a class of models th...

Kernel Graph Attention Network for Fact Verification (10/22/2019)
This paper presents Kernel Graph Attention Network (KGAT), which conduct...

Is Sparse Attention more Interpretable? (06/02/2021)
Sparse attention has been claimed to increase model interpretability und...
