Explanations as Features: LLM-Based Features for Text-Attributed Graphs

05/31/2023
by   Xiaoxin He, et al.

Representation learning on text-attributed graphs (TAGs) has become a critical research problem in recent years. A typical example of a TAG is a paper citation graph, where the text of each paper serves as node attributes. Most graph neural network (GNN) pipelines handle these text attributes by transforming them into shallow or hand-crafted features, such as skip-gram or bag-of-words features. Recent efforts have focused on enhancing these pipelines with language models. With the advent of powerful large language models (LLMs) such as GPT, which demonstrate an ability to reason and to utilize general knowledge, there is a growing need for techniques that combine the textual modelling abilities of LLMs with the structural learning capabilities of GNNs. Hence, in this work, we focus on leveraging LLMs to capture textual information as features, which can be used to boost GNN performance on downstream tasks. A key innovation is our use of explanations as features: we prompt an LLM to perform zero-shot classification and to provide textual explanations for its decisions, and find that the resulting explanations can be transformed into useful and informative features to augment downstream GNNs. Through experiments we show that our enriched features improve the performance of a variety of GNN models across different datasets. Notably, we achieve top-1 performance on ogbn-arxiv by a significant margin over the closest baseline, even with 2.88× lower computation time, as well as top-1 performance on TAG versions of the widely used PubMed and Cora benchmarks. [Our codes and datasets are available at: <https://github.com/XiaoxinHe/TAPE>]
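The explanations-as-features pipeline described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `llm_classify_with_explanation` is a hypothetical stand-in for a real LLM API call, and the hashing embedding is a toy substitute for the fine-tuned language-model encoder the paper uses to turn explanation text into dense node features.

```python
import numpy as np


def llm_classify_with_explanation(text):
    # Hypothetical stand-in for an LLM call: the actual method prompts an
    # LLM for a zero-shot label plus a textual explanation of its decision.
    label = "cs.LG" if "learning" in text.lower() else "cs.CL"
    explanation = f"The abstract mentions terms suggesting the topic {label}."
    return label, explanation


def embed_text(text, dim=64):
    # Toy hashing embedding; in the paper, a fine-tuned language model
    # encodes the raw text and the explanation into dense vectors.
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec


def enrich_node_features(texts, dim=64):
    # Concatenate original-text embeddings with explanation embeddings,
    # producing LLM-augmented features for a downstream GNN.
    feats = []
    for t in texts:
        _, explanation = llm_classify_with_explanation(t)
        feats.append(np.concatenate([embed_text(t, dim),
                                     embed_text(explanation, dim)]))
    return np.stack(feats)


nodes = ["Graph neural networks for citation data.",
         "Language models and transfer in NLP."]
X = enrich_node_features(nodes)
print(X.shape)  # (2, 128): each node gets text + explanation features
```

The resulting matrix `X` would replace the shallow bag-of-words node features in a standard GNN pipeline; the structural message passing itself is unchanged.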
