Reward Propagation Using Graph Convolutional Networks
Potential-based reward shaping provides an approach for designing good reward functions, with the purpose of speeding up learning. However, automatically finding potential functions for complex environments is a difficult problem (in fact, of the same difficulty as learning a value function from scratch). We propose a new framework for learning potential functions by leveraging ideas from graph representation learning. Our approach relies on Graph Convolutional Networks, which we use as a key ingredient in combination with the probabilistic inference view of reinforcement learning. More precisely, we leverage Graph Convolutional Networks to perform message passing from rewarding states. The propagated messages can then be used as potential functions for reward shaping to accelerate learning. We verify empirically that our approach achieves considerable improvements in both small and high-dimensional control problems.
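To make the core idea concrete, below is a minimal sketch, not the authors' implementation: it replaces a learned GCN with plain GCN-style linear propagation over a toy, hypothetical state-transition graph, and uses the propagated reward signal as the potential function Phi in the standard shaping term F(s, s') = gamma * Phi(s') - Phi(s). All state counts, hop counts, and the chain structure are illustrative assumptions.

```python
import numpy as np

# Toy, hypothetical state-transition graph: a 5-state chain with a single
# rewarding state at the end. This stands in for the environment's graph.
num_states = 5
A = np.zeros((num_states, num_states))
for s in range(num_states - 1):
    A[s, s + 1] = A[s + 1, s] = 1.0      # undirected chain edges
reward = np.zeros(num_states)
reward[-1] = 1.0                          # reward signal to propagate

# GCN-style symmetric normalization: A_hat = D^{-1/2} (A + I) D^{-1/2}
A_tilde = A + np.eye(num_states)
deg = A_tilde.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt

# Propagate the reward for a few hops; the result serves as a potential Phi.
num_hops = 3
phi = reward.copy()
for _ in range(num_hops):
    phi = A_hat @ phi

# Potential-based shaping: add F(s, s') = gamma * Phi(s') - Phi(s) to the
# environment reward during learning.
gamma = 0.99
def shaped_reward(env_reward, s, s_next):
    return env_reward + gamma * phi[s_next] - phi[s]

print("Potential Phi per state:", np.round(phi, 3))
print("Shaping bonus for moving 2 -> 3:", round(shaped_reward(0.0, 2, 3), 3))
```

Because the shaping term is potential-based, it changes the reward density near rewarding states without altering which policies are optimal; the paper's contribution is obtaining Phi via GCN message passing rather than hand-designing it.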