Learning to Make Predictions on Graphs with Autoencoders
We examine two fundamental tasks in graph representation learning: link prediction and semi-supervised node classification. We present a densely connected autoencoder architecture that learns a joint representation of local graph structure and available external node features for multi-task link prediction and node classification. To the best of our knowledge, this is the first architecture that can be efficiently trained end-to-end, in a single learning stage, to simultaneously perform link prediction and node classification. We provide a comprehensive empirical evaluation of our models on a range of challenging benchmark graph-structured datasets, and demonstrate significant improvements in accuracy over related methods for graph representation learning. A code implementation is available at https://github.com/vuptran/graph-representation-learning
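The multi-task idea described above can be illustrated with a minimal NumPy sketch: a shared encoder maps each node's adjacency row concatenated with its external features to an embedding, and two output heads respectively reconstruct the adjacency row (link prediction) and predict a class label (node classification). This is an illustrative forward pass only, not the paper's actual architecture; all dimensions, weights, and names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy graph: N nodes, adjacency A, external features X, C classes.
N, F, H, C = 10, 5, 8, 3
A = rng.integers(0, 2, size=(N, N)).astype(float)
A = np.triu(A, 1)
A = A + A.T                              # symmetric adjacency, no self-loops
X = rng.normal(size=(N, F))              # external node features

# Shared encoder: joint input is each node's adjacency row plus its features.
Z_in = np.concatenate([A, X], axis=1)    # shape (N, N + F)
W_enc = rng.normal(scale=0.1, size=(N + F, H))
Z = np.maximum(Z_in @ W_enc, 0)          # ReLU embedding, shape (N, H)

# Head 1: decoder reconstructs adjacency rows -> link prediction scores.
W_dec = rng.normal(scale=0.1, size=(H, N))
A_hat = 1.0 / (1.0 + np.exp(-(Z @ W_dec)))   # sigmoid link probabilities (N, N)

# Head 2: softmax classifier -> semi-supervised node classification.
W_cls = rng.normal(scale=0.1, size=(H, C))
logits = Z @ W_cls
Y_hat = np.exp(logits - logits.max(axis=1, keepdims=True))
Y_hat /= Y_hat.sum(axis=1, keepdims=True)    # class probabilities (N, C)

print(A_hat.shape, Y_hat.shape)
```

In a real implementation both heads would share the encoder's parameters and be trained jointly, with a reconstruction loss on `A_hat` and a cross-entropy loss on the labeled subset of nodes.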