Effective Training Strategies for Deep Graph Neural Networks
Graph Neural Networks (GNNs) tend to suffer performance degradation as model depth increases, which previous works usually attribute to the oversmoothing problem. However, by experimentally investigating Graph Convolutional Networks (GCNs), a representative GNN architecture, we find that although oversmoothing is a contributing factor, the main causes of this degradation are training difficulty and overfitting. We find that training difficulty is caused by vanishing gradients and can be resolved by adding residual connections. More importantly, overfitting is the major obstacle for deep GCNs and cannot be effectively addressed by existing regularization techniques. Deep GCNs also suffer from training instability, which slows down training. To address overfitting and training instability, we propose Node Normalization (NodeNorm), which normalizes each node's hidden features using that node's own statistics during training. NodeNorm regularizes deep GCNs by discouraging feature-wise correlation of hidden embeddings and increasing model smoothness with respect to input node features, and thus effectively reduces overfitting. Additionally, it stabilizes the training process and hence speeds up training. Extensive experiments demonstrate that NodeNorm generalizes well to other GNN architectures, enabling deep GNNs to compete with and even outperform shallow ones. Code is publicly available.
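To make the node-wise normalization concrete, the following is a minimal sketch of a per-node normalization step in PyTorch. It assumes each node's hidden vector is standardized by its own mean and standard deviation computed across the feature dimension; the exact formulation and any scaling hyperparameters used in the paper may differ, and the `node_norm` function and the surrounding layer stack shown here are illustrative, not the authors' released implementation.

```python
import torch


def node_norm(x: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Normalize each node's hidden embedding using its own statistics.

    x: node feature matrix of shape [num_nodes, hidden_dim].
    Each row (one node) is centered and scaled by the mean and standard
    deviation of its own features. This is an assumed instantiation of
    "normalizing each node using its own statistics".
    """
    mean = x.mean(dim=-1, keepdim=True)  # per-node mean over the feature dim
    std = x.std(dim=-1, keepdim=True)    # per-node std over the feature dim
    return (x - mean) / (std + eps)


# Hypothetical usage inside one layer of a deep GCN:
#   h = conv(h, edge_index)   # graph convolution
#   h = node_norm(h)          # per-node normalization
#   h = torch.relu(h)         # nonlinearity
```

Because the statistics are computed per node rather than per batch or per feature, the normalization is independent of graph size and batching, which is consistent with the stated goal of stabilizing training of deep GCNs.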