Learning Document Embeddings With CNNs
We propose a new model for unsupervised document embedding. Existing approaches either require complex inference or use recurrent neural networks that are difficult to parallelize. We take a different route and use recent advances in language modelling to develop a convolutional neural network embedding model. This allows us to train deeper architectures that are fully parallelizable. Stacking layers together increases the receptive field, allowing each successive layer to model increasingly long-range semantic dependencies within the document. Empirically, we demonstrate superior results on two publicly available benchmarks.
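The receptive-field claim can be illustrated with a toy calculation: stacking L valid (non-dilated) 1D convolutions of kernel size k gives each output position a view over 1 + L(k - 1) input tokens. The sketch below is not the authors' architecture; it uses a hypothetical all-ones linear kernel purely to show how the region of influence grows with depth.

```python
import numpy as np

def conv1d_valid(x, kernel):
    """Valid-mode 1D convolution of a 1-D signal with a 1-D kernel."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) for i in range(len(x) - k + 1)])

def receptive_field(num_layers, kernel_size):
    """Receptive field of stacked non-dilated convolutions: each layer adds k - 1."""
    return 1 + num_layers * (kernel_size - 1)

# Impulse at one "token" position; trace how far its influence spreads.
x = np.zeros(20)
x[10] = 1.0
kernel = np.ones(3)  # toy kernel, not a trained filter

h = x
for _ in range(3):          # three stacked layers, kernel size 3
    h = conv1d_valid(h, kernel)

nonzero = int((h != 0).sum())
print(nonzero, receptive_field(3, 3))  # both are 7: the impulse touches 7 outputs
```

With dilated convolutions the growth would be exponential rather than linear in depth, which is one common way such architectures extend their range without many layers.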