
Effectiveness of Deep Networks in NLP using BiDAF as an example architecture

by Soumyendu Sarkar, et al.

Question answering in NLP has progressed through the evolution of advanced model architectures such as BERT and BiDAF, building on earlier word-, character-, and context-based embeddings. With BERT having leapfrogged the accuracy of prior models, one element of the next frontier may be the introduction of deep networks and an effective way to train them. In this context, I explored the effectiveness of deep networks, focusing on the model encoder layer of BiDAF. With its heterogeneous layers, BiDAF provides the opportunity not only to explore the effectiveness of deep networks but also to evaluate whether refinements made in the lower layers are additive to refinements made in the upper layers of the architecture. I believe the next great model in NLP will fold a solid language model like BERT into a composite architecture that adds refinements beyond generic language modeling and has a more extensively layered structure. I experimented with Bypass network, Residual Highway network, and DenseNet architectures. In addition, I evaluated the effectiveness of ensembling the last few layers of the network. I also studied the difference character embeddings make when added to word embeddings, and whether their effects are additive with those of deep networks. My studies indicate that deep networks do give a boost, and that refinements in the lower layers, such as embeddings, carry through additively to the gains made by deep networks.
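To make the "deep networks with skip paths" idea concrete, here is a minimal sketch of a highway layer, the building block behind the Residual Highway variant mentioned above. This is an illustrative toy in NumPy, not the paper's actual code; all names and shapes are assumptions. The key property is that a gate blends the transformed signal with the identity path, so a deep stack can initially behave like a shallow one and gradients flow freely.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def highway_layer(x, W_h, b_h, W_t, b_t):
    """One highway layer (illustrative sketch, not from the paper).

    The transform gate t blends a learned candidate h with the
    untouched input x, easing the training of deep stacks.
    """
    h = np.tanh(x @ W_h + b_h)       # candidate transform
    t = sigmoid(x @ W_t + b_t)       # transform gate in (0, 1)
    return t * h + (1.0 - t) * x     # gated blend of transform and identity

rng = np.random.default_rng(0)
d = 8
x = rng.standard_normal(d)
W_h = rng.standard_normal((d, d)) * 0.1
W_t = rng.standard_normal((d, d)) * 0.1

# With the gate bias strongly negative, t ~ 0 and the layer is
# close to the identity: a freshly stacked deep network starts
# out behaving like a shallow one.
out = highway_layer(x, W_h, np.zeros(d), W_t, np.full(d, -10.0))
print(np.allclose(out, x, atol=1e-3))  # near-identity behavior
```

A residual block is the special case where the identity path is added unconditionally (`h + x`), and DenseNet-style blocks instead concatenate the input to the output; all three give deeper stacks a short path for gradients, which is the property the abstract's experiments probe.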
