Bidirectional Recurrent Neural Networks

What are Bidirectional Recurrent Neural Networks?

Bidirectional recurrent neural networks (BRNNs) connect two hidden layers running in opposite directions to a single output, allowing them to receive information from both past and future states. This deep learning technique is more common in supervised learning than in unsupervised or semi-supervised learning, because of how difficult it is to calculate a reliable probabilistic model for the latter.
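
As an illustration, the short sketch below (assuming PyTorch; the layer sizes and class count are arbitrary, not from the text) wires a bidirectional recurrent layer to a single output layer, so each per-time-step prediction sees both the forward and backward states.

```python
import torch
import torch.nn as nn

class BRNNClassifier(nn.Module):
    def __init__(self, input_size=8, hidden_size=16, num_classes=3):
        super().__init__()
        # bidirectional=True runs one hidden layer forward in time and one
        # backward in time over the same input sequence.
        self.rnn = nn.RNN(input_size, hidden_size,
                          batch_first=True, bidirectional=True)
        # The two directions are concatenated, so the single output layer
        # sees 2 * hidden_size features per time step.
        self.out = nn.Linear(2 * hidden_size, num_classes)

    def forward(self, x):                 # x: (batch, seq_len, input_size)
        states, _ = self.rnn(x)           # states: (batch, seq_len, 2 * hidden_size)
        return self.out(states)           # per-time-step predictions

model = BRNNClassifier()
logits = model(torch.randn(4, 10, 8))     # -> shape (4, 10, 3)
```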

How are Bidirectional Recurrent Neural Networks Trained?

BRNNs are trained with algorithms similar to those used for RNNs, since the neurons in the two directions do not interact with one another. If back-propagation through time is used, additional procedures are needed, since the input and output layers cannot both be updated at once.

In general training, the forward and backward states are processed first in the “forward” pass, and the output neurons are processed afterwards. In the backward pass, the opposite takes place: the output neurons are processed first, then the forward and backward states. Weights are updated only after both the forward and backward passes are complete.
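
The hedged sketch below (again assuming PyTorch, with dummy data and arbitrary sizes) shows one training step in that order: the forward pass computes the forward and backward states and then the outputs, back-propagation flows from the outputs back through both state sequences, and the weights are updated only once both passes have finished.

```python
import torch
import torch.nn as nn

rnn = nn.RNN(8, 16, batch_first=True, bidirectional=True)
out = nn.Linear(2 * 16, 3)
optimizer = torch.optim.SGD(list(rnn.parameters()) + list(out.parameters()), lr=0.01)
criterion = nn.CrossEntropyLoss()

x = torch.randn(4, 10, 8)               # dummy input: (batch, time, features)
y = torch.randint(0, 3, (4, 10))        # dummy per-time-step class labels

optimizer.zero_grad()
states, _ = rnn(x)                       # "forward" pass: forward + backward states first
logits = out(states)                     # ...then the output neurons
loss = criterion(logits.reshape(-1, 3), y.reshape(-1))
loss.backward()                          # backward pass: gradients flow from the outputs
                                         # back through both state sequences
optimizer.step()                         # weights updated only after both passes finish
```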

What’s the Difference Between BRNNs and Recurrent Neural Networks?

Unlike standard recurrent neural networks, BRNNs are trained to process both the positive and negative directions of time simultaneously. BRNNs split the neurons of a regular RNN into two directions: one for the forward states (positive time direction) and another for the backward states (negative time direction). The outputs of these states are not connected to the inputs of the opposite-direction states. Because the two time directions are processed simultaneously, input data from both the past and the future of the current time frame can be used to compute each output. This is the opposite of standard recurrent networks, which require an extra layer to include future information.
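
To make the split concrete, the sketch below (plain NumPy, with hypothetical weight matrices and sizes) computes the two state sequences independently, one scanning forward in time and one scanning backward, and combines them only at the output, so each output at time t sees both past and future inputs.

```python
import numpy as np

T, d_in, d_h = 10, 8, 16
x = np.random.randn(T, d_in)
W_f, U_f = np.random.randn(d_h, d_in), np.random.randn(d_h, d_h)   # forward-state weights
W_b, U_b = np.random.randn(d_h, d_in), np.random.randn(d_h, d_h)   # backward-state weights
V = np.random.randn(3, 2 * d_h)                                    # output weights

h_f = np.zeros((T, d_h))   # forward states, computed t = 0 .. T-1
h_b = np.zeros((T, d_h))   # backward states, computed t = T-1 .. 0
for t in range(T):
    prev = h_f[t - 1] if t > 0 else np.zeros(d_h)
    h_f[t] = np.tanh(W_f @ x[t] + U_f @ prev)        # depends only on past inputs
for t in reversed(range(T)):
    nxt = h_b[t + 1] if t < T - 1 else np.zeros(d_h)
    h_b[t] = np.tanh(W_b @ x[t] + U_b @ nxt)         # depends only on future inputs

# Each output combines the forward and backward states at the same time step;
# neither state sequence ever feeds into the other.
y = np.stack([V @ np.concatenate([h_f[t], h_b[t]]) for t in range(T)])
```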