Gating units have become a key component in many types of artificial neural network (ANN) models. These units produce soft 0-1 valued outputs that are used to scale signals from other parts of the network. In recurrent neural networks (RNNs), a key issue is solving the vanishing gradient problem in training, which makes it difficult for standard RNNs to learn long-term information [2]. Gated RNNs such as the long short-term memory (LSTM) model [3] define explicit memory cells whose value updates are controlled by two gating units, with a further gate controlling the update of the hidden state value. The gated recurrent unit (GRU) is an RNN that uses two gates [4].
Recently, highway connections have been proposed to enable a feedforward or a recurrent layer to have an extra nonlinearity by combining its input and output values via gating units [5, 6, 7]. The highway idea has also been applied to connect the memory cells of neighbouring LSTM layers [8]. Furthermore, gating is also useful for convolutional layers [9, 10]. A quasi-RNN uses gates to integrate the outputs of different time steps from a layer shared across time, which can be viewed as a temporal convolutional model since the shared layer serves as a time-invariant filter [10]. All of the models discussed above have been applied to acoustic modelling for speech recognition [11, 12, 13, 14, 15, 16, 17, 18, 19, 20].
Normally a gating unit is defined as a sublayer that outputs a “gating vector” of soft 0-1 values by operating, with full weight matrices, on e.g. the current input values or those from previous layers. This gating vector is often applied to a “candidate vector” that would, for instance in the case of an LSTM, be used to update the memory cell values. Since the calculation of the gating vectors often has a similar functional form to that used to find the candidate vectors, the overall number of parameters and computational complexity of gated models is high [3, 9, 5]. This issue is particularly severe when models use several different gating units.
In this paper we propose an alternative type of unit for gating termed a semi-tied unit (STU), which aims to implement a similar function to the traditional gating unit in a more efficient way. As its name suggests, the key idea of the STU is to share parameters to save computation, while also adding some untied parameters so that the gating units can learn distinct functions. This paper studies the most commonly used gated models, LSTMs and highway networks, which have each of their units implemented based on full weight matrices. In order to reduce the number of matrix multiplications, the STUs share the weights and biases among all the gating and candidate units in the same layer. Meanwhile, additional untied parameter vectors are introduced as component-wise adaptive scaling factors through parameterised activation functions [21, 22], which allows the STUs to generate distinct gating and candidate vectors. Experiments found that using STUs in both LSTMs and highway networks resulted in similar word error rates (WERs) to those based on traditional gating units, while being significantly more efficient.
The rest of the paper is organised as follows. Section 2 reviews the gating mechanism along with LSTMs and highway networks. STUs based on parameterised activation functions are described in Section 3. The experimental setup and results are given in Sections 4 and 5, which are followed by the conclusions.
2 Gating Mechanism
Analogous to an array of logic gates in electronics, an ANN gating unit converts its vector input into a 0-1 valued gating vector. For an LSTM layer at time $t$, the input, forget, and output gating vectors $\mathbf{i}_t$, $\mathbf{f}_t$, and $\mathbf{o}_t$ are computed by
$$\mathbf{i}_t = \sigma(\mathbf{W}_i\mathbf{x}_t + \mathbf{U}_i\mathbf{h}_{t-1} + \mathbf{P}_i\mathbf{c}_{t-1} + \mathbf{b}_i), \quad (1)$$
$$\mathbf{f}_t = \sigma(\mathbf{W}_f\mathbf{x}_t + \mathbf{U}_f\mathbf{h}_{t-1} + \mathbf{P}_f\mathbf{c}_{t-1} + \mathbf{b}_f), \quad (2)$$
$$\mathbf{o}_t = \sigma(\mathbf{W}_o\mathbf{x}_t + \mathbf{U}_o\mathbf{h}_{t-1} + \mathbf{P}_o\mathbf{c}_{t} + \mathbf{b}_o), \quad (3)$$
$$\hat{\mathbf{c}}_t = \tanh(\mathbf{W}_c\mathbf{x}_t + \mathbf{U}_c\mathbf{h}_{t-1} + \mathbf{b}_c), \quad (4)$$
where $\mathbf{x}_t$ and $\mathbf{h}_{t-1}$ are the input and previous hidden state values; $\mathbf{W}$ and $\mathbf{U}$ are the input and recurrent weight matrices; $\mathbf{b}$ and $\mathbf{P}$ are the bias vector and diagonal “peephole” matrix of each unit; and $\sigma_k(\mathbf{a}) = (1 + e^{-a_k})^{-1}$ is the $k$th component of the vector sigmoid function with input activation vector $\mathbf{a}$. Given the $k$th component of the vector hyperbolic tangent function as $\tanh_k(\mathbf{a})$, the gating vectors are used to generate the memory cell value $\mathbf{c}_t$ and hidden state $\mathbf{h}_t$ from the previous memory cell value $\mathbf{c}_{t-1}$ and the current candidate vector $\hat{\mathbf{c}}_t$ by
$$\mathbf{c}_t = \mathbf{f}_t \odot \mathbf{c}_{t-1} + \mathbf{i}_t \odot \hat{\mathbf{c}}_t, \quad (5)$$
$$\mathbf{h}_t = \mathbf{o}_t \odot \tanh(\mathbf{c}_t), \quad (6)$$
where $\odot$ is component-wise multiplication. This manipulates the information flow to simulate the human long short-term memory mechanism, and helps solve the gradient vanishing problem [2].
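As an illustration, the LSTM step above can be sketched in NumPy (a minimal sketch, not the paper's implementation; the parameter container `p`, the function names, and the use of 1-D arrays as vectors are assumptions made here):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_step(x, h_prev, c_prev, p):
    """One LSTM time step with peephole connections, Eqns. (1)-(6).

    For each unit u in {i, f, o, c}: p["W"][u] and p["U"][u] are the input
    and recurrent weight matrices and p["b"][u] the bias; p["peep"][u] is
    the diagonal peephole (stored as a vector) for the three gates.
    """
    a = {u: p["W"][u] @ x + p["U"][u] @ h_prev + p["b"][u] for u in "ifoc"}
    i = sigmoid(a["i"] + p["peep"]["i"] * c_prev)   # input gate, Eqn. (1)
    f = sigmoid(a["f"] + p["peep"]["f"] * c_prev)   # forget gate, Eqn. (2)
    c_hat = np.tanh(a["c"])                         # candidate vector, Eqn. (4)
    c = f * c_prev + i * c_hat                      # memory cell update, Eqn. (5)
    o = sigmoid(a["o"] + p["peep"]["o"] * c)        # output gate, Eqn. (3)
    h = o * np.tanh(c)                              # hidden state, Eqn. (6)
    return h, c
```

Note that four separate pairs of input and recurrent weight matrices are needed, which is the cost the STUs of Section 3 aim to reduce.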
The gating idea can also be used to attenuate the information loss in feedforward layers, to allow the training of very deep models [6]. A highway network refers to a feedforward model with a stack of highway layers [5, 6], with each of them defined as
$$\mathbf{g}^{\mathrm{T}}_l = \sigma(\mathbf{W}^{\mathrm{T}}_l\mathbf{x}_l + \mathbf{b}^{\mathrm{T}}_l), \quad (7)$$
$$\mathbf{g}^{\mathrm{C}}_l = \sigma(\mathbf{W}^{\mathrm{C}}_l\mathbf{x}_l + \mathbf{b}^{\mathrm{C}}_l), \quad (8)$$
$$\mathbf{h}_l = g(\mathbf{W}_l\mathbf{x}_l + \mathbf{b}_l), \quad (9)$$
$$\mathbf{y}_l = \mathbf{g}^{\mathrm{T}}_l \odot \mathbf{h}_l + \mathbf{g}^{\mathrm{C}}_l \odot \mathbf{x}_l, \quad (10)$$
where $\mathbf{y}_l$ is the output of the $l$th layer, $\mathbf{h}_l$ is the candidate vector, $\mathbf{g}^{\mathrm{T}}_l$ and $\mathbf{g}^{\mathrm{C}}_l$ are the gating vectors of the transform and carry gates, and $g(\cdot)$ is an activation function. For recurrent models, the feedforward highway layers can be located in-between the recurrent layers, which results in the recurrent highway network [7]. Furthermore, $\mathbf{g}^{\mathrm{C}}_l$ can be replaced by $1 - \mathbf{g}^{\mathrm{T}}_l$ to save one gating unit in each layer, and Eqn. (10) becomes
$$\mathbf{y}_l = \mathbf{g}^{\mathrm{T}}_l \odot \mathbf{h}_l + (1 - \mathbf{g}^{\mathrm{T}}_l) \odot \mathbf{x}_l. \quad (11)$$
This idea has also been applied to GRUs and quasi-RNNs by coupling the gates of the corresponding state update in the same way [4, 10]. However, since the uncoupled form of Eqn. (10) is found to work better for highway networks [6], it is used throughout this paper.
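A highway layer can be sketched similarly (again a hedged sketch; the argument names and the `coupled` flag are choices made here, with `coupled=True` corresponding to Eqn. (11)):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def highway_layer(x, W_T, b_T, W_C, b_C, W, b, g=np.tanh, coupled=False):
    """One highway layer, Eqns. (7)-(10); input and output sizes match."""
    g_T = sigmoid(W_T @ x + b_T)                              # transform gate, Eqn. (7)
    g_C = 1.0 - g_T if coupled else sigmoid(W_C @ x + b_C)    # carry gate, Eqn. (8) or (11)
    h = g(W @ x + b)                                          # candidate, Eqn. (9)
    return g_T * h + g_C * x                                  # layer output, Eqn. (10)
```

With a strongly negative transform-gate bias the layer starts close to the identity mapping, which is what makes very deep stacks trainable.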
3 Semi-tied Units
From Eqns. (1) – (4) and Eqns. (7) – (9), only a quarter or one third of the parameters (and calculations) are used to generate the candidate vectors in an LSTM layer and a highway layer respectively, while the rest are associated with gating. The efficiency could be improved if there existed a shared “virtual unit” which distinguishes the gating and candidate units by cheaper operations than matrix multiplications. This is reasonable since the units have the same input and functional form. Based on this assumption, the STU is proposed, which represents the “virtual unit” by parameters that are tied across all gating and candidate units, and models the difference between the “virtual unit” and every other unit by a small number of extra untied parameters.
3.1 Parameterised Activation Function for STUs
In LSTMs and highway networks, since the weight matrix multiplications take most of the computation and storage cost, the weight matrices are tied to form the “virtual unit”, and the bias vectors are also tied. The type of the untied parameters is another important design choice in an STU, as they model the differences between the units. This paper uses additional linear factors to scale the output values from the “virtual unit” for this purpose, which is very efficient as it involves only component-wise operations. It is natural to associate such scaling factors with the activation functions, which leads to the use of the parameterised activation functions proposed in [21]. The parameterised sigmoid function with additional parameter vectors $\boldsymbol{\eta}$ and $\boldsymbol{\gamma}$ is denoted $\sigma(\mathbf{a}\,|\,\boldsymbol{\eta},\boldsymbol{\gamma})$ and defined component-wise by
$$\sigma_k(\mathbf{a}\,|\,\boldsymbol{\eta},\boldsymbol{\gamma}) = \eta_k\,\sigma(\gamma_k a_k) = \frac{\eta_k}{1 + e^{-\gamma_k a_k}}, \quad (12)$$
where $\eta_k$ and $\gamma_k$ associate an independent parameter with every output node to scale its output and input values respectively. Note that the scaling by $\eta_k$ can mean that the range of the gating vector values is no longer constrained to 0-1, which can be seen as a generalisation of the original gating mechanism. In order to use STUs for LSTMs and rectified linear unit (ReLU) highway networks, the tanh and ReLU functions are also parameterised as
$$\tanh_k(\mathbf{a}\,|\,\boldsymbol{\eta},\boldsymbol{\gamma}) = \eta_k\tanh(\gamma_k a_k), \quad (13)$$
$$\mathrm{ReLU}_k(\mathbf{a}\,|\,\boldsymbol{\eta},\boldsymbol{\gamma}) = \eta_k\max(\gamma_k a_k, 0), \quad (14)$$
where $k$ indexes the output nodes. Here $\boldsymbol{\eta}$ and $\boldsymbol{\gamma}$ still refer to the output and input value scaling vectors. Other types of parameterised activation functions have also been investigated for both conventional modelling [23, 24, 25, 26] and speaker adaptation [27, 28, 29, 30].
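The three parameterised functions can be written directly in NumPy (a sketch of Eqns. (12)-(14) as reconstructed here; `eta` and `gamma` are the per-node output and input scaling vectors):

```python
import numpy as np

def p_sigmoid(a, eta, gamma):
    """Parameterised sigmoid: eta scales the output, gamma the input activation."""
    return eta / (1.0 + np.exp(-gamma * a))

def p_tanh(a, eta, gamma):
    """Parameterised hyperbolic tangent."""
    return eta * np.tanh(gamma * a)

def p_relu(a, eta, gamma):
    """Parameterised ReLU."""
    return eta * np.maximum(gamma * a, 0.0)
```

Setting `eta = gamma = 1` recovers the standard functions, while e.g. `eta > 1` lets a gating output exceed the usual 0-1 range.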
3.2 STUs for LSTMs and Highway Networks
3.2.1 STU based LSTMs
As discussed before, the weights and biases are tied across all gating and candidate units when using STUs. The shared part, or the “virtual unit”, produces the common input activation values by
$$\mathbf{a}_t = \mathbf{W}\mathbf{x}_t + \mathbf{U}\mathbf{h}_{t-1} + \mathbf{P}\mathbf{c}_{t-1} + \mathbf{b}. \quad (15)$$
Hence the input and recurrent weight matrices of the four units are tied to $\mathbf{W}$ and $\mathbf{U}$ respectively; the bias vectors and diagonal “peephole” matrices are tied to $\mathbf{b}$ and $\mathbf{P}$; and each unit then applies its own parameterised activation function to $\mathbf{a}_t$. Let $n_i$ and $n_o$ be the sizes of $\mathbf{x}_t$ and $\mathbf{h}_t$; STUs reduce the computation and storage complexities from $O(4\,n_o(n_i+n_o))$ to $O(n_o(n_i+n_o))$. Compared to the projected LSTM (LSTMP), whose recurrent matrices $\mathbf{U}_i$, $\mathbf{U}_f$, $\mathbf{U}_o$, and $\mathbf{U}_c$ are factorised via a shared projection matrix, the STU based LSTM can be even more efficient than an LSTMP with a typical projection size. The LSTMP also falls into the STU framework, since its shared projection matrix can be viewed as a parameter tied across the four units.
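A single STU based LSTM step can then be sketched as follows (hedged: the shared-activation form, including where the tied peephole is applied, is an assumption of this sketch; `act[u]` holds the untied `(eta, gamma)` pair of unit `u`):

```python
import numpy as np

def p_sigmoid(a, eta, gamma):
    return eta / (1.0 + np.exp(-gamma * a))

def stu_lstm_step(x, h_prev, c_prev, W, U, P, b, act):
    """STU based LSTM step: one shared affine transform (the "virtual
    unit", Eqn. (15)) plus cheap per-unit (eta, gamma) scalings."""
    a = W @ x + U @ h_prev + P * c_prev + b         # shared activation, Eqn. (15)
    i = p_sigmoid(a, *act["i"])                     # input gate
    f = p_sigmoid(a, *act["f"])                     # forget gate
    o = p_sigmoid(a, *act["o"])                     # output gate
    c_hat = act["c"][0] * np.tanh(act["c"][1] * a)  # candidate vector
    c = f * c_prev + i * c_hat                      # Eqn. (5)
    h = o * np.tanh(c)                              # Eqn. (6)
    return h, c
```

Only one pair of weight matrices (`W`, `U`) is stored and multiplied per step, instead of the four pairs of a standard LSTM, which is where the factor-of-four saving comes from.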
3.2.2 STU based Highway Networks
Similar to the STU based LSTM case, the “virtual unit” output value is also shared among all gating units and the candidate unit in the STU based highway network. By tying both weights and biases, we have
$$\mathbf{a}_l = \mathbf{W}_l\mathbf{x}_l + \mathbf{b}_l, \quad (16)$$
which is equal to the shared input activation values. This ties all weight matrices and bias vectors together, and reduces the calculation and storage complexities from $O(3\,n^2)$ to $O(n^2)$, where $n$ is the layer size. If $g(\cdot)$ is ReLU, Eqn. (9) is then replaced by
$$\mathbf{h}_l = \boldsymbol{\eta}_l \odot \max(\mathbf{a}_l, \mathbf{0}), \quad (17)$$
where $\boldsymbol{\eta}_l$ is the ReLU output scaling factor vector. Note that in both STU based LSTMs and highway networks, almost all parameters and calculations are used for candidate vector generation.
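The corresponding STU based ReLU highway layer is equally short (a sketch under the same assumptions; `act` holds the untied `(eta, gamma)` pairs of the two gates and `eta_h` is the candidate's ReLU output scaling vector):

```python
import numpy as np

def p_sigmoid(a, eta, gamma):
    return eta / (1.0 + np.exp(-gamma * a))

def stu_highway_layer(x, W, b, act, eta_h):
    """STU based highway layer: one shared affine transform, Eqn. (16)."""
    a = W @ x + b                          # shared input activation, Eqn. (16)
    g_T = p_sigmoid(a, *act["T"])          # transform gate
    g_C = p_sigmoid(a, *act["C"])          # carry gate
    h = eta_h * np.maximum(a, 0.0)         # scaled ReLU candidate, Eqn. (17)
    return g_T * h + g_C * x               # combine as in Eqn. (10)
```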
3.3 Training STUs
3.3.1 Training Activation Function Parameters
To train STUs by error back propagation, the derivatives of the parameterised activation functions w.r.t. the function parameters and input activation values are required [21]. Let $\eta_k$, $\gamma_k$, and $a_k$ be the $k$th components of $\boldsymbol{\eta}$, $\boldsymbol{\gamma}$, and $\mathbf{a}$; then for the parameterised sigmoid function,
$$\frac{\partial \sigma_k(\mathbf{a}\,|\,\boldsymbol{\eta},\boldsymbol{\gamma})}{\partial \eta_k} = \sigma(\gamma_k a_k),$$
$$\frac{\partial \sigma_k(\mathbf{a}\,|\,\boldsymbol{\eta},\boldsymbol{\gamma})}{\partial \gamma_k} = \eta_k\,a_k\,\sigma(\gamma_k a_k)\left(1 - \sigma(\gamma_k a_k)\right),$$
$$\frac{\partial \sigma_k(\mathbf{a}\,|\,\boldsymbol{\eta},\boldsymbol{\gamma})}{\partial a_k} = \eta_k\,\gamma_k\,\sigma(\gamma_k a_k)\left(1 - \sigma(\gamma_k a_k)\right).$$
Similarly, for the parameterised tanh function, we have
$$\frac{\partial \tanh_k(\mathbf{a}\,|\,\boldsymbol{\eta},\boldsymbol{\gamma})}{\partial \eta_k} = \tanh(\gamma_k a_k),$$
$$\frac{\partial \tanh_k(\mathbf{a}\,|\,\boldsymbol{\eta},\boldsymbol{\gamma})}{\partial \gamma_k} = \eta_k\,a_k\left(1 - \tanh^2(\gamma_k a_k)\right),$$
$$\frac{\partial \tanh_k(\mathbf{a}\,|\,\boldsymbol{\eta},\boldsymbol{\gamma})}{\partial a_k} = \eta_k\,\gamma_k\left(1 - \tanh^2(\gamma_k a_k)\right).$$
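The analytic derivatives above can be checked numerically with central finite differences (a self-contained sketch for the scalar case; the function names are choices made here):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def p_sigmoid_grads(a, eta, gamma):
    """Derivatives of eta * sigmoid(gamma * a) w.r.t. eta, gamma, and a."""
    s = sigmoid(gamma * a)
    return s, eta * a * s * (1 - s), eta * gamma * s * (1 - s)

# central finite-difference check of all three derivatives
a, eta, gamma, eps = 0.7, 1.3, 0.8, 1e-6
f = lambda a, eta, gamma: eta * sigmoid(gamma * a)
d_eta, d_gamma, d_a = p_sigmoid_grads(a, eta, gamma)
assert abs(d_eta - (f(a, eta + eps, gamma) - f(a, eta - eps, gamma)) / (2 * eps)) < 1e-5
assert abs(d_gamma - (f(a, eta, gamma + eps) - f(a, eta, gamma - eps)) / (2 * eps)) < 1e-5
assert abs(d_a - (f(a + eps, eta, gamma) - f(a - eps, eta, gamma)) / (2 * eps)) < 1e-5
```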
3.3.2 Normalising the Gradients of the Tied Parameters
When training the shared parameters in the “virtual unit” of an STU, e.g. the tied input weight matrix $\mathbf{W}$ of an STU based LSTM, the gradient accumulates the contributions from all of the units that share it:
$$\frac{\partial F}{\partial \mathbf{W}} = \sum_{t}\frac{\partial F}{\partial \mathbf{a}_t}\,\mathbf{x}_t^{\top}, \qquad \frac{\partial F}{\partial \mathbf{a}_t} = \sum_{u \in \{\mathbf{i}_t,\,\mathbf{f}_t,\,\mathbf{o}_t,\,\hat{\mathbf{c}}_t\}} \frac{\partial F}{\partial u} \odot \frac{\partial u}{\partial \mathbf{a}_t},$$
where $F$ is the training criterion. Note that the gradients of $\mathbf{U}$, $\mathbf{b}$, and $\mathbf{P}$ are calculated in the same way. In addition, to use the same hyper-parameters (e.g. learning rate) for both tied and untied parameters in training, the gradients of the unfolded LSTM layer parameters are further divided by the number of unfolded steps [22] (here 20 unfolded steps are used).
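The tied-gradient accumulation and the unfold normalisation can be sketched as follows (a hypothetical helper; `dF_da_units[t][u]` is assumed to already hold the product of the error signal of unit `u` with the derivative of its parameterised activation at time `t`):

```python
import numpy as np

def tied_weight_grad(x_seq, dF_da_units, n_unfold=20):
    """Gradient of the tied input weight matrix W of an STU based LSTM.

    The contributions of all units sharing the activation a_t are summed,
    the outer product with x_t is accumulated over the unfolded steps, and
    the total is divided by the number of unfolded steps so that the same
    learning rate can be used for tied and untied parameters.
    """
    grad = sum(np.outer(sum(dF_da_units[t].values()), x_seq[t])
               for t in range(len(x_seq)))
    return grad / n_unfold
```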
4 Experimental Setup
The proposed STU based LSTM and highway models were evaluated on multi-genre broadcast (MGB) data from the MGB3 speech recognition challenge task [31, 32]. The audio is from BBC TV programmes covering a wide range of genres. A 275 hour (275h) full training set was selected from 750 episodes, where the training labels were derived from the subtitles with a phone matched error rate compared to the lightly supervised output [33]. A 55 hour (55h) subset was uniformly sampled at the utterance level from the 275h set. A 63k word vocabulary [34] was used with a trigram word language model (LM) estimated from both the training labels and an extra 640 million word MGB subtitle archive. The test set, dev17b, contains 5.55 hours of audio data and 5,201 manually segmented utterances from 14 episodes of 13 shows. System outputs were evaluated with 1-best Viterbi decoding as well as confusion network (CN) decoding [35, 36].
The input feature coefficients were normalised at the utterance level for mean and at the show-segment level for variance. All models were trained as hybrid system acoustic models [40, 41, 42] by stochastic gradient descent based on the cross-entropy criterion, with the data shuffled at the frame level in 800 sample minibatches. About 6k/9k decision tree clustered triphone tied-states along with appropriate training alignments were used for the 55h/275h training sets. The NewBob learning rate scheduler [39, 22] was used for all models with the setup from our previous MGB systems [37]. Weight decay factors were carefully tuned to maximise the performance of each system. More details about the LSTM implementation and training configuration can be found in [43, 44].
5 Experimental Results
5.1 Experiments on 55 Hour Data Set
5.1.1 LSTM Experiments
The experiments started by investigating different STU settings with LSTMs. All 55h LSTMs had one feedforward hidden layer placed between the LSTM layers and the output layer. Two baseline systems with one LSTM layer (1L), L and L, were trained, where L was a standard LSTM and L was an LSTMP with a reduced projection size. STU based LSTM systems with different settings were then constructed: L followed the configuration in Section 3.2.1; L used fixed scaling factors; L had untied “peephole” matrices; L had untied bias vectors. From Table 1, L had slightly higher word error rates (WERs) than L, which showed that generalising the gating functions by learning the scaling factors was useful. L outperformed L due to the use of distinct “peepholes”, but this also increased the training difficulty and was not used subsequently. L used untied bias vectors, and was found only to improve the convergence speed, by producing better criterion values in the early epochs of training and similar values at the end.
To understand the STUs in L, the three units used in Eqn. (5), the input gate, the forget gate, and the candidate unit, are shown in Fig. 1. By ignoring the “peephole” and bias terms, each unit can be approximately evaluated as a function of a scalar input activation through its parameterised activation function. From Fig. 1, it can be seen that the input gate and forget gate follow roughly opposite trends, which coincides with replacing the carry gate by one minus the transform gate in Eqn. (11). The candidate vector values lie in between those from the other two units, and are more similar to the input gate values, since the two are multiplied together in Eqn. (5). This shows that STUs can still learn reasonable gating functions with the additional untied parameters.
Comparing L with L and L: while producing similar WERs, L, L, and L had 0.29 million (M), 1.16M, and 0.79M parameters in the LSTM layer respectively. Hence, the use of STUs can reduce calculation and storage by a factor of about four without increasing the WER. LSTM systems with two stacked recurrent layers (2L) were also investigated, and the STU based system L still gave similar WERs to the LSTMP L.
5.1.2 Highway Experiments
STUs were also used for both sigmoid and ReLU highway networks, and the results are listed in Table 2. For sigmoid models, the 7 layer (7L) highway network S had a 4.2% relative WER reduction (WERR) over the 7L deep neural network (DNN) S. S and S had 4.83M and 1.61M hidden layer parameters respectively. The STU based highway model, S, had almost the same WERs as the standard highway network and about the same number of parameters as a DNN. The use of STUs retained the WER reduction obtained from highway connections while increasing the number of hidden layer parameters by only 1.1%, rather than by 200% as with the standard highway model. The 15 layer (15L) DNN S gave a 3.9% WERR over the 7L DNN S. Both the standard and STU based highway systems, S and S, resulted in WERRs of 3.5% and 4.1% over S, while using 6.51M and 0.04M extra parameters respectively.
The same experiments were repeated for the ReLU systems. The 7L DNN baseline R gave a 4.7% WERR over S. 2.4% and 3.3% WERRs were obtained by using standard and STU based highway connections respectively. The 15L ReLU DNN, R, outperformed R by a 2.4% WERR, and the corresponding highway models R and R both outperformed R by about a 2% WERR. This showed that the STU idea is also applicable to ReLU. Note that the ReLU systems obtained smaller improvements from highway connections than the sigmoid systems, which is reasonable since ReLUs suffer less from information attenuation than sigmoids.
5.2 Experiments on 275 Hour Data Set
In order to ensure that the 55h results and findings scale to a significantly larger training set, some selected LSTM and highway networks were built on the full 275h set. The hidden layer size and LSTMP projection size were increased to 1000 and 500 respectively, which approximately quadrupled the number of parameters to better model the full training set. From Table 3, the 2L LSTMP L gave a 3% WERR over the 1L LSTMP L. Compared to S and R, the sigmoid and ReLU highway networks S and R had 4.0% and 3.7% WERRs respectively, by increasing the model depths from 7L to 15L. All of the STU based LSTM and sigmoid/ReLU highway systems produced similar WERs to their corresponding conventional LSTMP and highway networks, while using fewer than 40% of the parameters in the hidden layers. This validates, on a larger data set, our previous finding that the proposed STU can work as well as the widely used traditional gating units with far fewer parameters. The STU approach is a highly efficient way to perform general gating and information merging, and can also be applied to other gated models, such as GRUs, recurrent highway networks, quasi-RNNs, and highway LSTMs.
6 Conclusions
This paper proposed the use of STUs for efficient gating in LSTMs and feedforward highway networks for acoustic modelling. The weight matrices and bias vectors of all units in the same target layer are tied together to save calculation and storage space, and additional linear input/output value scaling factors are associated with the activation functions of each hidden node individually, in order to learn distinct functions for all gating and candidate units. Experiments on both the 55h and 275h MGB data sets found that STU based LSTMs and highway networks produced similar WERs to the corresponding models with traditional gating units, while being several times more efficient. It was also shown that STUs learn reasonable gating functions, using only a few thousand extra untied parameters in each sublayer for gating and candidate vector generation.
-  Y. Bengio, P. Simard, & P. Frasconi, “Learning long-term dependencies with gradient descent is difficult”, IEEE Transactions on Neural Networks, vol. 5, pp. 157–166, 1994.
-  S. Hochreiter & J. Schmidhuber, “Long short-term memory”, Neural Computation, vol. 9, pp. 1735–1780, 1997.
-  J. Chung, C. Gulcehre, K.H. Cho, & Y. Bengio, “Empirical evaluation of gated recurrent neural networks on sequence modeling”, arXiv.org, 1412.3555, 2014.
-  R.K. Srivastava, K. Greff, & J. Schmidhuber, “Highway networks”, arXiv.org, 1505.00387, 2015.
-  R.K. Srivastava, K. Greff, & J. Schmidhuber, “Training very deep networks”, Advances in NIPS 28, Montreal, 2015.
-  J.G. Zilly, R.K. Srivastava, J. Koutník, & J. Schmidhuber, “Recurrent highway networks”, arXiv.org, 1607.03474, 2016.
-  K. Yao, T. Cohn, K. Vylomova, K. Duh, & C. Dyer, “Depth-gated LSTM”, arXiv.org, 1508.03790, 2015.
-  X. Shi, Z. Chen, H. Wang, D.-Y. Yeung, W.-K. Wong, & W.-C. Woo, “Convolutional LSTM network: A machine learning approach for precipitation nowcasting”, Advances in NIPS 28, Montreal, 2015.
-  J. Bradbury, S. Merity, C. Xiong, & R. Socher, “Quasi-recurrent neural networks”, Proc. ICLR, Toulon, 2017.
-  A. Graves, M. Liwicki, S. Fernandez, R. Bertolami, H. Bunke, & J. Schmidhuber, “A novel connectionist system for unconstrained handwriting recognition”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, pp. 855–868, 2009.
-  A. Graves, A.-R. Mohamed, & G. Hinton, “Speech recognition with deep recurrent neural networks”, Proc. ICASSP, Vancouver, 2013.
-  H. Sak, A. Senior, & F. Beaufays, “Long short-term memory recurrent neural network architectures for large scale acoustic modeling”, Proc. Interspeech, Singapore, 2014.
-  H. Sak, A. Senior, K. Rao, & F. Beaufays, “Fast and accurate recurrent neural network acoustic models for speech recognition”, Proc. Interspeech, Dresden, 2015.
-  Y. Zhang, G. Chen, D. Yu, K. Yao, S. Khudanpur, & J. Glass, “Highway long short-term memory RNNs for distant speech recognition”, Proc. ICASSP, Shanghai, 2016.
-  L. Lu, X. Zhang, & S. Renals, “On training the recurrent neural network encoder-decoder for large vocabulary end-to-end speech recognition”, Proc. ICASSP, Shanghai, 2016.
-  G. Pundak & T.N. Sainath, “Highway-LSTM and recurrent highway networks for speech recognition”, Proc. Interspeech, Stockholm, 2017.
-  L. Lu & S. Renals, “Small-footprint deep neural networks with highway connections for speech recognition”, Proc. Interspeech, San Francisco, 2016.
-  Y. Zhang, W. Chan, & N. Jaitly, “Very deep convolutional networks for end-to-end speech recognition”, Proc. ICASSP, New Orleans, 2017.
-  L. Tao, Y. Zhang, & Y. Artzi, “Training RNNs as fast as CNNs”, arXiv.org, 1709.02755, 2017.
-  C. Zhang & P.C. Woodland, “Parameterised sigmoid and ReLU hidden activation functions for DNN acoustic modelling”, Proc. Interspeech, Dresden, 2015.
-  C. Zhang, Joint Training Methods for Tandem and Hybrid Speech Recognition Systems using Deep Neural Networks, Ph.D. thesis, University of Cambridge, Cambridge, UK, 2017.
-  S.L. Goh & D.P. Mandic, “Recurrent neural networks with trainable amplitude of activation functions”, Neural Networks, vol. 16, pp. 1095–1100, 2003.
-  S.M. Siniscalchi, T. Svendsen, F. Sorbello, & C.-H. Lee, “Experimental studies on continuous speech recognition using neural architectures with “adaptive” hidden activation functions”, Proc. ICASSP, Dallas, 2010.
-  K. He, X. Zhang, S. Ren, & J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification”, Proc. ICCV, Santiago, 2015.
-  Z. Tüske, M. Sundermeyer, R. Schlüter, & H. Ney, “Integrating Gaussian mixtures into deep neural networks: Softmax layer with hidden variables”, Proc. ICASSP, Brisbane, 2015.
-  S.M. Siniscalchi, J. Li, & C.-H. Lee, “Hermitian polynomial for speaker adaptation of connectionist speech recognition systems”, IEEE Transactions on Audio, Speech, and Language Processing, vol. 21, pp. 2152–2161, 2013.
-  Y. Zhao, J. Li, J. Xue, & Y. Gong, “Investigating online low-footprint speaker adaptation using generalized linear regression and click-through data”, Proc. ICASSP, Brisbane, 2015.
-  P. Swietojanski, J. Li, & S. Renals, “Learning hidden unit contributions for unsupervised acoustic model adaptation”, IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 24, pp. 1450–1463, 2016.
-  C. Zhang & P.C. Woodland, “DNN speaker adaptation using parameterised sigmoid and ReLU hidden activation functions”, Proc. ICASSP, Shanghai, 2016.
-  http://www.mgb-challenge.org
-  P. Bell, M.J.F. Gales, T. Hain, J. Kilgour, P. Lanchantin, X. Liu, A. McParland, S. Renals, O. Saz, M. Wester, & P.C. Woodland, “The MGB challenge: Evaluating multi-genre broadcast media transcription”, Proc. ASRU, Scottsdale, 2015.
-  P. Lanchantin, M.J.F. Gales, P. Karanasou, X. Liu, Y. Qian, L. Wang, P.C. Woodland, & C. Zhang, “Selection of Multi-Genre Broadcast data for the training of automatic speech recognition systems”, Proc. Interspeech, San Francisco, 2016.
-  K. Richmond, R. Clark, & S. Fitt, “On generating Combilex pronunciations via morphological analysis”, Proc. Interspeech, Makuhari, 2010.
-  L. Mangu, E. Brill, & A. Stolcke, “Finding consensus in speech recognition: Word error minimization and other applications of confusion networks”, Computer Speech & Language, vol. 14, pp. 373–400, 2000.
-  G. Evermann & P. Woodland, “Large vocabulary decoding and confidence estimation using word posterior probabilities”, Proc. ICASSP, Istanbul, 2000.
-  P.C. Woodland, X. Liu, Y. Qian, C. Zhang, M.J.F. Gales, P. Karanasou, P. Lanchantin, & L. Wang, “Cambridge University transcription systems for the Multi-Genre Broadcast challenge”, Proc. ASRU, Scottsdale, 2015.
-  S. Young, G. Evermann, M. Gales, T. Hain, D. Kershaw, X. Liu, G. Moore, J. Odell, D. Ollason, D. Povey, A. Ragni, V. Valtchev, P. Woodland, & C. Zhang, The HTK Book (for HTK version 3.5), Cambridge University Engineering Department, 2015.
-  C. Zhang & P.C. Woodland, “A general artificial neural network extension for HTK”, Proc. Interspeech, Dresden, 2015.
-  H.A. Bourlard & N. Morgan, Connectionist Speech Recognition: A Hybrid Approach, Kluwer Academic Publishers, Norwell, MA, USA, 1993.
-  G.E. Dahl, D. Yu, L. Deng, & A. Acero, “Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition”, IEEE Transactions on Audio, Speech, and Language Processing, vol. 20, pp. 30–42, 2012.
-  G. Saon, H. Soltau, A. Emami, & M. Picheny, “Unfolded recurrent neural networks for speech recognition”, Proc. Interspeech, Singapore, 2014.
-  C. Zhang & P.C. Woodland, “High order recurrent neural networks for acoustic modelling”, Proc. ICASSP, Calgary, 2018.
-  F.L. Kreyssig, C. Zhang, & P.C. Woodland, “Improved TDNNs using deep kernels and frequency dependent Grid-RNNs”, Proc. ICASSP, Calgary, 2018.