1 Introduction
During the last decade, deep neural networks (DNNs) have encountered wide success in automatic speech recognition. Many architectures such as recurrent neural networks (RNNs) sak2014long ; hinton2012deep ; abdel2012applying ; mirco2017timit ; greff2017lstm , time-delay neural networks (TDNNs) waibel1990phoneme ; peddinti2015time , or convolutional neural networks (CNNs) zhang2017towards have been proposed, and have achieved better performance than traditional hidden Markov models (HMMs) combined with Gaussian mixture models (GMMs) on different speech recognition tasks. However, despite this evolution of models and paradigms, the acoustic feature representation remains almost the same. The acoustic signal is commonly split into time-frames, for which Mel-filter bank energies or Mel-frequency cepstral coefficients (MFCCs) davis1990comparison are extracted, alongside their first- and second-order derivatives. Time-frames are thus characterized by features that represent three different views of the same basic element. Consequently, an efficient neural-network-based model has to learn both the external dependencies between time-frames and the internal relations within the features. Traditional real-valued architectures deal with both dependencies at the same level, due to the lack of a dedicated mechanism to learn the internal and external relations separately.

Quaternions are hypercomplex numbers that contain a real and three separate imaginary components, fitting perfectly to three- and four-dimensional feature vectors, such as for image processing and robot kinematics
sangwine1996fourier ; pei1999color ; aspragathos1998comparative . The idea of bundling groups of numbers into separate entities is also exploited by the recent capsule networks hinton2017capsule . Contrary to traditional homogeneous representations, capsule and quaternion neural networks bundle sets of features together. Thereby, quaternion numbers allow neural models to code latent interdependencies between groups of input features during the learning process, with up to four times fewer parameters than real-valued neural networks, by taking advantage of the Hamilton product as the equivalent of the dot product between quaternions. Early applications of quaternion-valued backpropagation algorithms arena1994neural ; arena1997multilayer have efficiently solved quaternion function approximation tasks. More recently, neural networks of complex and hypercomplex numbers have received increasing attention hirose2012generalization ; tygert2016mathematical ; danihelka2016associative ; wisdom2016full , and some efforts have shown promising results in different applications. In particular, deep quaternion networks parcollet2016quaternion ; parcollet2017deep ; parcollet2017quaternion and deep quaternion convolutional networks chase2017quat ; parcollet2018quaternion have been successfully employed for challenging tasks such as image and language processing.

Contributions: This paper proposes to evaluate previously investigated quaternion-valued models in two different realistic speech recognition settings, to see whether the quaternion encoding of the signal, alongside the quaternion algebra and the important parameter reduction, helps to better capture the nature of the acoustic signal, leading to a more expressive representation of the information. Based on the TIMIT garofolo1993darpa
phoneme recognition task, a quaternion convolutional neural network (QCNN) is compared to a real-valued CNN in an end-to-end framework, and a quaternion recurrent neural network (QRNN) is compared to an RNN within a more traditional HMM-based system. In the end-to-end approach, the experiments show that the QCNN outperforms the CNN with a phoneme error rate (PER) of 19.5% against the 20.6% achieved by the CNN. Moreover, the QRNN outperforms the RNN with a PER of 18.5% against 19.0% for the RNN. Furthermore, such results are observed with a maximum reduction factor of the number of neural network parameters of 3.96 times.

2 Motivations
A major challenge of current machine learning models is to obtain efficient representations of relevant information for solving a specific task. Consequently, a good model has to efficiently code both the relations that occur at the feature level, such as between the Mel filter energies and the first- and second-order derivative values of a single time-frame, and those at a global level, such as phonemes or words described by a group of time-frames. Moreover, to avoid overfitting, to better generalize, and to be more efficient, such models also have to be as small as possible. Nonetheless, real-valued neural networks usually require a huge set of parameters to perform well on speech recognition tasks, and hardly code internal dependencies within the features, since these are considered at the same level as global dependencies during the learning. In the following, we detail the motivations for employing quaternion-valued neural networks instead of real-valued ones to code inter- and intra-feature dependencies with fewer parameters.
First, a better representation of multidimensional data has to be explored to naturally capture the internal relations within the input features. For example, an efficient way to represent the information composing an acoustic signal sequence is to consider each time-frame as a whole entity of three strongly related elements, instead of a group of unidimensional elements that could be related to each other, as in traditional real-valued neural networks. Indeed, with a real-valued NN, the latent relations between the Mel filter bank energies and the first- and second-order derivatives of a given time-frame are hardly coded in the latent space, since the weights have to find out these relations among all the time-frames composing the sequence. Quaternions are four-dimensional entities and allow one to build and process elements made of up to four related components, mitigating the above-described problem. Indeed, the quaternion algebra, and more precisely the Hamilton product, allows quaternion neural networks to capture these internal latent relations within the features of a quaternion. It has been shown that QNNs are able to restore the spatial relations within 3D coordinates matsui2004quaternion , and within color pixels isokawa2003quaternion , while real-valued NNs failed. In fact, the quaternion-weight components are shared through multiple quaternion input parts during the Hamilton product, creating relations within the elements. Indeed, Figure 1 shows that the multiple weights required to code latent relations within a feature are considered at the same level as those learning global relations between different features (left), while the quaternion weight codes these internal relations within a unique quaternion during the Hamilton product (right).
Second, quaternion neural networks make it possible to deal with the same signal dimension as real-valued NNs, but with four times fewer neural parameters. Indeed, a 4-number quaternion weight linking two 4-number quaternion units only has 4 degrees of freedom, whereas a standard neural net parametrization has 4 × 4 = 16, i.e., a 4-fold saving in memory. Therefore, the natural multidimensional representation of quaternions, alongside their ability to drastically reduce the number of parameters, indicates that hypercomplex numbers are a better fit than real numbers to create more efficient models in multidimensional spaces such as speech recognition.
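As a quick sanity check of this 4-fold saving, the parameter counts of a real-valued and a quaternion-valued dense layer can be compared directly (the layer size of 256 quaternion units is an arbitrary choice for illustration):

```python
def dense_params(n_in, n_out):
    """Weight count of a real-valued dense layer (biases omitted)."""
    return n_in * n_out

def quaternion_dense_params(n_in, n_out):
    """Each quaternion weight stores 4 real numbers, but a single quaternion
    weight connects a whole 4-number input unit to a 4-number output unit."""
    return n_in * n_out * 4

# 256 quaternion units carry the same signal dimension as 1,024 real values.
real = dense_params(4 * 256, 4 * 256)
quat = quaternion_dense_params(256, 256)
print(real // quat)  # -> 4
```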
Indeed, modern automatic speech recognition systems usually employ input sequences composed of multidimensional acoustic features, such as log Mel features, that are often enriched with their first, second, and third time derivatives davis1990comparison ; furui1986speaker to integrate contextual information. In standard NNs, static features are simply concatenated with their derivatives to form a large input vector, without effectively considering that the signal derivatives represent different views of the same input. Nonetheless, it is crucial to consider that these descriptors represent different views of the same time-frame state, and are thus correlated. Following the above motivations and the results observed in previous works on quaternion neural networks, we hypothesize that for acoustic data, quaternion NNs naturally provide a more suitable representation of the input sequence, since these multiple views can be directly embedded in the multiple dimensions of the quaternion, leading to smaller and more accurate models.
3 Quaternion Neural Networks
Real-valued neural network architectures are extended to the quaternion domain to benefit from its capacities. Therefore, this section introduces the quaternion algebra (Section 3.1), the quaternion internal representation (Section 3.2), quaternion convolutional neural networks (QCNN, Section 3.3) and quaternion recurrent neural networks (QRNN, Section 3.4).
3.1 Quaternion Algebra
The quaternion algebra defines operations between quaternion numbers. A quaternion Q is an extension of a complex number defined in a four-dimensional space as:

$Q = r_1 + r_2\textbf{i} + r_3\textbf{j} + r_4\textbf{k}$,  (1)

where $r_1$, $r_2$, $r_3$, and $r_4$ are real numbers, and $1$, $\textbf{i}$, $\textbf{j}$, and $\textbf{k}$ are the quaternion unit basis. In a quaternion, $r_1$ is the real part, while $r_2\textbf{i} + r_3\textbf{j} + r_4\textbf{k}$, with $\textbf{i}^2 = \textbf{j}^2 = \textbf{k}^2 = \textbf{i}\textbf{j}\textbf{k} = -1$, is the imaginary part, or the vector part. Such a definition can be used to describe spatial rotations. The information embedded in the quaternion Q can be summarized into the following matrix of real numbers, which turns out to be more suitable for computations:
$Q_{mat} = \begin{bmatrix} r_1 & -r_2 & -r_3 & -r_4 \\ r_2 & r_1 & -r_4 & r_3 \\ r_3 & r_4 & r_1 & -r_2 \\ r_4 & -r_3 & r_2 & r_1 \end{bmatrix}$.  (2)
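As an illustration, this matrix can be built in a few lines of NumPy (a helper of ours, with the quaternion stored as an array (r1, r2, r3, r4)); a handy property follows directly: the matrix of the conjugate is the transpose of the matrix of Q.

```python
import numpy as np

def quat_to_real_matrix(q):
    """Real 4x4 matrix representation of the quaternion r1 + r2 i + r3 j + r4 k."""
    r1, r2, r3, r4 = q
    return np.array([
        [r1, -r2, -r3, -r4],
        [r2,  r1, -r4,  r3],
        [r3,  r4,  r1, -r2],
        [r4, -r3,  r2,  r1],
    ])

q = np.array([1.0, 2.0, 3.0, 4.0])
M = quat_to_real_matrix(q)
conjugate = q * np.array([1, -1, -1, -1])
assert np.allclose(quat_to_real_matrix(conjugate), M.T)  # conjugation = transposition
```

Note also that the first column of the matrix is the quaternion itself, so multiplying the matrix by the unit quaternion recovers Q.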
The conjugate $Q^*$ of $Q$ is defined as:

$Q^* = r_1 - r_2\textbf{i} - r_3\textbf{j} - r_4\textbf{k}$.  (3)
Then, a normalized, or unit, quaternion $Q^\triangleleft$ is expressed as:

$Q^\triangleleft = \frac{Q}{\sqrt{r_1^2 + r_2^2 + r_3^2 + r_4^2}}$.  (4)
Finally, the Hamilton product $\otimes$ between two quaternions $Q_1 = r_1 + r_2\textbf{i} + r_3\textbf{j} + r_4\textbf{k}$ and $Q_2 = r'_1 + r'_2\textbf{i} + r'_3\textbf{j} + r'_4\textbf{k}$ is computed as follows:

$Q_1 \otimes Q_2 = (r_1 r'_1 - r_2 r'_2 - r_3 r'_3 - r_4 r'_4) + (r_1 r'_2 + r_2 r'_1 + r_3 r'_4 - r_4 r'_3)\textbf{i} + (r_1 r'_3 - r_2 r'_4 + r_3 r'_1 + r_4 r'_2)\textbf{j} + (r_1 r'_4 + r_2 r'_3 - r_3 r'_2 + r_4 r'_1)\textbf{k}$.  (5)
The Hamilton product is used in QRNNs to perform transformations of vectors representing quaternions, as well as scaling and interpolation between two rotations following a geodesic over a sphere in the $\mathbb{R}^4$ space, as shown in minemoto2017feed .

3.2 Quaternion internal representation
In a quaternion layer, all parameters are quaternions, including inputs, outputs, weights, and biases. The quaternion algebra is ensured by manipulating matrices of real numbers parcollet2018quaternion . Consequently, for each input vector of size $N$ and output vector of size $M$, the dimensions are split into four parts: the first one equals $r_1$, the second $r_2$, the third $r_3$, and the last one $r_4$, composing a quaternion $Q = r_1 + r_2\textbf{i} + r_3\textbf{j} + r_4\textbf{k}$. In the real-valued space, the inference process is based on the dot product between input features and weight matrices. In any quaternion-valued NN, this operation is replaced with the Hamilton product (Eq. 5) with quaternion-valued matrices (i.e., each entry in the weight matrix is a quaternion).
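Under this convention, the forward pass of a quaternion dense layer replaces each scalar multiply-accumulate with a Hamilton product. A naive, unvectorized sketch follows (names and shapes are our own; practical implementations rely on the real-matrix representation instead, for speed):

```python
import numpy as np

def hamilton(q, p):
    """Hamilton product of two quaternions stored as length-4 arrays."""
    a1, b1, c1, d1 = q
    a2, b2, c2, d2 = p
    return np.array([a1*a2 - b1*b2 - c1*c2 - d1*d2,
                     a1*b2 + b1*a2 + c1*d2 - d1*c2,
                     a1*c2 - b1*d2 + c1*a2 + d1*b2,
                     a1*d2 + b1*c2 - c1*b2 + d1*a2])

def quaternion_linear(x, W):
    """x: (n, 4) input quaternions, W: (m, n, 4) quaternion weights.
    Output i is the sum over j of W[i, j] ⊗ x[j] (biases omitted)."""
    m, n, _ = W.shape
    out = np.zeros((m, 4))
    for i in range(m):
        for j in range(n):
            out[i] += hamilton(W[i, j], x[j])
    return out
```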
3.3 Quaternion convolutional neural networks
Convolutional neural networks (CNNs) lecun1999object have been proposed to capture the high-level relations that occur between neighbouring features, such as shapes and edges in an image. However, internal dependencies within the features are considered at the same level as these high-level relations by real-valued CNNs, and it is thus not guaranteed that they are well captured. To this end, a quaternion convolutional neural network (QCNN) has been proposed by chase2017quat ; parcollet2018quaternion ^{1}^{1}1https://github.com/Orkis-Research/Pytorch-Quaternion-Neural-Networks. Let $\gamma_{ab}^{l}$ and $S_{ab}^{l}$ be the quaternion output and the pre-activation quaternion output at layer $l$ and at the indexes $(a, b)$ of the new feature map, and $w^{l}$ the quaternion-valued weight filter map of size $K \times K$. A formal definition of the convolution process is:
$\gamma_{ab}^{l} = \alpha(S_{ab}^{l})$,  (6)

with

$S_{ab}^{l} = \sum_{c=0}^{K-1} \sum_{d=0}^{K-1} w^{l} \otimes \gamma_{(a+c)(b+d)}^{l-1}$,  (7)
where $\alpha$ is a quaternion split activation function xu2017learning defined as:

$\alpha(Q) = f(r_1) + f(r_2)\textbf{i} + f(r_3)\textbf{j} + f(r_4)\textbf{k}$,  (8)
with $f$ corresponding to any standard activation function. The output layer of a quaternion neural network is commonly either quaternion-valued, such as for quaternion function approximation arena1997multilayer , or real-valued to obtain a posterior distribution based on a softmax function following the split approach of Eq. 8. Indeed, target classes are often expressed as real numbers. Finally, the full derivation of the backpropagation algorithm for quaternion-valued neural networks can be found in nitta1995quaternary .

3.4 Quaternion recurrent neural networks
Despite the fact that CNNs are efficient at detecting and learning patterns in an input volume, recurrent neural networks (RNNs) are better suited to represent sequential data. Indeed, recurrent neural networks have obtained state-of-the-art results on many tasks related to speech recognition ravanelli2018light ; graves2013speech . Therefore, a quaternary version of the RNN, called QRNN, has been proposed by parcollet2018QRNN ^{2}^{2}2https://github.com/Orkis-Research/Pytorch-Quaternion-Neural-Networks. Let us define a QRNN with a hidden state composed of $M$ neurons. Then, let $W^{l}$, $U^{l}$, and $V^{l}$ be the hidden-to-hidden, input-to-hidden, and hidden-to-output weight matrices respectively, and $b_n^{l}$ be the bias at neuron $n$ and layer $l$. Therefore, and with the same notation as for the QCNN, the hidden state $h_n^{l,t}$ of the neuron $n$ at timestep $t$ and layer $l$ can be computed as:
$h_n^{l,t} = \alpha\Big(\sum_{m} W_{n,m}^{l} \otimes h_m^{l,t-1} + \sum_{m} U_{n,m}^{l} \otimes x_m^{l,t} + b_n^{l}\Big)$,  (9)
with $\alpha$ any split activation function. Finally, the output $p_n^{l,t}$ of the neuron $n$ is computed following:

$p_n^{l,t} = \beta\Big(\sum_{m} V_{n,m}^{l} \otimes h_m^{l,t}\Big)$,  (10)
with $\beta$ any split activation function. The full derivation of the backpropagation through time of the QRNN can be found in parcollet2018QRNN .
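A single QRNN timestep can be sketched as follows (a naive NumPy illustration with our own helper names, not the authors' implementation; shapes are simplified and the split activation is applied componentwise):

```python
import numpy as np

def hamilton(q, p):
    """Hamilton product of two quaternions stored as length-4 arrays."""
    a1, b1, c1, d1 = q
    a2, b2, c2, d2 = p
    return np.array([a1*a2 - b1*b2 - c1*c2 - d1*d2,
                     a1*b2 + b1*a2 + c1*d2 - d1*c2,
                     a1*c2 - b1*d2 + c1*a2 + d1*b2,
                     a1*d2 + b1*c2 - c1*b2 + d1*a2])

def qmatvec(W, v):
    """Quaternion matrix (m, n, 4) times quaternion vector (n, 4) -> (m, 4)."""
    return np.array([sum(hamilton(W[i, j], v[j]) for j in range(W.shape[1]))
                     for i in range(W.shape[0])])

def qrnn_step(x_t, h_prev, W, U, b, act=np.tanh):
    """One recurrent step: h_t = act(W ⊗ h_{t-1} + U ⊗ x_t + b)."""
    return act(qmatvec(W, h_prev) + qmatvec(U, x_t) + b)
```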
4 Experiments on TIMIT
Quaternion-valued models are compared to their real-valued equivalents on two different benchmarks with the TIMIT phoneme recognition task garofolo1993darpa . First, an end-to-end approach based on QCNNs compared to CNNs is investigated in Section 4.2. Then, a more traditional and powerful method based on QRNNs compared to RNNs, alongside HMM decoding, is explored in Section 4.3. In both experiments, the training process is performed on the standard 3,696 sentences uttered by 462 speakers, while testing is conducted on the core test set of 192 sentences uttered by 24 speakers of the TIMIT dataset. A validation set composed of 400 sentences uttered by 50 speakers is used for hyperparameter tuning. All the results are averaged over three runs (3 folds) to alleviate any variation due to the random initialization.
4.1 Acoustic quaternions
End-to-end and HMM-based experiments share the same quaternion input vectors extracted from the acoustic signal. The raw audio is first transformed into log Mel-filterbank coefficients using the pytorch-kaldi^{3}^{3}3https://github.com/mravanelli/pytorch-kaldi toolkit and the Kaldi s5 recipes Povey2011ASRU . Then, the first-, second-, and third-order derivatives are extracted. Consequently, an acoustic quaternion $Q(f, t)$ associated with a frequency $f$ and a time-frame $t$ is formed as:
$Q(f, t) = e(f, t) + \frac{\partial e(f, t)}{\partial t}\textbf{i} + \frac{\partial^2 e(f, t)}{\partial t^2}\textbf{j} + \frac{\partial^3 e(f, t)}{\partial t^3}\textbf{k}$.  (11)
$Q(f, t)$ represents multiple views of a frequency $f$ at time-frame $t$: the energy $e(f, t)$ in the filter band at frequency $f$, its first time derivative describing a slope view, its second time derivative describing a concavity view, and its third time derivative describing the rate of change of the second derivative. Quaternions are used to learn the spatial relations that exist between the different views that characterize a same frequency. Thus, the resulting quaternion input vector is four times the size of the initial Mel-filterbank feature vector.
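The construction of Eq. 11 can be sketched as follows (a simplified NumPy version of ours; the delta computation uses the standard regression formula over ±2 frames, which may differ in details from the Kaldi implementation used in the paper):

```python
import numpy as np

def delta(feat, n=2):
    """Delta features over +/- n frames (standard regression formula);
    feat has shape (frames, filters)."""
    T = len(feat)
    padded = np.pad(feat, ((n, n), (0, 0)), mode='edge')
    num = sum(i * (padded[n + i:n + i + T] - padded[n - i:n - i + T])
              for i in range(1, n + 1))
    return num / (2 * sum(i * i for i in range(1, n + 1)))

def acoustic_quaternions(log_mel):
    """Stack energies with first, second and third deltas into one
    quaternion per (time, frequency): output shape (frames, filters, 4)."""
    d1 = delta(log_mel)
    d2 = delta(d1)
    d3 = delta(d2)
    return np.stack([log_mel, d1, d2, d3], axis=-1)
```

The last axis of the output holds the four quaternion components (r1, r2, r3, r4) of Eq. 11 for each time-frequency cell.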
4.2 Toward endtoend phonemes recognition
End-to-end systems are at the heart of modern research in the speech recognition domain zhang2017towards . The task is particularly difficult due to the differences that exist between the raw or pre-processed acoustic signal used as input features and the words or phonemes expected at the output. Indeed, the two are not defined at the same timescale, and an automatic alignment method has to be defined. This section evaluates the QCNN against a traditional CNN in an end-to-end model based on the connectionist temporal classification (CTC) method, to see whether the quaternion encoding of the signal, alongside the quaternion algebra, helps to better capture the nature of the acoustic signal and therefore to better generalize.
4.2.1 Connectionist Temporal Classification
In the acoustic modeling part of ASR systems, the task of sequence-to-sequence mapping from an input acoustic signal $X = [x_1, ..., x_n]$ to a sequence of symbols $T = [t_1, ..., t_m]$ is complex due to:

- $X$ and $T$ could be of arbitrary length.
- The alignment between $X$ and $T$ is unknown in most cases.

Especially, $T$ is usually shorter than $X$ in terms of phoneme symbols. To alleviate these problems, connectionist temporal classification (CTC) has been proposed graves2006connectionist
. First, a softmax is applied at each timestep, or frame, providing a probability of emitting each symbol at that timestep. This probability results in a symbol sequence representation $\pi$ in the latent space. A blank symbol "−" is introduced as an extra label to allow the classifier to deal with the unknown alignment. Then, $\pi$ is transformed to the final output sequence with a many-to-one function $B$, which first merges consecutive repeated symbols and then removes the blanks, for example:

$B(a, a, -, b, -) = B(a, -, b, b, b) = (a, b)$.  (12)
Consequently, the probability of the output sequence is a summation over the probabilities of all possible alignments between $X$ and $T$ after applying the function $B$. According to graves2006connectionist , the parameters of the models are learned by minimizing the negative log-likelihood of the target sequences:

$\mathcal{L}_{CTC} = -\sum_{(X, T) \in \mathcal{D}} \ln p(T \mid X)$,  (13)

where $\mathcal{D}$ is the training set.
During the inference, a best-path decoding algorithm is performed: the latent sequence with the highest probability is obtained by taking the argmax of the softmax output at each timestep, and the final output sequence is obtained by applying the function $B$ to this latent sequence.
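Best-path decoding and the collapse function B take only a few lines (an illustrative sketch; treating index 0 as the blank is our own convention):

```python
import numpy as np

def ctc_collapse(path, blank=0):
    """The many-to-one function B: merge repeated symbols, then drop blanks."""
    out, prev = [], None
    for s in path:
        if s != prev and s != blank:
            out.append(int(s))
        prev = s
    return out

def best_path_decode(posteriors, blank=0):
    """Greedy CTC decoding: per-frame argmax followed by B.
    posteriors has shape (timesteps, n_symbols)."""
    return ctc_collapse(np.argmax(posteriors, axis=1), blank=blank)
```

Note that best-path decoding is only an approximation: it keeps the single most likely alignment rather than summing the probabilities of all alignments mapping to the same output.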
4.2.2 Model Architectures
A first 2D convolutional layer is followed by a max-pooling layer along the frequency axis to reduce the internal dimension. Then, further 2D convolutional layers are included, together with three dense layers of sizes 1,024 and 256 respectively for real- and quaternion-valued models. Indeed, the output of a dense quaternion-valued layer of 256 units has 256 × 4 = 1,024 nodes, and is thus four times larger than the number of units. The filter size is rectangular, and a padding is applied to keep the sequence and signal sizes unaltered. The number of feature maps varies from 32 to 256 for the real-valued models and from 8 to 64 for the quaternion-valued models. Indeed, the number of output feature maps is four times larger in the QCNN due to the quaternion convolution, meaning that quaternion-valued feature maps (FM) correspond to four times as many real-valued ones. Therefore, for a fair comparison, the number of feature maps is represented in the real-valued space (e.g., 32 real-valued FM correspond to 8 quaternion-valued ones). The PReLU activation function is employed for both models he2015delving . Dropout and L2 regularization are used across all the layers, except the input and output ones. CNNs and QCNNs are trained with the RMSProp optimizer and vanilla hyperparameters kingma2014adam . The learning rate is decayed every time the results observed on the validation set do not improve. Quaternion parameters, including weights and biases, are initialized following the adapted quaternion initialization scheme provided in parcollet2018QRNN . Finally, the standard CTC loss function defined in graves2006connectionist and implemented in chollet2015keras is applied.

4.2.3 Results and discussions
End-to-end results of QCNN and CNN are reported in Table 1. In agreement with our hypothesis, one may notice an important difference in the amount of learning parameters between real- and quaternion-valued CNNs. An explanation comes from the quaternion algebra: a dense real-valued layer connecting N inputs to N units requires N² parameters, while the quaternion equivalent deals with the same signal dimension with only N²/4. Such a reduction in the number of parameters has multiple positive impacts on the model. First, a smaller memory footprint for embedded and limited devices. Second, and as demonstrated in Table 1, a better generalization ability leading to better performances. Indeed, the best PER observed in realistic conditions (w.r.t. the development PER) is 19.5% for the QCNN compared to 20.6% for the CNN, giving an absolute improvement of 1.1% with the QCNN. Such results are obtained with 32.1M parameters for the CNN, and only 8.1M for the QCNN, representing a reduction factor of 3.96x of the number of parameters.
Models | FM  | Dev. | Test | Params
-------|-----|------|------|-------
CNN    | 32  | 22.0 | 23.1 | 3.4M
       | 64  | 19.6 | 20.7 | 5.4M
       | 128 | 19.6 | 20.8 | 11.5M
       | 256 | 19.0 | 20.6 | 32.1M
QCNN   | 32  | 22.3 | 23.3 | 0.9M
       | 64  | 19.9 | 20.5 | 1.4M
       | 128 | 18.9 | 19.9 | 2.9M
       | 256 | 18.2 | 19.5 | 8.1M
It is worth noticing that, with much fewer learning parameters for a given architecture, the QCNN always performs better than the real-valued one. Consequently, the quaternion-valued convolutional approach offers an alternative to traditional real-valued end-to-end models that is both more efficient and more accurate. However, due to the higher number of computations involved in the Hamilton product and to the lack of properly engineered implementations, the QCNN is slower than the CNN to train. Nonetheless, such behavior can be alleviated with a dedicated CUDA implementation of the Hamilton product. Indeed, this operation is a matrix product and can thus benefit from the parallel computation of GPUs. In fact, a proper implementation of the Hamilton product will lead to a higher and more efficient usage of GPUs.
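For illustration, the GPU-friendly formulation amounts to assembling one large real-valued weight matrix so that every Hamilton product of a layer happens inside a single matrix product (a sketch of ours; Wr, Wi, Wj, Wk denote the four real sub-matrices of a quaternion weight matrix):

```python
import numpy as np

def real_weight_matrix(Wr, Wi, Wj, Wk):
    """Assemble the (4m, 4n) real matrix whose product with a concatenated
    [r; x; y; z] input performs all m*n Hamilton products at once."""
    return np.block([
        [Wr, -Wi, -Wj, -Wk],
        [Wi,  Wr, -Wk,  Wj],
        [Wj,  Wk,  Wr, -Wi],
        [Wk, -Wj,  Wi,  Wr],
    ])

# 1x1 example: the matrix-vector product equals (1+2i+3j+4k) ⊗ (5+6i+7j+8k).
M = real_weight_matrix(np.array([[1.0]]), np.array([[2.0]]),
                       np.array([[3.0]]), np.array([[4.0]]))
```

On a GPU, the whole layer then reduces to one GEMM, which is exactly the operation accelerators are optimized for.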
4.3 HMMbased phonemes recognition
A conventional ASR pipeline based on an HMM decoding process alongside recurrent neural networks is also investigated to reach state-of-the-art results on the TIMIT task. While the input features remain the same as for the end-to-end experiments, RNNs and QRNNs are trained to predict the HMM states that are then decoded with the standard Kaldi recipes Povey2011ASRU . As hypothesized for the end-to-end solution, QRNN models are expected to generalize better than RNNs due to their specific algebra.
4.3.1 Model Architectures
RNN and QRNN models are compared with a fixed number of layers, varying the number of neurons from 256 to 2,048 for the RNN, and from 64 to 512 for the QRNN. Indeed, as demonstrated in the previous experiments, hidden neurons in the quaternion and real spaces do not handle the same amount of real-number values. Tanh activations are used across all the layers, except for the output layer, which is based on a softmax function. Models are optimized with RMSProp kingma2014adam with vanilla hyperparameters. The learning rate is progressively annealed using a halving factor that is applied when no performance improvement on the validation set is observed. A dropout rate is applied over all the hidden layers srivastava2014dropout except the output one. The negative log-likelihood loss function is used as an objective function. As for QCNNs, quaternion parameters are initialized based on parcollet2018QRNN . Finally, decoding is based on Kaldi Povey2011ASRU and weighted finite state transducers (WFST) MOHRI2002mohri that integrate acoustic, lexicon, and language model probabilities into a single HMM-based search graph.
4.3.2 Results and discussions
The results of both QRNNs and RNNs alongside an HMM decoding phase are presented in Table 2. A best testing PER of 18.5% is reported for the QRNN, compared to 19.0% for the RNN, with respect to the best development PER. Such results are obtained with 3.8M and 9.4M parameters for the QRNN and the RNN respectively, with an equal hidden dimension of 1,024, leading to a reduction of the number of parameters by a factor of 2.5 times. As for the previous experiments, the QRNN always outperforms equivalent architectures in terms of PER, with significantly fewer learning parameters. It is also important to notice that both models tend to overfit with larger architectures. However, this phenomenon is lessened by the small number of free parameters of the QRNN: a QRNN with a hidden dimension of 2,048 has only 11.2M parameters, compared to 33.4M for an equivalently sized RNN, leading to fewer degrees of freedom, and therefore less overfitting.
Models | Hidden dim. | Dev. | Test | Params
-------|-------------|------|------|-------
RNN    | 256         | 22.4 | 23.4 | 1M
       | 512         | 19.6 | 20.4 | 2.8M
       | 1,024       | 17.9 | 19.0 | 9.4M
       | 2,048       | 20.0 | 20.7 | 33.4M
QRNN   | 256         | 23.6 | 23.9 | 0.6M
       | 512         | 19.2 | 20.1 | 1.4M
       | 1,024       | 17.4 | 18.5 | 3.8M
       | 2,048       | 17.5 | 18.7 | 11.2M
The reported results show that the QRNN is a better framework than the real-valued RNN for ASR systems dealing with conventional multidimensional acoustic features. Indeed, the QRNN performs better and with fewer parameters, leading to a more efficient representation of the information.
5 Conclusion
Summary. This paper investigates novel quaternion-valued architectures in two different conditions of speech recognition on the TIMIT phoneme recognition task. The experiments show that the quaternion approaches always outperform their real-valued equivalents in both benchmarks, with a maximum reduction factor of the number of learning parameters of 3.96 times. It has been shown that the appropriate multidimensional quaternion representation of acoustic features, alongside the Hamilton product, helps QCNNs and QRNNs to learn well both the internal and external relations that exist within the features, leading to a better generalization capability, and to a more efficient representation of the relevant information through significantly fewer free parameters than traditional real-valued neural networks.
Future Work. A future investigation will be to develop multi-view features that contribute to decreasing ambiguities in representing phonemes in the quaternion space. To this end, a recent approach based on a quaternion Fourier transform to create quaternion-valued signals has to be investigated.
References
 [1] Ossama Abdel-Hamid, Abdel-rahman Mohamed, Hui Jiang, and Gerald Penn. Applying convolutional neural networks concepts to hybrid NN-HMM model for speech recognition. In Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference on, pages 4277–4280. IEEE, 2012.
 [2] Paolo Arena, Luigi Fortuna, Giovanni Muscato, and Maria Gabriella Xibilia. Multilayer perceptrons to approximate quaternion valued functions. Neural Networks, 10(2):335–342, 1997.
 [3] Paolo Arena, Luigi Fortuna, Luigi Occhipinti, and Maria Gabriella Xibilia. Neural networks for quaternion-valued function approximation. In Circuits and Systems, ISCAS'94., IEEE International Symposium on, volume 6, pages 307–310. IEEE, 1994.
 [4] Nicholas A Aspragathos and John K Dimitros. A comparative study of three methods for robot kinematics. Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on, 28(2):135–145, 1998.
 [5] Chase Gaudet and Anthony Maida. Deep quaternion networks. arXiv preprint arXiv:1712.04604v2, 2017.
 [6] François Chollet et al. Keras. https://github.com/keras-team/keras, 2015.
 [7] Ivo Danihelka, Greg Wayne, Benigno Uria, Nal Kalchbrenner, and Alex Graves. Associative long short-term memory. arXiv preprint arXiv:1602.03032, 2016.
 [8] Steven B Davis and Paul Mermelstein. Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences. In Readings in speech recognition, pages 65–74. Elsevier, 1990.
 [9] Sadaoki Furui. Speaker-independent isolated word recognition based on emphasized spectral dynamics. In Acoustics, Speech, and Signal Processing, IEEE International Conference on ICASSP'86., volume 11, pages 1991–1994. IEEE, 1986.
 [10] John S Garofolo, Lori F Lamel, William M Fisher, Jonathan G Fiscus, and David S Pallett. Darpa timit acoustic-phonetic continuous speech corpus cd-rom. Nist speech disc 1-1.1. NASA STI/Recon technical report n, 93, 1993.
 [11] Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd international conference on Machine learning, pages 369–376. ACM, 2006.
 [12] Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In Acoustics, speech and signal processing (icassp), 2013 ieee international conference on, pages 6645–6649. IEEE, 2013.
 [13] Klaus Greff, Rupesh K Srivastava, Jan Koutník, Bas R Steunebrink, and Jürgen Schmidhuber. LSTM: A search space odyssey. IEEE transactions on neural networks and learning systems, 28(10):2222–2232, 2017.

 [14] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pages 1026–1034, 2015.
 [15] Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. Signal Processing Magazine, IEEE, 29(6):82–97, 2012.
 [16] Akira Hirose and Shotaro Yoshida. Generalization characteristics of complex-valued feedforward neural networks in relation to signal coherence. IEEE Transactions on Neural Networks and learning systems, 23(4):541–551, 2012.
 [17] Teijiro Isokawa, Tomoaki Kusakabe, Nobuyuki Matsui, and Ferdinand Peper. Quaternion neural network and its application. In International Conference on Knowledge-Based and Intelligent Information and Engineering Systems, pages 318–324. Springer, 2003.
 [18] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
 [19] Yann LeCun, Patrick Haffner, Léon Bottou, and Yoshua Bengio. Object recognition with gradientbased learning. In Shape, contour and grouping in computer vision, pages 319–345. Springer, 1999.
 [20] Nobuyuki Matsui, Teijiro Isokawa, Hiromi Kusamichi, Ferdinand Peper, and Haruhiko Nishimura. Quaternion neural network with geometrical operators. Journal of Intelligent & Fuzzy Systems, 15(3, 4):149–164, 2004.
 [21] Toshifumi Minemoto, Teijiro Isokawa, Haruhiko Nishimura, and Nobuyuki Matsui. Feed forward neural network with random quaternionic neurons. Signal Processing, 136:59–68, 2017.
 [22] Mehryar Mohri, Fernando Pereira, and Michael Riley. Weighted finite-state transducers in speech recognition. Computer Speech and Language, 16(1):69–88, 2002.
 [23] Tohru Nitta. A quaternary version of the backpropagation algorithm. In Neural Networks, 1995. Proceedings., IEEE International Conference on, volume 5, pages 2753–2756. IEEE, 1995.
 [24] Titouan Parcollet, Mohamed Morchid, PierreMichel Bousquet, Richard Dufour, Georges Linarès, and Renato De Mori. Quaternion neural networks for spoken language understanding. In Spoken Language Technology Workshop (SLT), 2016 IEEE, pages 362–368. IEEE, 2016.
 [25] Titouan Parcollet, Mohamed Morchid, and Georges Linares. Deep quaternion neural networks for spoken language understanding. In Automatic Speech Recognition and Understanding Workshop (ASRU), 2017 IEEE, pages 504–511. IEEE, 2017.
 [26] Titouan Parcollet, Mirco Ravanelli, Mohamed Morchid, Georges Linarès, Chiheb Trabelsi, Renato De Mori, and Yoshua Bengio. Quaternion recurrent neural networks, 2018.
 [27] Titouan Parcollet, Ying Zhang, Mohamed Morchid, Chiheb Trabelsi, Georges Linarès, Renato De Mori, and Yoshua Bengio. Quaternion convolutional neural networks for end-to-end automatic speech recognition. arXiv preprint arXiv:1806.07789, 2018.
 [28] Vijayaditya Peddinti, Daniel Povey, and Sanjeev Khudanpur. A time delay neural network architecture for efficient modeling of long temporal contexts. In Sixteenth Annual Conference of the International Speech Communication Association, 2015.

 [29] Soo-Chang Pei and Ching-Min Cheng. Color image processing by using binary quaternion-moment-preserving thresholding technique. IEEE Transactions on Image Processing, 8(5):614–628, 1999.
 [30] Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, Jan Silovsky, Georg Stemmer, and Karel Vesely. The kaldi speech recognition toolkit. In IEEE 2011 Workshop on Automatic Speech Recognition and Understanding. IEEE Signal Processing Society, December 2011. IEEE Catalog No.: CFP11SRW-USB, 2011.

 [31] Mirco Ravanelli, Philemon Brakel, Maurizio Omologo, and Yoshua Bengio. Improving speech recognition by revising gated recurrent units. Proc. Interspeech 2017, 2017.
 [32] Mirco Ravanelli, Philemon Brakel, Maurizio Omologo, and Yoshua Bengio. Light gated recurrent units for speech recognition. IEEE Transactions on Emerging Topics in Computational Intelligence, 2(2):92–102, 2018.
 [33] Sara Sabour, Nicholas Frosst, and Geoffrey E Hinton. Dynamic routing between capsules. arXiv preprint arXiv:1710.09829v2, 2017.
 [34] Haşim Sak, Andrew Senior, and Françoise Beaufays. Long shortterm memory recurrent neural network architectures for large scale acoustic modeling. In Fifteenth annual conference of the international speech communication association, 2014.
 [35] Stephen John Sangwine. Fourier transforms of colour images using quaternion, or hypercomplex, numbers. Electronics letters, 32(21):1979–1980, 1996.
 [36] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014.
 [37] Parcollet Titouan, Mohamed Morchid, and Georges Linares. Quaternion denoising encoder-decoder for theme identification of telephone conversations. Proc. Interspeech 2017, pages 3325–3328, 2017.
 [38] Mark Tygert, Joan Bruna, Soumith Chintala, Yann LeCun, Serkan Piantino, and Arthur Szlam. A mathematical motivation for complex-valued convolutional networks. Neural computation, 28(5):815–825, 2016.
 [39] Alexander Waibel, Toshiyuki Hanazawa, Geoffrey Hinton, Kiyohiro Shikano, and Kevin J Lang. Phoneme recognition using time-delay neural networks. In Readings in speech recognition, pages 393–404. Elsevier, 1990.
 [40] Scott Wisdom, Thomas Powers, John Hershey, Jonathan Le Roux, and Les Atlas. Full-capacity unitary recurrent neural networks. In Advances in Neural Information Processing Systems, pages 4880–4888, 2016.
 [41] D Xu, L Zhang, and H Zhang. Learning algorithms in quaternion neural networks using GHR calculus. Neural Network World, 27(3):271, 2017.
 [42] Ying Zhang, Mohammad Pezeshki, Philémon Brakel, Saizheng Zhang, Cesar Laurent, Yoshua Bengio, and Aaron Courville. Towards end-to-end speech recognition with deep convolutional neural networks. arXiv preprint arXiv:1701.02720, 2017.