Deep neural networks (DNNs) are the state of the art in supervised learning in many areas such as image classification. Using the power of convolutional layers, DNNs are able to learn novel features that are often superior to engineered features. The characteristics of the representations learned by DNNs in an end-to-end fashion are influenced by different factors such as the network architecture, the training data
and the optimization algorithm. One of the drawbacks of such models is that their success is tied to enormous amounts of training data. Although DNNs are the state of the art on ImageNet, where over 1 million training samples are provided, in challenges such as DCASE-2016, where only slightly more than one thousand recordings are provided, other methods such as factor analysis and matrix factorization outperform DNNs. This is also reflected in the challenge results, where DNNs with fewer parameters performed better than models with deeper architectures. To influence the characteristics of the learned representation and overcome shortcomings of end-to-end deep learning, many have tried to integrate classical machine learning methods into a deep learning framework, in the form of a layer or an objective function. Examples of such methods are Deep Linear Discriminant Analysis (DeepLDA), which learns linearly separable latent representations on top of a DNN, and Deep Canonical Correlation Analysis (DCCA), which is used to produce correlated representations of multi-modal acoustic and articulatory speech data. These methods influence the learned representation to have characteristics similar to those of their respective conventional counterparts. Conventional Within-Class Covariance Normalization (WCCN)
is a powerful method used to reduce the covariance of classes by projecting the observations into a linear sub-space where the within-class variability is reduced. WCCN has been used with different features, such as i-vectors and Maximum-Likelihood Linear Regression (MLLR) transforms for speaker, language and music artist recognition, Over-Complete Local Binary Patterns (OCLBP) for face recognition, and Gaussianized Vector Representations for video and image modeling. In this paper, we reformulate classical WCCN as a DNN-compatible version by proposing the Deep Within-Class Covariance Analysis (DWCCA) layer. Our DWCCA layer can be incorporated at arbitrary positions in a DNN architecture, which enables us to utilize the beneficial properties of WCCN directly within a DNN while still allowing for end-to-end training with Stochastic Gradient Descent (SGD). We provide empirical results demonstrating that by adding a DWCCA layer to a DNN – in our case, a VGG-style network – we can influence the covariance of the learned representation, resulting in smaller per-class covariance and smaller overall within-class covariance. These properties lead to similar or superior performance on the task of Acoustic Scene Classification (ASC) compared to a DNN without DWCCA. We further provide deeper insights by analyzing the covariance of the representations learned by the network and visualizing a 2D projection of them.
2 Deep Within-Class Covariance Analysis
We start this section by introducing a common notation which will be used throughout the paper. Based on this notation, we first describe Within-Class Covariance Normalization (WCCN) and then show how we cast it into a deep-learning-compatible version.
2.1 Conventional Within-Class Covariance Normalization
Let $X = \{\mathbf{x}_1, \ldots, \mathbf{x}_N\}$ denote a set of $D$-dimensional observations (feature vectors) belonging to $C$ different classes. The observations are in the present case either hand-crafted features (e.g. i-vectors) or any intermediate hidden representation of a deep neural network.
WCCN is a linear projection that provides an effective compensation for the within-class variability and has proven to be effective in combination with different scoring functions [17, 18]. The WCCN projection scales the feature space in the opposite direction of its within-class covariance matrix, which has the advantage that finding decision boundaries on the WCCN-projected data becomes easier. The within-class covariance $\mathbf{S}_w$ is estimated as:

$$\mathbf{S}_w = \frac{1}{C} \sum_{c=1}^{C} \frac{1}{N_c} \sum_{i=1}^{N_c} (\mathbf{x}_i^c - \boldsymbol{\mu}_c)(\mathbf{x}_i^c - \boldsymbol{\mu}_c)^\top$$

where $\boldsymbol{\mu}_c$ is the mean feature vector of class $c$ and the $\mathbf{x}_i^c$ are the samples belonging to this class; $N_c$ is the number of observations of class $c$ in the training set. We use the inverse of the matrix $\mathbf{S}_w$ to normalize the direction of the projected feature vectors. The WCCN projection matrix $\mathbf{B}$ can be estimated using the Cholesky decomposition as $\mathbf{B}\mathbf{B}^\top = \mathbf{S}_w^{-1}$.
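For illustration, the estimation of $\mathbf{S}_w$ and the WCCN projection above can be sketched in a few lines of numpy; the ridge term `eps` is our own addition for numerical stability and not part of the formulation above.

```python
import numpy as np

def wccn_projection(X, y, eps=1e-6):
    """Estimate the WCCN projection matrix B from data X (N x D) with
    labels y, such that projecting z = B.T @ x whitens the within-class
    covariance. `eps` is a small ridge term (our assumption)."""
    classes = np.unique(y)
    D = X.shape[1]
    S_w = np.zeros((D, D))
    for c in classes:
        Xc = X[y == c]
        d = Xc - Xc.mean(axis=0)
        S_w += d.T @ d / len(Xc)       # per-class covariance
    S_w /= len(classes)                # average over classes
    # B B.T = inv(S_w), obtained via the Cholesky decomposition
    B = np.linalg.cholesky(np.linalg.inv(S_w + eps * np.eye(D)))
    return B
```

Projecting the data as `Z = X @ B` then yields a within-class covariance of approximately the identity, which is what makes decision boundaries easier to find.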
2.2 Deep Within-Class Covariance Analysis
Based on the definitions above, we propose Deep WCCA (DWCCA), a DNN-compatible formulation of WCCN. The parameters of our networks are optimized with ADAM, a variation of Stochastic Gradient Descent (SGD) with faster convergence, and mini-batches of size 225. This optimization strategy implies that each parameter update is computed on only a small subset of the entire training set. The deterministic version of WCCN described above is usually estimated on the entire training (development) set, which by the definition of SGD is not available in the present case. Furthermore, the training set may simply be too large for estimating the within-class covariance on the entire data to be computationally feasible. In the following we propose the DWCCA layer, which circumvents these problems.
Instead of computing the within-class covariance matrix on the entire training set, we estimate it on the observations of the respective mini-batch. Given this estimate, we compute the corresponding mini-batch projection matrix $\mathbf{B}_{batch}$ and use it to maintain a moving average projection matrix

$$\hat{\mathbf{B}} \leftarrow (1 - \alpha)\,\hat{\mathbf{B}} + \alpha\,\mathbf{B}_{batch}.$$

In addition, a moving average is applied to the means of each class and to the within-class covariance to provide better estimates of these parameters over the training set. This is done similarly to the computation of the mean and standard deviation used to normalize the activations in batch normalization. The hyper-parameter $\alpha$ controls the influence of the data in the current batch on the final DWCCA projection matrix. The output of this processing step is the DWCCA-projected data of the respective mini-batch. The DWCCA layer can be seen as a special dense layer with a predefined weight matrix, the projection matrix $\hat{\mathbf{B}}$, with the difference that its parameters are computed from the activations of the previous layer rather than learned via SGD. Since the proposed covariance normalization is applied directly during optimization and implemented as a layer within the network, we have to establish gradient flow, which is required for training our networks with back-propagation. We implement the DWCCA layer using the automatic differentiation framework Theano, which already provides the derivatives of the matrix inverse and the Cholesky decomposition. We refer the interested reader to  for details on this derivative.
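A minimal numpy sketch of the DWCCA forward pass, assuming an exponential moving average with coefficient `alpha` (our naming) and a small `eps` ridge for invertibility; gradient flow through the matrix inverse and Cholesky decomposition, which Theano provides, is omitted here.

```python
import numpy as np

def batch_within_class_cov(X, y):
    """Within-class covariance estimated on a single mini-batch (N x D)."""
    classes = np.unique(y)
    S = np.zeros((X.shape[1], X.shape[1]))
    for c in classes:
        Xc = X[y == c]
        d = Xc - Xc.mean(axis=0)
        S += d.T @ d / len(Xc)
    return S / len(classes)

class DWCCALayer:
    """Sketch of the DWCCA layer forward pass (numpy, no gradient flow).

    `alpha` controls the influence of the current mini-batch on the
    running projection matrix, analogous to the running statistics in
    batch normalization.
    """
    def __init__(self, dim, alpha=0.1, eps=1e-5):
        self.alpha = alpha
        self.eps = eps
        self.B_avg = np.eye(dim)       # running projection matrix

    def forward(self, X, y, training=True):
        if training:
            S = batch_within_class_cov(X, y) + self.eps * np.eye(X.shape[1])
            B = np.linalg.cholesky(np.linalg.inv(S))
            self.B_avg = (1 - self.alpha) * self.B_avg + self.alpha * B
        return X @ self.B_avg          # DWCCA-projected mini-batch
```

Note that this is why the mini-batches must be stratified: `batch_within_class_cov` needs samples from every class to produce a meaningful estimate.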
|Max-Pooling + Drop-Out|
|Max-Pooling + Drop-Out|
|Max-Pooling + Drop-Out|
|DWCCA (if applied)|
Model specifications. BN: Batch Normalization, ReLU: Rectified Linear Activation Function, CCE: Categorical Cross-Entropy. For training, a constant batch size of 225 samples (15 examples of each class) is used.
3.1 Dataset
To evaluate the performance of the different methods, we use the TUT database for ASC and sound event detection (TUT16). We use both the development and the evaluation set for performance comparison, using accuracy as the metric. On the development set, we use the four-fold cross-validation setup provided with the dataset. On the evaluation set, we train on the development set and test on the evaluation set.
3.2 Baseline Systems
Our first baseline is a VGG-style CNN  which uses audio spectrograms as input. Our second baseline is a gated recurrent neural network optimized with DeepLDA. This provides a good comparison point, as DeepLDA is a deep-learning-compatible version of conventional LDA. Our last baseline is a VGG-style network with the exact same architecture as in . We will call this baseline VGG-baseline; it is slightly modified from  as follows to be suitable for our DWCCA experiments. First, we reduce the spectrogram dimensionality by using mel-spectrograms instead of log spectrograms. Second, to introduce higher variance, we use longer durations (more frames per excerpt). Third, we use stratified training with mini-batches of 225 with 15 examples from each class. These changes are necessary since DWCCA requires samples of all classes for the within-class covariance computation. This results in larger batch sizes than usual, which is compensated by a lower feature dimensionality so that everything fits within our Titan X GPU memory. Instead of SGD with momentum, we use ADAM, and we do not use an l2-norm penalty, as it is reported in  that it does not improve generalization. We use the same learning rate schedule as in .
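The stratified mini-batch scheme (15 examples of each of the 15 classes, giving batches of 225) can be sketched as follows; the generator below is our own simplified construction and stops when the smallest class is exhausted.

```python
import numpy as np

def stratified_batches(labels, per_class=15, rng=None):
    """Yield index arrays for stratified mini-batches containing exactly
    `per_class` examples of every class (15 classes x 15 examples = 225
    in the setup above). Stops when the smallest class runs out."""
    rng = rng or np.random.default_rng()
    labels = np.asarray(labels)
    classes = np.unique(labels)
    # shuffle the indices of each class independently
    pools = {c: rng.permutation(np.flatnonzero(labels == c)) for c in classes}
    n_batches = min(len(p) for p in pools.values()) // per_class
    for b in range(n_batches):
        start = b * per_class
        yield np.concatenate([pools[c][start:start + per_class]
                              for c in classes])
```

Every batch produced this way contains all classes, which is the precondition for the per-batch within-class covariance estimate.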
For the DWCCA experiments, our setup is identical to VGG-baseline with a DWCCA layer added before the softmax. The moving-average hyper-parameter $\alpha$ is set to the same value as the corresponding parameter used in batch normalization.

In  the authors suggest that the eigenvalues of the covariance of neural network activations can be used to investigate the dynamics of learning in NNs. In this paper, we use the eigenvalues of the covariance of the final output activations of our network to study the covariance of the classes in the predicted scores. In Figure 1.a, the eigenvalues of the covariance of the network's predictions are provided for the cases with and without the DWCCA layer. As can clearly be seen, applying DWCCA yields a flatter eigenvalue spectrum, which suggests that the variance of the predictions is spread more evenly across dimensions than in the case without DWCCA. Also, as explained in , keeping the ratio of the largest to the smallest eigenvalue of the covariance of the activations small is beneficial for network learning. Figure 1.b shows a similar behavior, but only for the scores of the City Center class, which had better performance with DWCCA. Comparing Figures 1.c and 1.d reveals that in a PCA-projected 2D representation, the classes appear with smaller within-class variability when DWCCA is applied, as they shrink onto a line or gather in one area.

In the results reported in Table 2, on the validation set DWCCA performs significantly better than  and VGG-baseline. On average,  performs better than VGG-baseline; this can be explained by the fact that the features used in VGG-baseline have a lower dimensionality than those in . By applying DWCCA, the performance improves further. Comparing the performances on the evaluation set shows that all three methods achieve similar results. Comparing DeepLDA and DWCCA reveals that DWCCA outperforms DeepLDA on the evaluation set.
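The eigenvalue analysis of the prediction covariance can be reproduced with a small numpy helper; the function names are ours, and the "flatness" of the spectrum is summarized here by the ratio of the largest to the smallest eigenvalue.

```python
import numpy as np

def covariance_eigenvalues(activations):
    """Eigenvalues (descending) of the covariance of a matrix of
    activations with one example per row."""
    A = activations - activations.mean(axis=0)
    cov = A.T @ A / (len(A) - 1)
    return np.sort(np.linalg.eigvalsh(cov))[::-1]

def eigen_ratio(activations):
    """Ratio of largest to smallest eigenvalue; a smaller ratio means a
    flatter spectrum, i.e. variance spread more evenly across dimensions."""
    ev = covariance_eigenvalues(activations)
    return ev[0] / ev[-1]
```

Applied to the softmax outputs of the networks with and without DWCCA, these spectra correspond to the curves plotted in Figure 1.a and 1.b.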
The results on the development set were not reported in the paper.
The class-wise performances are provided in Table 3. As can be seen, DWCCA changed the behavior of the network: it improved the poor performances on 6 out of 15 classes and stayed the same on 2 of the classes. As the network performs differently when DWCCA is added, we late-fused the probabilities of the two networks using linear logistic regression and report the results under Fused. Looking at the results in both tables shows that the two networks carry complementary information: the fusion outperforms all the baselines and improves the performance by more than a percentage point.
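The late fusion step can be illustrated with a small numpy sketch; we learn per-model fusion weights and per-class biases by gradient descent on the cross-entropy, a simplified stand-in for the linear logistic regression fusion mentioned above (all names are ours).

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_fusion(logp_a, logp_b, y, lr=0.1, steps=2000):
    """Learn scalar weights w = (w_a, w_b) and class biases b so that
    softmax(w_a * log P_a + w_b * log P_b + b) matches the labels y."""
    n, k = logp_a.shape
    w = np.array([0.5, 0.5])
    b = np.zeros(k)
    Y = np.eye(k)[y]                   # one-hot targets
    for _ in range(steps):
        P = softmax(w[0] * logp_a + w[1] * logp_b + b)
        G = (P - Y) / n                # gradient of mean cross-entropy
        w = w - lr * np.array([(G * logp_a).sum(), (G * logp_b).sum()])
        b = b - lr * G.sum(axis=0)
    return w, b

def fuse(logp_a, logp_b, w, b):
    """Fused class probabilities for the two networks' log-probabilities."""
    return softmax(w[0] * logp_a + w[1] * logp_b + b)
```

The fusion weights would be fitted on held-out validation scores and then applied to the test scores of both networks.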
We presented the DWCCA layer, a DNN-compatible version of classic WCCN, which is used to normalize the within-class covariance of the hidden activations. In contrast to classic WCCN, DWCCA is formulated as an interchangeable network component that can be incorporated directly inside a DNN. This has the advantage that it allows for end-to-end training with SGD and back-propagation, provides a better internal representation within the network, and allows for joint optimization in an end-to-end neural network fashion. We showed that DWCCA achieves similar or superior performance while providing a representation with lower within-class covariance.
The authors acknowledge Hasan Bahari of KU Leuven for helpful discussions about this work. We also gratefully acknowledge the support of NVIDIA Corporation with the donation of a Titan X GPU used for this research.
-  Pang Wei Koh and Percy Liang, “Understanding black-box predictions via influence functions,” in Proceedings of the 34th International Conference on Machine Learning, 2017.
-  Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. IEEE, 2009, pp. 248–255.
-  Annamaria Mesaros, Toni Heittola, and Tuomas Virtanen, “Tut database for acoustic scene classification and sound event detection,” in Signal Processing Conference (EUSIPCO), 2016 24th European. IEEE, 2016, pp. 1128–1132.
-  Hamid Eghbal-zadeh, Bernhard Lehner, Matthias Dorfer, and Gerhard Widmer, “Cp-jku submissions for dcase-2016: a hybrid approach using binaural i-vectors and deep convolutional neural networks,” 2016.
-  Victor Bisot, Romain Serizel, Slim Essid, and Gael Richard, “Supervised nonnegative matrix factorization for acoustic scene classification,” IEEE AASP Challenge on Detection and Classification of Acoustic Scenes and Events (DCASE), 2016.
-  Annamaria Mesaros, Toni Heittola, and Tuomas Virtanen, “Assessment of human and machine performance in acoustic scene classification: Dcase 2016 case study,” IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, 2017.
-  Michele Valenti, Aleksandr Diment, Giambattista Parascandolo, Stefano Squartini, and Tuomas Virtanen, “Dcase 2016 acoustic scene classification using convolutional neural networks,” in Proc. Workshop Detection Classif. Acoust. Scenes Events, 2016, pp. 95–99.
-  Matthias Dorfer, Rainer Kelz, and Gerhard Widmer, “Deep Linear Discriminant Analysis,” International Conference on Learning Representations (ICLR), 2016.
-  Galen Andrew, Raman Arora, Jeff Bilmes, and Karen Livescu, “Deep canonical correlation analysis,” Proceedings of the 30th International Conference on Machine Learning (ICML), 2013.
-  A Hatch and A Stolcke, “Generalized linear kernels for one-versus-all classification: application to speaker recognition,” IEEE International Conference on Acoustics, Speech and Signal Processing, 2006. ICASSP 2006 Proceedings., 2006.
-  Najim Dehak, Patrick Kenny, Réda Dehak, Pierre Dumouchel, and Pierre Ouellet, “Front-end factor analysis for speaker verification,” Audio, Speech, and Language Processing, IEEE Transactions on, 2011.
-  Christopher J Leggetter and Philip C Woodland, “Maximum likelihood linear regression for speaker adaptation of continuous density hidden markov models,” Computer Speech & Language, 1995.
-  Mohamad Hasan Bahari, Najim Dehak, Hugo Van Hamme, Lukas Burget, Ahmed M. Ali, and Jim Glass, “Non-negative factor analysis of Gaussian mixture model weight adaptation for language and dialect recognition,” IEEE Transactions on Audio, Speech and Language Processing, 2014.
-  Oren Barkan, Jonathan Weill, Lior Wolf, and Hagai Aronowitz, “Fast high dimensional vector multiplication face recognition,” in Proceedings of the IEEE International Conference on Computer Vision, 2013.
-  Xiaodan Zhuang, Modeling audio and visual cues for real-world event detection, Ph.D. thesis, University of Illinois at Urbana-Champaign, 2011.
-  Karen Simonyan and Andrew Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
-  Najim Dehak, Reda Dehak, James Glass, Douglas Reynolds, and Patrick Kenny, “Cosine Similarity Scoring without Score Normalization Techniques,” Proceedings of Odyssey 2010 - The Speaker and Language Recognition Workshop (Odyssey), 2010.
-  Ming Li, Andreas Tsiartas, Maarten Van Segbroeck, and Shrikanth S Narayanan, “Speaker verification using simplified and supervised i-vector modeling,” in Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on. IEEE, 2013, pp. 7199–7203.
-  Ao Hatch, “Within-class covariance normalization for SVM-based speaker recognition.,” Interspeech, 2006.
-  Diederik Kingma and Jimmy Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
-  Weiran Wang and Karen Livescu, “Large-scale approximate kernel canonical correlation analysis,” International Conference on Learning Representations (ICLR) (arXiv:1511.04773), 2015.
-  Sergey Ioffe and Christian Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” CoRR, 2015.
-  James Bergstra, Olivier Breuleux, Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio, “Theano: a CPU and GPU math expression compiler,” in Proceedings of the Python for Scientific Computing Conference (SciPy), 2010, Oral Presentation.
-  Stephen P Smith, “Differentiation of the cholesky algorithm,” Journal of Computational and Graphical Statistics, 1995.
-  Matthias Zöhrer and Franz Pernkopf, “Gated recurrent networks applied to acoustic scene classification and acoustic event detection,” IEEE AASP Chall. Detect. Classif. Acoust. Scenes Events (DCASE), 2016, 2016.
-  Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals, “Understanding deep learning requires rethinking generalization,” arXiv preprint arXiv:1611.03530, 2016.
-  Yann Le Cun, Ido Kanter, and Sara A Solla, “Eigenvalues of covariance matrices: Application to neural-network learning,” Physical Review Letters, vol. 66, no. 18, pp. 2396, 1991.