As a number of companies worldwide are facing a shortage of skilled maintenance workers, the demand for an automatic sound-monitoring system has been increasing. Unsupervised anomaly detection methods are often adopted for this system since anomalous data can rarely be obtained [21, 12].
For unsupervised anomaly detection, generative models such as the Variational Autoencoder (VAE), which use the approximate likelihood and the reconstruction error to calculate anomaly scores, have been utilized. Normalizing Flows (NF) [22, 4] is another promising generative model thanks to its ability to perform exact likelihood estimation. However, this model fails at out-of-distribution detection since it assigns higher likelihood to smoothly structured data [13, 2, 20].
A self-supervised classification-based approach is another way to detect anomalies when sound data from multiple machines of the same machine type is available. In self-supervised learning, a model is trained on an auxiliary task to improve its performance on the main task. In the self-supervised classification-based approach, the auxiliary task is to train a classifier that predicts the machine ID assigned to each machine. If the classifier misclassifies the machine ID of a sound clip, the clip is regarded as anomalous. Although this approach improves the detection performance on average, it shows significantly low scores on some machines.
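As a concrete illustration, the anomaly score in this approach can be derived from the classifier's confidence in the true machine ID. The following is a minimal sketch; the function and variable names are ours, not from the systems cited above:

```python
import math

def classification_anomaly_score(id_probs, true_id):
    """Anomaly score from a machine-ID classifier (illustrative sketch).

    id_probs: softmax output over machine IDs for one sound clip.
    true_id: index of the machine ID the clip was recorded from.
    The lower the predicted probability of the true ID, the more
    anomalous the clip is assumed to be.
    """
    return -math.log(id_probs[true_id])

# A clip whose true ID receives low probability gets a high anomaly score.
normal_score = classification_anomaly_score([0.9, 0.05, 0.05], true_id=0)
anomalous_score = classification_anomaly_score([0.2, 0.4, 0.4], true_id=0)
assert anomalous_score > normal_score
```

When the classifier fails to recognize the ID of a particular machine even on normal data, every clip from that machine scores high, which is exactly the failure mode described above.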
In this paper, we propose a self-supervised density estimation method using NF. Our method uses the sound data from one machine ID (target data) and the sound data from other machines of the same machine type (outlier data), and the model is trained to assign higher likelihood to the target data and lower likelihood to the outlier data. This is a self-supervised approach because it improves the detection performance on one machine ID by introducing an auxiliary task in which the model discriminates the sound data of that machine ID (target data) from the sound data of the other machine IDs of the same machine type (outlier data). At the same time, because the method increases the likelihood of the target data, it also retains the character of the unsupervised approach.
We evaluated the detection performance of our method on six types of machine sound data. Glow and MAF were used as the NF models. Experimental results showed that our method outperformed unsupervised approaches while showing more stable detection performance than the self-supervised classification-based approach.
2 Problem Statement
Anomalous sound detection is the task of identifying whether a machine is normal or anomalous from the anomaly score that a trained model calculates from its sound. An input sound clip is determined to be anomalous if its anomaly score exceeds a threshold value. We consider unsupervised anomalous sound detection, where only normal sound is available for training. We assume that sound data from multiple machines of the same machine type is available. This is a realistic assumption since multiple machines of the same type are often installed in factories. This problem setting is the same as that of DCASE 2020 Challenge Task2.
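The decision rule itself fits in a few lines (a sketch; the threshold value is application-dependent):

```python
def detect(anomaly_scores, threshold):
    # Flag each clip as anomalous (True) when its score exceeds the threshold.
    return [score > threshold for score in anomaly_scores]

assert detect([0.2, 1.7, 0.9], threshold=1.0) == [False, True, False]
```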
3 Relation to prior work
3.1 Improving the detection performance of NF
Researchers have proposed various methods to modify the likelihood assigned by NF models. Serrà et al. used input complexity to modify the likelihood; however, a compression algorithm has to be chosen to calculate the input complexity. Ren et al. used the likelihood ratio between the input data and a background component, but parameters have to be tuned to model the background. Hendrycks et al. used auxiliary datasets of outliers to improve the detection performance; however, the proposed loss function can destabilize the detection performance. Kirichenko et al. proposed a loss function that distinguishes in-distribution data from out-of-distribution data by means of a supervised approach. The authors argued that this method cannot be used to detect out-of-distribution data not included in the training dataset. We show that it can be used to detect anomalies if sound data of the same machine type as the target data is used as the outlier data in the training dataset.
3.2 A self-supervised approach for anomaly detection
In DCASE 2020 Challenge Task2, top rankers trained classifiers to predict the machine ID of the data [5, 15]. This approach assumes that the classifier outputs a false machine ID if the data is anomalous. Giri et al. named this approach the self-supervised classification-based approach. However, we found that this approach can fail on some machine IDs, where the detection performance degrades significantly. To stabilize the detection performance, our proposed method incorporates the unsupervised approach by using the NLL to distinguish the target data from the outlier data.
4 Conventional Approach
Normalizing Flows (NF) is a series of invertible transformations between an input data distribution and a known distribution. The anomaly score can be calculated as the negative log-likelihood (NLL) of the input data [19, 23, 17, 3]. However, this score depends on the smoothness of the input data, and the model fails at out-of-distribution detection.
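For intuition, the NLL score under a flow follows from the change-of-variables formula. Below is a one-dimensional sketch with a single affine transform and a standard-normal base distribution; it is purely illustrative and not the Glow/MAF architectures used later:

```python
import math

def affine_flow_nll(x, scale, shift):
    """NLL of x under z = (x - shift) / scale with a standard-normal base.

    Change of variables: log p(x) = log N(z; 0, 1) + log |dz/dx|,
    and dz/dx = 1 / scale, so log |dz/dx| = -log |scale|.
    """
    z = (x - shift) / scale
    log_base = -0.5 * (z * z + math.log(2.0 * math.pi))
    log_det = -math.log(abs(scale))
    return -(log_base + log_det)

# Data far from the learned mode receives a higher NLL (anomaly score).
assert affine_flow_nll(3.0, scale=1.0, shift=0.0) > affine_flow_nll(0.0, scale=1.0, shift=0.0)
```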
Kirichenko et al. proposed a loss function to distinguish the in-distribution data from the out-of-distribution data by using a supervised approach:

$$\mathcal{L} = \frac{1}{N_{\mathrm{in}}} \sum_{i=1}^{N_{\mathrm{in}}} \mathrm{NLL}(x_i^{\mathrm{in}}) - \frac{1}{N_{\mathrm{out}}} \sum_{j} \mathbb{1}\left[\mathrm{NLL}(x_j^{\mathrm{out}}) < a\right] \mathrm{NLL}(x_j^{\mathrm{out}}), \quad (1)$$

where $N_{\mathrm{in}}$ is the number of in-distribution data in each batch, $N_{\mathrm{out}}$ is the number of out-of-distribution data in each batch that satisfy the condition in the indicator function $\mathbb{1}[\cdot]$, and $a$ is a threshold value.
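A batch-level sketch of this loss, reconstructed from the description above (function and variable names are ours): the in-distribution NLL is minimized, while outliers whose NLL is still below the threshold have their NLL pushed up.

```python
def outlier_exposed_loss(nll_in, nll_out, a):
    """Sketch of the loss in (1).

    nll_in:  NLL values of the in-distribution data in the batch.
    nll_out: NLL values of the out-of-distribution data in the batch.
    a:       threshold; only outliers with NLL below a are penalized.
    """
    in_term = sum(nll_in) / len(nll_in)
    # Indicator 1[NLL(x_out) < a]: outliers already above a contribute nothing.
    below = [v for v in nll_out if v < a]
    out_term = sum(below) / len(below) if below else 0.0
    # Subtracting the outlier term pushes the NLL of penalized outliers up.
    return in_term - out_term

assert outlier_exposed_loss([2.0, 4.0], [1.0, 10.0], a=5.0) == 2.0
```

The indicator keeps the outlier NLL from being pushed up without bound: once an outlier's NLL exceeds the threshold, it drops out of the loss.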
5 Proposed Approach
To overcome the problems of NF models and the self-supervised classification-based approach, our method attempts to assign higher likelihood to the target data and lower likelihood to the outlier data using NF.
We train a model for each machine ID, where the data with that ID is the target data $x^{\mathrm{tgt}}$ and the sound data of the other machines of the same machine type is the outlier data $x^{\mathrm{out}}$. Assume $x^{\mathrm{tgt}}$ and $x^{\mathrm{out}}$ consist of components specific to their machine IDs ($s^{\mathrm{tgt}}$, $s^{\mathrm{out}}$) and a component shared across the same machine type ($c$). The likelihood of $x^{\mathrm{tgt}}$ and $x^{\mathrm{out}}$ can be written as

$$p(x^{\mathrm{tgt}}) = p(s^{\mathrm{tgt}}, c), \quad (2)$$
$$p(x^{\mathrm{out}}) = p(s^{\mathrm{out}}, c). \quad (3)$$
If the model is trained to assign higher likelihood to the target data and lower likelihood to the outlier data, only the components specific to each machine ID will affect the likelihood. Therefore, we can improve the detection performance of an NF model by introducing an auxiliary task in which we train the NF model to discriminate the target data from the outlier data by their likelihood. At test time, the NLL is used as the anomaly score. The anomaly score becomes higher when the data structure differs from the component specific to the target data or is close to the outlier data. The former case corresponds to anomaly detection by the unsupervised approach and the latter to the self-supervised approach, so this idea can benefit from both the unsupervised and the self-supervised classification-based approach. As illustrated in Fig. 1, the model is trained to decrease the NLL of the target data, and, until the NLL of the outlier data reaches a threshold, the model is penalized so that the NLL of the outlier data becomes higher than that of the target data.
This idea can be realized by using the loss function in (1), as

$$\mathcal{L} = \frac{1}{N_{\mathrm{tgt}}} \sum_{i=1}^{N_{\mathrm{tgt}}} \mathrm{NLL}(x_i^{\mathrm{tgt}}) - \frac{1}{N_{\mathrm{out}}} \sum_{j} \mathbb{1}\left[\mathrm{NLL}(x_j^{\mathrm{out}}) < a\right] \mathrm{NLL}(x_j^{\mathrm{out}}), \quad (4)$$

where $N_{\mathrm{tgt}}$ is the number of target data in each batch and $N_{\mathrm{out}}$ is the number of outlier data in each batch that satisfy the condition in $\mathbb{1}[\cdot]$. The threshold $a$ is decided so that the NLL of the outlier data does not go to infinity. The difference between (1) and (4) is that the loss function in (1) uses out-of-distribution data whose structure is completely different from the in-distribution data, while the loss function in (4) uses outlier data of the same machine type as the target data. In (4), the NLL of the target data and the NLL of the outlier data that satisfy the condition in $\mathbb{1}[\cdot]$ have the same impact on the loss. In unsupervised anomaly detection tasks, a lower NLL of the target data can lead to better detection performance. Therefore, we modified (4) so that the model can prioritize decreasing the NLL of the target data, as
$$\mathcal{L} = \frac{\lambda}{N_{\mathrm{tgt}}} \sum_{i=1}^{N_{\mathrm{tgt}}} \mathrm{NLL}(x_i^{\mathrm{tgt}}) - \frac{1}{N_{\mathrm{out}}} \sum_{j} \mathbb{1}\left[\mathrm{NLL}(x_j^{\mathrm{out}}) < a,\ \mathrm{NLL}(x_j^{\mathrm{out}}) < \max_i \mathrm{NLL}(x_i^{\mathrm{tgt}})\right] \mathrm{NLL}(x_j^{\mathrm{out}}), \quad (5)$$

where $\lambda$ enhances decreasing the NLL of the target data. The second condition in $\mathbb{1}[\cdot]$ removes the penalty when the NLL of the outlier data is larger than the maximum NLL of the target data in each batch. In this case, the target data and the outlier data can be completely distinguished by the NLL, and the model should focus only on decreasing the NLL of the target data.
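A batch-level sketch of our reading of the loss in (5) (variable names are ours): the weight on the target term prioritizes its NLL, and an outlier is penalized only while its NLL is below both the threshold and the largest target NLL in the batch.

```python
def proposed_loss(nll_tgt, nll_out, a, lam):
    """Sketch of the proposed loss in (5)."""
    # lam > 1 prioritizes decreasing the NLL of the target data.
    tgt_term = lam * sum(nll_tgt) / len(nll_tgt)
    # Second indicator condition: no penalty once an outlier's NLL exceeds
    # the maximum target NLL in the batch.
    cap = max(nll_tgt)
    below = [v for v in nll_out if v < a and v < cap]
    out_term = sum(below) / len(below) if below else 0.0
    return tgt_term - out_term

# With lam = 2, target NLLs {2, 4} and outlier NLLs {1, 3.5, 10}:
# only 1 and 3.5 are penalized (10 exceeds the threshold a = 5).
assert proposed_loss([2.0, 4.0], [1.0, 3.5, 10.0], a=5.0, lam=2.0) == 3.75
```

Once every outlier NLL exceeds both conditions, only the target term remains, and training reduces to ordinary maximum-likelihood estimation on the target data.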
6 Experiments

6.1 Experimental conditions
We conducted experiments using the DCASE 2020 Challenge Task2 development dataset. Each recording is a 10-second single-channel 16-kHz recording from one of six machine types (ToyCar, ToyConveyor, fan, pump, slider, valve). Each recording has a machine ID that distinguishes the three to four machines of each machine type.
To evaluate the anomaly detection performance of our proposed method, we used Glow and MAF as the NF models; both are often chosen in out-of-distribution detection and anomaly detection tasks. VAE and VIDNN were used as conventional unsupervised approaches. MobileNetV2 was used as the classifier in the self-supervised classification-based approach.
For the input, frames of the log-Mel spectrograms were computed with a frame length of 1024, a hop size of 512, and 128 Mel bins. At least 313 frames were generated for each recording, and several successive frames were concatenated to form each input. The input details and model architectures are described as follows.
VAE and VIDNN.
Five frames were used for each input, with an overlap of four frames. We trained a model for each machine ID. The model had ten linear layers with 128 dimensions each, except for the fifth layer, which had eight dimensions. The model was trained for 100 epochs using the Adam optimizer with a learning rate of .
MobileNetV2. 64 frames were used for each input, with an overlap of 48 frames. We trained a model to classify machine IDs for each machine type. The width multiplier parameter was set to 0.5. The model was trained for 20 epochs with the Adam optimizer and a learning rate of .
Glow. 32 frames were used for each input, with an overlap of 28 frames. We trained a model for each machine ID. The model had three blocks with 12 flow steps each, and each flow step had three CNN layers with 128 hidden channels. The model was trained for 100 epochs using the AdaMax optimizer with a learning rate of and a batch size of 64. $\lambda$ in (5) was set to the same value for all machines so that the first term in (5) could be lower while the second term was not ignored. For the threshold $a$ in (5), we first trained the model on all data from a machine type for several epochs with the NLL as the loss, and then set $a$ based on the NLL in the last epoch. The detection performance was not severely affected by the number of epochs in this first step. This operation was performed for each machine type, and different values were set as shown in Table 1.
MAF. Four frames were used for each input, with no overlap. We trained a model for each machine ID. The model had four MADE blocks with 512 units each. The model was trained for 100 epochs using the Adam optimizer with a learning rate of and a batch size of 64. We set $\lambda$ to the same value for all machines, and $a$ to the values in Table 1.
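The framing arithmetic above can be checked directly. A sketch assuming librosa-style center padding (1 + n // hop frames) for the spectrogram and a simple sliding window for the concatenated inputs:

```python
def n_spectrogram_frames(n_samples, hop):
    # With center padding, an STFT yields 1 + n_samples // hop frames.
    return 1 + n_samples // hop

def n_concatenated_inputs(n_frames, frames_per_input, overlap):
    # Sliding a window of frames_per_input frames with the given overlap.
    step = frames_per_input - overlap
    return 1 + (n_frames - frames_per_input) // step

# A 10 s, 16 kHz recording with hop size 512 gives the 313 frames
# mentioned in Section 6.1.
assert n_spectrogram_frames(10 * 16000, hop=512) == 313
# E.g., the MobileNetV2 inputs: 64 frames with a 48-frame overlap.
assert n_concatenated_inputs(313, frames_per_input=64, overlap=48) == 16
```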
6.2 Results and Discussions
We evaluated the detection performance with the area under the receiver operating characteristic curve (AUC) and the partial AUC (pAUC). For the NF models, we first used the conventional approach with the NLL as the loss, with additive/affine coupling layers and transformations (Glow add., Glow aff., MAF add., MAF aff.). We then used our proposed methods in (4) and (5). We trained a model for each machine ID, where the data from that machine ID is the target data and the data from the other machine IDs of the same machine type is the outlier data. Table 2 lists the average AUC and pAUC of each machine type for each model. Our proposed methods outperformed all the conventional methods except on ToyConveyor. This result indicates that our method improves the detection performance by using outlier data that can have a similar data structure to the target data. For ToyConveyor, we found that the NLL made clear distinctions between the different IDs: for example, in ToyConveyor ID 0, the NLL using Glow with affine coupling layers and the loss in (5) converged to clearly separated values for the target data and the outlier data. We consider that the different machine IDs in ToyConveyor had completely different data structures, which made our proposed methods ineffective for ToyConveyor.¹
The loss function in (5) showed almost the same performance as (4); however, the NLL of the target data became lower with (5) than with (4). For example, in ToyCar machine ID 0, the NLL of the target data using MAF converged to a lower value with (5) than with (4). As described in previous work, a lower NLL means the model obtains more accurate densities of the data, which can improve the detection performance of the unsupervised approach. Therefore, we can expect the loss function in (5) to show better results than (4) on other datasets. We also found that MAF with affine transformations could output infinities with the proposed loss functions in (4) and (5). As previously pointed out, affine transformations have a numerical stability problem. The affine transformations slightly improved the detection performance (Table 2), but they simultaneously destabilized the model and required a higher computational cost.

¹ We previously argued that the self-supervised classification-based approach shows low AUCs on ToyConveyor because the machine IDs in ToyConveyor have very similar data structures that the classifier cannot distinguish. However, the results in this paper show that the AUCs are low even though these IDs can be distinguished easily. This indicates that very similar data structures are not the cause of the low scores of the self-supervised classification-based approach.
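The AUC used throughout the evaluation can be computed from anomaly scores alone. A minimal rank-based sketch, reading the AUC as the probability that a random anomalous clip scores above a random normal clip:

```python
def auc(normal_scores, anomalous_scores):
    """AUC as the fraction of (anomalous, normal) pairs ranked correctly,
    counting ties as one half (illustrative sketch)."""
    wins = 0.0
    for a in anomalous_scores:
        for n in normal_scores:
            if a > n:
                wins += 1.0
            elif a == n:
                wins += 0.5
    return wins / (len(anomalous_scores) * len(normal_scores))

# Perfect separation gives AUC = 1.0; reversed scores give 0.0.
assert auc([0.1, 0.2], [0.8, 0.9]) == 1.0
assert auc([0.8, 0.9], [0.1, 0.2]) == 0.0
```

The pAUC restricts this comparison to the low-false-positive-rate region of the ROC curve, which matters in practice because false alarms are costly in machine monitoring.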
[Table 2: Average AUC and pAUC of each machine type for VAE, VIDNN, MAF add., MAF aff., Glow add., and Glow aff., with each NF model trained with the conventional NLL loss and with our losses in (4) and (5).]

[Table 3: Minimum AUC over machine IDs for each machine type, for MobileNetV2, Glow aff., and Glow aff. with the loss in (5).]

[Table 4: AUCs for pump ID 0 with outlier data taken from each machine type; outlier data from pump machines other than ID 0 gave 74.2.]
In Table 3, we present the results of the machine ID with the minimum AUC for each machine type. The minimum AUC of the self-supervised classification-based approach was significantly lower than that of the unsupervised approach for three of the six machine types, while that of our method was equal to or significantly higher than that of the unsupervised approach. These results indicate that our proposed method not only outperforms the unsupervised approach but also shows more stable detection performance than the self-supervised classification-based approach, and is therefore better suited for practical applications.
We also show experimentally that our method does not improve the detection performance when the outlier data has a different data structure from the target data. Using pump machine ID 0 as the target data, we trained six models with the loss in (5), each with outlier data from one of the six machine types. As shown in Table 4, the detection performance did not improve when the outlier data came from a machine type other than pump.
7 Conclusion

We proposed flow-based self-supervised density estimation methods for anomalous sound detection that outperform unsupervised approaches while maintaining greater stability than the self-supervised classification-based approach. Experimental results demonstrated that our methods improve the detection performance by using outlier data of the same machine type as the target data. Our future work will include the development of more efficient methods for deciding the hyperparameters.
References

- Understanding and mitigating exploding inverses in invertible neural networks. arXiv:2006.09347, 2020.
- WAIC, but why? Generative ensembles for robust anomaly detection. arXiv:1810.01392, 2019.
- Anomaly detection in trajectory data with normalizing flows. In IJCNN, 2020.
- NICE: Non-linear independent components estimation. arXiv:1410.8516, 2015.
- Unsupervised anomalous sound detection using self-supervised classification and group masked autoencoder for density estimation. Technical report, DCASE2020 Challenge, 2020.
- Anomalous sound detection with masked autoregressive flows and machine type dependent postprocessing. Technical report, DCASE2020 Challenge, 2020.
- Deep anomaly detection with outlier exposure. In ICLR, 2019.
- Adam: A method for stochastic optimization. In ICLR, 2015.
- Glow: Generative flow with invertible 1x1 convolutions. In NeurIPS, pp. 10215–10224, 2018.
- Why normalizing flows fail to detect out-of-distribution data. In ICML Workshop on Invertible Neural Networks and Normalizing Flows, 2020.
- Description and discussion on DCASE2020 Challenge Task2: Unsupervised anomalous sound detection for machine condition monitoring. In DCASE Workshop, 2020.
- Optimizing acoustic feature extractor for anomalous sound detection based on Neyman-Pearson lemma. In EUSIPCO, pp. 698–702, 2017.
- Do deep generative models know what they don't know? In ICLR, 2019.
- Masked autoregressive flow for density estimation. In NeurIPS, pp. 2338–2347, 2017.
- Reframing unsupervised machine condition monitoring as a supervised classification task with outlier-exposed classifiers. Technical report, DCASE2020 Challenge, 2020.
- Likelihood ratios for out-of-distribution detection. In NeurIPS, pp. 14707–14718, 2019.
- Normalizing flows for deep anomaly detection. arXiv:1912.09323, 2019.
- MobileNetV2: Inverted residuals and linear bottlenecks. In CVPR, pp. 4510–4520, 2018.
- Normalizing flows for novelty detection in industrial time series data. In ICML Workshop on Invertible Neural Networks and Normalizing Flows, 2019.
- Input complexity and out-of-distribution detection with likelihood-based generative models. In ICLR, 2020.
- Anomalous sound detection based on interpolation deep neural network. In ICASSP, pp. 271–275, 2020.
- A family of nonparametric density estimation algorithms. Communications on Pure and Applied Mathematics 66, pp. 145–164, 2013.
- AdaFlow: Domain-adaptive density estimator with application to anomaly detection and unpaired cross-domain translation. In ICASSP, pp. 3647–3651, 2019.