MIMII Dataset: Sound Dataset for Malfunctioning Industrial Machine Investigation and Inspection

09/20/2019
by   Harsh Purohit, et al.

Factory machinery is prone to failure or breakdown, resulting in significant expenses for companies. Hence, there is a rising interest in machine monitoring using different sensors including microphones. In the scientific community, the emergence of public datasets has led to advancements in acoustic detection and classification of scenes and events, but there are no public datasets that focus on the sound of industrial machines under normal and anomalous operating conditions in real factory environments. In this paper, we present a new dataset of industrial machine sounds that we call a sound dataset for malfunctioning industrial machine investigation and inspection (MIMII dataset). Normal sounds were recorded for different types of industrial machines (i.e., valves, pumps, fans, and slide rails), and to resemble a real-life scenario, various anomalous sounds were recorded (e.g., contamination, leakage, rotating unbalance, and rail damage). The purpose of releasing the MIMII dataset is to assist the machine-learning and signal-processing community with their development of automated facility maintenance. The MIMII dataset is freely available for download at: https://zenodo.org/record/3384388


1 Introduction

The increasing demand for automatic machine inspection stems from the need for a better quality of factory equipment maintenance. The discovery of malfunctioning machine parts mainly depends on the experience of the field engineer, but currently there is a shortage of field experts due to the increased number of requests for inspection. An efficient and affordable solution to this problem is urgently required.

In the past decade, industrial Internet of Things (IoT) and data-driven techniques have been revolutionizing the manufacturing industry, and different approaches have been undertaken for monitoring the state of machinery. Examples include vibration sensor-based approaches [1, 2, 3, 4], temperature sensor-based approaches [5], and pressure sensor-based approaches [6]. Another approach is to detect anomalies from sound by using technologies for acoustic scene classification and event detection [7, 8, 9, 10, 11, 12, 13]. Remarkable advancements have been made in the classification of acoustic scenes and the detection of acoustic events, and there are many promising state-of-the-art studies in this vein [14, 15, 16]. It is clear that the emergence of numerous open benchmark datasets [17, 18, 19, 20] is essential for the advancement of the research field. However, to the best of our knowledge, there is no public dataset that contains different types of machine sounds in real factory environments.

In this paper, we introduce a new dataset of machine sounds under normal and anomalous operating conditions in real factory environments. We include the sound of four machine types—(i) valves, (ii) pumps, (iii) fans, and (iv) slide rails—and for each type of machine, we consider seven different product models. We assume that the main task is to find an anomalous condition of the machine during a 10-second sound segment in an unsupervised learning situation. In other words, only normal machine sounds can be used in the training phase, and we have to correctly distinguish between a normal machine sound and an abnormal machine sound in the test phase. The main contributions of this paper are as follows: (1) We created an open dataset for malfunctioning industrial machine investigation and inspection (MIMII), the first of its kind. We have released this dataset, and it is freely available for download at

https://zenodo.org/record/3384388

. This dataset contains 26,092 sound files for normal conditions of four different machine types. It also contains real-life anomalous sound files for each category of the machines. (2) Using our developed dataset, we have explored an autoencoder-based model for each type of machine with various noise conditions. These results can be taken as a benchmark to improve the accuracy of anomaly detection in the MIMII dataset.

In Section 2 of this paper, we describe our recording environment and the setup. The details of the dataset content are provided in Section 3. The autoencoder-based detection benchmark and results are discussed in Section 4. We conclude in Section 5 with a brief summary and mention of future work.

2 Recording Environment and Setup

Figure 1: Circular microphone array.
Figure 2: Schematic experimental setup for dataset recording.

The dataset was collected using a TAMAGO-03 microphone manufactured by System In Frontier Inc. [21]. It is a circular microphone array that consists of eight distinct microphones, the details of which are shown in Fig. 1. By using this microphone array, we can evaluate not only single-channel-based approaches but also multi-channel-based ones. The microphone array was kept at a distance of 50 cm from the machine (10 cm in the case of valves), and 10-second sound segments were recorded. The dataset contains eight separate channels for each segment. Figure 2 depicts the recording setup with the direction and distance for each kind of machine. Note that each machine sound was recorded in a separate session. Under the running condition, the sound of the machine was recorded as 16-bit audio signals sampled at 16 kHz in a reverberant environment. Apart from the target machine sound, background noise in multiple real factories was continuously recorded and later mixed with the target machine sound for simulating real environments. For recording the background noise, we used the same microphone array as for the target machine sound.

3 Dataset Content

The MIMII dataset contains the sound of four different types of machines: valves, pumps, fans, and slide rails. The valves are solenoid valves that are repeatedly opened and closed. The pumps are water pumps that continuously drain water from a pool and discharge it back. The fans are industrial fans, which are used to provide a continuous flow of gas or air in factories. The slide rails are linear slide systems, each consisting of a moving platform and a stage base. The sounds produced by these machines range from stationary to non-stationary, have different features, and present different degrees of difficulty. Figure 3 depicts power spectrograms of the sound of all four machine types, clearly showing that each machine has its own unique sound characteristics.


(a) Valve (model ID: 00)

(b) Pump (model ID: 00)

(c) Fan (model ID: 00)

(d) Slide rail (model ID: 00)
Figure 3: Examples of power spectrograms under normal conditions.

The list of sound files for each machine type is provided in Table 1. Each type of machine includes seven individual machines, which may be different product models. Since large datasets incorporating real-life complexity are needed to train models effectively, we recorded a total of 26,092 normal sound segments across all individual machines. In addition, different real-life anomalous scenarios were considered for each kind of machine: contamination, leakage, rotating unbalance, rail damage, and so on. The various running conditions are listed in Table 2. The number of anomalous sound segments for each machine type is small because we regard the main target of our dataset as an unsupervised learning scenario and treat the anomalous segments as part of the test data.

Table 1: MIMII dataset content details — the numbers of 10-second segments recorded under normal and anomalous conditions for each machine type (valve, pump, fan, slide rail) and model ID (00–06), together with the totals.
Machine type | Operations                                 | Examples of anomalous conditions
Valve        | Open/close repeated with different timing  | More than two kinds of contamination
Pump         | Suction from / discharge to a water pool   | Leakage, contamination, clogging, etc.
Fan          | Normal operation                           | Unbalance, voltage change, clogging, etc.
Slide rail   | Sliding repeated at different speeds       | Rail damage, loose belt, no grease, etc.
Table 2: List of operations and anomalous conditions.

As explained in Section 2, the background noise recorded in multiple real factories was mixed with the target machine sound. The eight channels are treated separately when mixing the original sounds with the noise. For a given signal-to-noise ratio (SNR) of γ dB, the noise-mixed data of each machine model were created by the following steps:

  1. The average power over all segments of the machine model, P_signal, was calculated.

  2. For each segment x of the machine model,

    1. a background-noise segment n is randomly selected, and its power P_noise is tuned so that γ = 10 log10(P_signal / P_noise); and

    2. the noise-mixed data is calculated by adding the target-machine segment x and the power-tuned background-noise segment n.
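The mixing step above can be sketched in NumPy as follows. This is a minimal single-channel illustration, not the released tooling; the array names and the use of a precomputed average power are assumptions for the sketch.

```python
import numpy as np

def mix_at_snr(signal, noise, p_signal, snr_db):
    """Scale `noise` so that 10*log10(p_signal / p_noise) == snr_db,
    then add it to `signal`.

    signal:   one 10-second machine-sound segment (1-D array)
    noise:    a background-noise segment of the same length
    p_signal: average power over all segments of this machine model
    snr_db:   target signal-to-noise ratio in dB
    """
    p_noise = np.mean(noise ** 2)                        # current noise power
    target_p_noise = p_signal / (10 ** (snr_db / 10.0))  # power the noise should have
    gain = np.sqrt(target_p_noise / p_noise)             # amplitude scaling factor
    return signal + gain * noise

rng = np.random.default_rng(0)
sig = rng.standard_normal(16000 * 10)    # stand-in for a 10-s segment at 16 kHz
noise = rng.standard_normal(16000 * 10)  # stand-in for a background-noise segment
p_sig = np.mean(sig ** 2)                # here: power of this one segment
mixed = mix_at_snr(sig, noise, p_sig, snr_db=0.0)
```

After mixing, the power of the scaled noise equals P_signal / 10^(γ/10) by construction, so the requested SNR is met exactly.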

4 Experiment

An example of benchmarking is presented in this section. Our main goal is to detect anomalous sounds in an unsupervised learning scenario, as discussed in Section 1. Several studies have successfully used autoencoders for unsupervised anomaly detection [22, 23, 12, 24], so here, we evaluate an autoencoder-based unsupervised anomaly detector.

We used only the first channel of the microphone array (“No. 1” in Fig. 1). The log-Mel spectrogram is used as the input feature; to calculate it, we use a frame size of 1024, a hop size of 512, and 64 Mel filter banks. Five consecutive frames are concatenated to form a 320-dimensional input feature vector x. The parameters of the encoder network E and the decoder network D are trained to minimize the reconstruction-error loss

  L = (1/N) Σ_{i=1}^{N} ‖x_i − D(E(x_i))‖²,   (1)

where N is the number of training feature vectors.
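The five-frame concatenation can be sketched in NumPy as follows. The log-Mel matrix `logmel` (64 rows, one column per frame) is assumed to have been computed beforehand, e.g. with a frame size of 1024 and a hop size of 512; here a random matrix stands in for it.

```python
import numpy as np

def stack_frames(logmel, context=5):
    """Concatenate `context` consecutive log-Mel frames into one feature vector.

    logmel:  array of shape (n_mels, n_frames), here n_mels = 64
    returns: array of shape (n_frames - context + 1, n_mels * context)
    """
    n_mels, n_frames = logmel.shape
    out = np.empty((n_frames - context + 1, n_mels * context))
    for t in range(n_frames - context + 1):
        # frames t .. t+context-1, flattened frame-by-frame into one vector
        out[t] = logmel[:, t:t + context].T.reshape(-1)
    return out

logmel = np.random.default_rng(1).standard_normal((64, 312))  # dummy segment
features = stack_frames(logmel)   # shape (308, 320): 64 mels x 5 frames
```

With 64 Mel bins and a context of five frames, each feature vector has 64 × 5 = 320 dimensions, matching the input size described above.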

Our basic assumption is that this trained model will have a high reconstruction error for anomalous machine sounds. The autoencoder network structure for the experiment is summarized as follows: both the encoder network E and the decoder network D consist of fully connected layers, where FC(a, b, f) denotes a fully connected layer with a input neurons, b output neurons, and activation function f. The ReLUs are Rectified Linear Units [25]. The network is trained with the Adam [26] optimization technique for 50 epochs.
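To illustrate reconstruction-error scoring concretely, the sketch below substitutes a PCA projection for the trained encoder/decoder pair. This is a linear stand-in under synthetic data, not the autoencoder benchmarked here: vectors near the learned subspace reconstruct well, while off-subspace (anomalous) vectors do not.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" training features: 320-dim vectors lying near an 8-dim subspace.
basis = rng.standard_normal((320, 8))
train = rng.standard_normal((1000, 8)) @ basis.T \
        + 0.01 * rng.standard_normal((1000, 320))

# Fit an 8-dim linear bottleneck (PCA) as a stand-in for E and D.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
components = vt[:8]                        # rows span the learned subspace

def reconstruction_error(x):
    """Squared error between x and its round trip through the bottleneck."""
    z = (x - mean) @ components.T          # "encode": 320 -> 8
    x_hat = z @ components + mean          # "decode": 8 -> 320
    return np.sum((x - x_hat) ** 2, axis=-1)

normal = rng.standard_normal((100, 8)) @ basis.T \
         + 0.01 * rng.standard_normal((100, 320))
anomalous = rng.standard_normal((100, 320))   # off-subspace vectors

print(reconstruction_error(normal).mean()
      < reconstruction_error(anomalous).mean())  # True
```

The nonlinear autoencoder used in the benchmark follows the same recipe: score each feature vector by its reconstruction error and flag high-error segments as anomalous.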

For each machine type and model ID, all the segments were split into a training dataset and a test dataset: all the anomalous segments were regarded as test data, the same number of normal segments was randomly selected and added to the test data, and all remaining normal segments formed the training data. Using the training dataset, which consists only of normal segments, a separate autoencoder was trained for each machine type and model ID. Anomaly detection was performed for each segment by thresholding the reconstruction error averaged over the ten seconds, and the area under the curve (AUC) was calculated on the test dataset for each machine type and model ID. In addition, we considered three different levels of SNR with respect to the factory noise.
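The AUC for such a detector can be computed directly from the per-segment reconstruction errors, without fixing a threshold. A minimal rank-based (Mann–Whitney) version is sketched below; the score arrays are illustrative values, not results from the benchmark.

```python
import numpy as np

def auc(scores_normal, scores_anomalous):
    """AUC = probability that a randomly chosen anomalous segment scores
    higher than a randomly chosen normal one (ties count half)."""
    n, a = len(scores_normal), len(scores_anomalous)
    wins = 0.0
    for s in scores_anomalous:
        wins += np.sum(s > scores_normal) + 0.5 * np.sum(s == scores_normal)
    return wins / (n * a)

# Illustrative reconstruction errors averaged over each 10-second segment.
normal_err = np.array([0.8, 1.0, 1.1, 0.9])
anomalous_err = np.array([1.5, 2.0, 0.95, 1.8])
print(auc(normal_err, anomalous_err))  # 0.875
```

An AUC of 1.0 means the two error distributions separate perfectly; 0.5 means the detector is no better than chance.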

Table 3 lists the AUCs averaged over three training runs with independent initializations. The AUCs for valves are clearly lower than those for the other machines. The sound signals of valves are non-stationary (in particular, impulsive and sparse in time), so the reconstruction error averaged over time tends to be small; this makes it difficult to detect anomalies for valves. In contrast, anomalies are easier to detect for fans because their sound signals are stationary. Moreover, for some machine models, the AUC decreases rapidly as the noise level increases. These results indicate that the degradation caused by non-stationarity and noise must be overcome for unsupervised anomalous sound detection.

Table 3: AUCs for all machine types and model IDs (00–06), together with per-type averages, at each of the three input SNR levels.

5 Conclusion and Future Directions

In this paper, we introduced the MIMII dataset, a real-world dataset for investigating the malfunctioning behavior of industrial machines. We collected 26,092 sound segments of normal condition and 6,065 sound segments of anomalous condition and mixed the background noise recorded in multiple real factories with the machine-sound segments for simulating real environments. In addition, using the MIMII dataset, we presented our evaluation for autoencoder-based unsupervised anomalous sound detection. We observed that non-stationary machine sound signals and noise are the key issues to be overcome in the development of an unsupervised anomaly detector. These results can be taken as a benchmark to improve the accuracy of anomaly detection in the MIMII dataset.

The MIMII dataset is freely available for download at https://zenodo.org/record/3384388. To the best of our knowledge, this dataset is the first of its kind to address the problem of detecting anomalous conditions in industrial machinery through machine sounds. As benchmarking is an important aspect in data-driven methods, we believe that our MIMII dataset will be very useful to the research community. We are releasing this data to accelerate research in the area of audio event detection, specifically for machine sounds. This dataset can be applied to other use cases as well: for example, to restrict the training on a specific number of machine models and then test on the remaining machine models. This study will be useful for measuring the domain adaptation capability of the different methods applied on machines from different manufacturers. If the community takes an interest in our dataset and validates its usage, we will improve the current version with additional meta-data related to different anomalies.

References

  • [1] M. Yu, D. Wang, and M. Luo, “Model-based prognosis for hybrid systems with mode-dependent degradation behaviors,” IEEE Transactions on Industrial Electronics, vol. 61, no. 1, pp. 546–554, 2013.
  • [2] T. Ishibashi, A. Yoshida, and T. Kawai, “Modelling of asymmetric rotor and cracked shaft,” in Proceedings of the 2nd Japanese Modelica Conference, no. 148, 2019, pp. 180–186.
  • [3] E. P. Carden and P. Fanning, “Vibration based condition monitoring: A review,” Structural health monitoring, vol. 3, no. 4, pp. 355–377, 2004.
  • [4] G. S. Galloway, V. M. Catterson, T. Fay, A. Robb, and C. Love, “Diagnosis of tidal turbine vibration data through deep neural networks,” in Proceedings of the 3rd European Conference of the Prognostics and Health Management Society, 2016.
  • [5] G. Lodewijks, W. Li, Y. Pang, and X. Jiang, “An application of the IoT in belt conveyor systems,” in Proceedings of the International Conference on Internet and Distributed Computing Systems (IDCS), 2016, pp. 340–351.
  • [6] R. F. Salikhov, Y. P. Makushev, G. N. Musagitova, L. U. Volkova, and R. S. Suleymanov, “Diagnosis of fuel equipment of diesel engines in oil-and-gas machinery and facilities,” AIP Conference Proceedings, vol. 2141, no. 1, p. 050009, 2019.
  • [7] Y. Koizumi, S. Murata, N. Harada, S. Saito, and H. Uematsu, “SNIPER: Few-shot learning for anomaly detection to minimize false-negative rate with ensured true-positive rate,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019, pp. 915–919.
  • [8] Y. Kawachi, Y. Koizumi, S. Murata, and N. Harada, “A two-class hyper-spherical autoencoder for supervised anomaly detection,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019, pp. 3047–3051.
  • [9] M. Yamaguchi, Y. Koizumi, and N. Harada, “AdaFlow: Domain-adaptive density estimator with application to anomaly detection and unpaired cross-domain translation,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019, pp. 3647–3651.
  • [10] Y. Kawaguchi, R. Tanabe, T. Endo, K. Ichige, and K. Hamada, “Anomaly detection based on an ensemble of dereverberation and anomalous sound extraction,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019, pp. 865–869.
  • [11] Y. Koizumi, S. Saito, H. Uematsu, Y. Kawachi, and N. Harada, “Unsupervised detection of anomalous sound based on deep learning and the Neyman–Pearson lemma,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 27, no. 1, pp. 212–224, 2018.
  • [12] Y. Kawaguchi and T. Endo, “How can we detect anomalies from subsampled audio signals?” in Proceedings of the IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP), 2017, pp. 1–6.
  • [13] Y. Kawaguchi, “Anomaly detection based on feature reconstruction from subsampled audio signals,” in Proceedings of the European Signal Processing Conference (EUSIPCO), 2018, pp. 2524–2528.
  • [14] A. Mesaros, T. Heittola, E. Benetos, P. Foster, M. Lagrange, T. Virtanen, and M. D. Plumbley, “Detection and classification of acoustic scenes and events: Outcome of the DCASE 2016 Challenge,” IEEE/ACM Transactions on Audio, Speech and Language Processing, vol. 26, no. 2, pp. 379–393, 2018.
  • [15] S. S. R. Phaye, E. Benetos, and Y. Wang, “SubSpectralNet – using sub-spectrogram based convolutional neural networks for acoustic scene classification,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019, pp. 825–829.
  • [16] Z. Podwinska, I. Sobieraj, B. M. Fazenda, W. J. Davies, and M. D. Plumbley, “Acoustic event detection from weakly labeled data using auditory salience,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019, pp. 41–45.
  • [17] J. F. Gemmeke, D. P. W. Ellis, D. Freedman, A. Jansen, W. Lawrence, R. C. Moore, M. Plakal, and M. Ritter, “Audio Set: An ontology and human-labeled dataset for audio events,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017, pp. 776–780.
  • [18] E. Fonseca, J. Pons, X. Favory, F. Font, D. Bogdanov, A. Ferraro, S. Oramas, A. Porter, and X. Serra, “Freesound datasets: A platform for the creation of open audio datasets,” in Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), 2017, pp. 486–493.
  • [19] G. Dekkers, S. Lauwereins, B. Thoen, M. W. Adhana, H. Brouckxon, B. V. den Bergh, T. van Waterschoot, B. Vanrumste, M. Verhelst, and P. Karsmakers, “The SINS database for detection of daily activities in a home environment using an acoustic sensor network,” in Proceedings of the Workshop on Detection and Classification of Acoustic Scenes and Events (DCASE), 2017, pp. 32–36.
  • [20] Y. Koizumi, S. Saito, H. Uematsu, N. Harada, and K. Imoto, “ToyADMOS: A dataset of miniature-machine operating sounds for anomalous sound detection,” in Proceedings of the Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), 2019, to appear.
  • [21] System In Frontier Inc. (http://www.sifi.co.jp/system/modules/pico/index.php?content_id=39&ml_lang=en).
  • [22] T. Tagawa, Y. Tadokoro, and T. Yairi, “Structured denoising autoencoder for fault detection and analysis,” in Proceedings of the Asian Conference on Machine Learning (ACML), 2015, pp. 96–111.
  • [23] E. Marchi, F. Vesperini, F. Eyben, S. Squartini, and B. Schuller, “A novel approach for automatic acoustic novelty detection using a denoising autoencoder with bidirectional LSTM neural networks,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2015, pp. 1996–2000.
  • [24] D. Oh and I. Yun, “Residual error based anomaly detection using auto-encoder in SMD machine sound,” Sensors, vol. 18, no. 5, p. 1308, 2018.
  • [25] K. Jarrett, K. Kavukcuoglu, M. Ranzato, and Y. LeCun, “What is the best multi-stage architecture for object recognition?” in Proceedings of the 12th IEEE International Conference on Computer Vision (ICCV), 2009, pp. 2146–2153.
  • [26] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.