CL4AC: A Contrastive Loss for Audio Captioning

07/21/2021
by   Xubo Liu, et al.
University of Surrey

Automated audio captioning (AAC) is a cross-modal translation task that aims to use natural language to describe the content of an audio clip. As shown in the submissions received for Task 6 of the DCASE 2021 Challenge, this problem has received increasing interest in the community. Existing AAC systems are usually based on an encoder-decoder architecture, in which the audio signal is encoded into a latent representation and aligned with its corresponding text description, and a decoder is then used to generate the caption. However, training an AAC system often encounters the problem of data scarcity, which may lead to inaccurate representations and poor audio-text alignment. To address this problem, we propose a novel encoder-decoder framework called Contrastive Loss for Audio Captioning (CL4AC). In CL4AC, self-supervision signals derived from the original audio-text paired data are used to exploit the correspondences between audio and text by contrasting samples, which can improve the quality of the latent representation and the alignment between audio and text when training with limited data. Experiments are performed on the Clotho dataset to show the effectiveness of our proposed approach.



1 Introduction

Automated audio captioning (AAC) is a cross-modal translation task of generating a natural language description for an audio clip. It has various potential applications. For example, AAC can be used to generate subtitles for the audio content of a television program, or to generate text descriptions of audio to help the hearing impaired access audio content. It can also be used by sound search engines to achieve more accurate retrieval and recommendation, or by a surveillance system to facilitate the detection of acoustic anomalies. The AAC problem has attracted increasing interest from the acoustic signal processing and machine learning communities in recent years.

Existing AAC systems are usually based on an encoder-decoder architecture [5, 27, 2, 14]. The audio data is encoded into a latent representation and aligned with its corresponding text description; a decoder is then used to generate the caption. Training an AAC system often encounters the problem of data scarcity, which may lead to inaccurate representation and audio-text alignment. For example, Clotho [6], a popular AAC dataset used for the DCASE challenge, contains only 6974 audio samples, each with five captions. To address this problem, information from keywords has been exploited for AAC [14, 26, 7]: the keywords of a caption are tagged first and then used to assist the generation of the caption. However, due to the diversity of keywords, the tagging results for unseen audio samples may not be accurate at the inference stage. On the other hand, transfer learning techniques [20, 29] have been widely used in Task 6 of the DCASE 2021 Challenge, offering substantially improved performance. However, transfer learning relies heavily on large-scale external data [12] and pre-trained models [15].

Contrastive learning [23, 11] is a self-supervised paradigm that helps a model obtain high-quality representations. Inspired by the recent success of contrastive learning in computer vision (CV) [3] and natural language processing (NLP) [8, 10], we propose a novel encoder-decoder framework called Contrastive Loss for Audio Captioning (CL4AC). In CL4AC, self-supervision signals derived from the original audio-text paired data are used to exploit the correspondences between audio and text by contrasting samples. More precisely, we construct mismatched audio-text pairs as negative samples. Then, a contrastive learning objective is designed to maximize the difference between the representation of a matched audio-text pair and the representations derived from the negative pairs. In this way, the quality of the latent representation and the alignment between audio and text can be improved, even when the model is trained with a limited amount of data. To the best of our knowledge, contrastive learning has not previously been used for AAC in the literature.

The remainder of this paper is organised as follows. We introduce the proposed CL4AC in Section 2. Experiments are described in Section 3, and results are presented in Section 4. Finally, we conclude our work and discuss future work in Section 5. The code for this work is available on GitHub at https://github.com/liuxubo717/contrastive_loss_for_audio_captioning.

2 Contrastive Loss for Audio Captioning

In this section, we present our proposed contrastive learning framework for audio captioning (CL4AC). We first introduce the encoder-decoder architecture of CL4AC in Section 2.1. Then, we present the contrastive learning framework in Section 2.2.

2.1 Encoder-Decoder architecture

We first define the notation used in this section. The training data for AAC consist of paired audio and text data. We denote a training set of audio-text pairs by $\mathcal{D} = \{(x_n, y_n)\}_{n=1}^{N}$, where $x \in \mathbb{R}^{H \times W}$ is the log mel-spectrogram of an audio clip with $H$ and $W$ being its height and width, respectively, $y = (w_1, \dots, w_T)$ is the token sequence of a caption where $w_t$ is the $t$-th token in a caption having $T$ tokens, $x_n$ is the log mel-spectrogram of the $n$-th audio clip in the dataset, and $y_n$ is the token sequence of the $n$-th caption in the dataset.
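To make the data layout concrete, here is a minimal PyTorch-style sketch of how such audio-text pairs could be stored and served to a model; the class names, field names and shapes are illustrative and are not taken from the paper's code.

```python
from dataclasses import dataclass
from typing import List, Tuple

import torch
from torch.utils.data import Dataset


@dataclass
class AudioTextPair:
    """One training example: a log mel-spectrogram and its tokenised caption."""
    log_mel: torch.Tensor    # shape (H, W): mel bins x time frames
    token_ids: torch.Tensor  # shape (T,): caption token ids, incl. <sos>/<eos>


class AudioCaptionDataset(Dataset):
    """Serves (log_mel, token_ids) pairs to an AAC model."""

    def __init__(self, pairs: List[AudioTextPair]):
        self.pairs = pairs

    def __len__(self) -> int:
        return len(self.pairs)

    def __getitem__(self, idx: int) -> Tuple[torch.Tensor, torch.Tensor]:
        pair = self.pairs[idx]
        return pair.log_mel, pair.token_ids
```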

A sequence-to-sequence architecture with a Convolutional Neural Network (CNN) encoder and a Transformer decoder is used as the basis of our proposed framework, as shown in Figure 1. This architecture was shown to offer state-of-the-art performance [20, 29] in Task 6 of the DCASE 2021 Challenge.

Figure 1: Sequence-to-sequence architecture with CNN encoder and Transformer decoder for audio captioning. The components in the dashed box indicate the Transformer decoder.

2.1.1 CNN encoder

Pre-trained audio neural networks (PANNs) [15] have demonstrated a powerful ability to extract latent representations of audio signals for different downstream audio recognition tasks. To benefit from their high-quality audio representations, we choose PANNs as the encoder, which is described in detail in Section 3.3. The PANNs encoder takes the log mel-spectrogram $x$ of an audio clip as input and extracts its latent representation $v$. Formally:

$$v = \mathrm{Encoder}(x) \tag{1}$$

2.1.2 Transformer decoder

The Transformer model has shown state-of-the-art performance on language-related cross-modal tasks [4, 17], and is used as the decoder in our work. There are two main components in the decoder. First, each token $w_t$ in the input token sequence is converted into a word embedding $e_t \in \mathbb{R}^{d}$, where $d$ is the dimension of the word embedding, by the word2vec algorithm using the Continuous Bag of Words (CBOW) [21] and Skip-Gram [22] models trained purely on the caption corpus. The word embeddings of the tokens are then fed into the first self-attention layer to obtain their hidden states. The latent representation $v$ of an audio clip extracted by the encoder is aligned with the hidden states of the tokens through cross-attention, and the Transformer decoder outputs the audio-text representation, denoted as $H = (h_1, \dots, h_T)$, where the number of vectors equals the length of the input token sequence and the dimension of each vector is $d$. Each vector $h_t$ of the audio-text representation is calculated based on the word embeddings and the audio latent representation. Hence, each $h_t$ corresponds one-to-one to the token $w_t$ in the input token sequence, and can be used to predict the probability of the next word over the vocabulary after it is passed through the final linear layer with a softmax function. The Transformer decoder predicts the $t$-th word based on the previous tokens $w_{1:t-1}$ and the audio latent representation $v$, as follows:

$$p(w_t \mid w_{1:t-1}, v) = \mathrm{Decoder}(w_{1:t-1}, v) \tag{2}$$

The training objective is to optimize the cross-entropy (CE) loss defined in terms of the predicted words:

$$\mathcal{L}_{\mathrm{CE}} = -\frac{1}{T}\sum_{t=1}^{T} \log p(w_t \mid w_{1:t-1}, v) \tag{3}$$
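The decoding step and the CE objective can be sketched as follows in PyTorch; the layer sizes, the use of a learned nn.Embedding in place of the paper's word2vec-initialised embeddings, and the helper names are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CaptionDecoder(nn.Module):
    """Transformer decoder attending to the audio latent representation v.

    The sizes below (d_model, layers, heads) are illustrative defaults, and a
    learned nn.Embedding stands in for the paper's word2vec-initialised
    embeddings.
    """

    def __init__(self, vocab_size: int, d_model: int = 128,
                 num_layers: int = 2, num_heads: int = 4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, num_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, tokens: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
        # tokens: (B, T) token ids; v: (B, S, d_model) audio latent sequence.
        T = tokens.size(1)
        # Causal mask so position t only attends to tokens w_1..w_t.
        causal = torch.triu(torch.full((T, T), float("-inf"),
                                       device=tokens.device), diagonal=1)
        h = self.decoder(self.embed(tokens), v, tgt_mask=causal)  # (B, T, d_model)
        return self.out(h)  # word logits over the vocabulary: (B, T, vocab_size)


def caption_ce_loss(logits: torch.Tensor, targets: torch.Tensor,
                    pad_id: int = 0) -> torch.Tensor:
    """Cross-entropy of Eq. (3); targets are the input tokens shifted by one."""
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           targets.reshape(-1), ignore_index=pad_id)
```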

2.2 Contrastive learning framework

To obtain an accurate audio-text representation while the model is trained with limited data, we use self-supervision signals derived from the audio-text training data by contrasting samples. First, we construct mismatched audio-text pairs as negative samples. Then, a contrastive auxiliary task is designed to distinguish the representation of the matched audio-text pair from those derived from the negative pairs. The representations of the paired audio-text data are pulled together in the latent space, while clusters of unpaired negative data are simultaneously pushed apart by contrastive learning, as shown in Figure 2. In this way, the quality of the audio-text representation and the alignment between audio and text can be improved. A sketch of the negative-pair construction is given below.
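As an illustration, a mismatched pair could be constructed by simply re-pairing each audio clip with a caption drawn from a different item; the helper below is a hypothetical sketch, not the authors' sampling code.

```python
import random
from typing import List, Tuple

# A batch item is (log_mel, caption_tokens); types are kept generic here.
Pair = Tuple[object, object]


def make_negative_pairs(batch: List[Pair], seed: int = 0) -> List[Pair]:
    """Pair each audio clip with a caption from a different item in the batch.

    Requires at least two items so that a mismatched caption always exists.
    """
    rng = random.Random(seed)
    negatives = []
    for i, (audio, _) in enumerate(batch):
        j = rng.choice([k for k in range(len(batch)) if k != i])
        negatives.append((audio, batch[j][1]))  # mismatched audio-text pair
    return negatives
```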

Figure 2: The representations of the audio-text paired data are pulled together in the latent space while simultaneously pushing apart clusters of unpaired negative data by Contrastive Learning (CL).
Figure 3: Contrastive loss for audio captioning (CL4AC) framework. The dashed lines indicate that the vector of the audio-text representation is calculated based on the word embeddings and the audio latent representation obtained from PANNs. The last audio-text representation vector is fed to the classifier, whose output is used to calculate the Contrastive Learning (CL) loss.
| Paired caption | Unpaired caption |
| --- | --- |
| Something goes round that is playing its song | The Air is blowing some what fast outside |
| At the fair, music is playing near a carousel through the speaker | A hand held sander was used as various speeds |
| Chiming of bells, whistles and horns at a performance | A hard gravel ground is walked on by someone |
| Fair kind music is being played at the circus grounds | A person using a hard object to tap and scrape glasses |
| Polka or fair kind of music is being played | The wind is blowing and the waves are flowing |

Table 1: Examples of paired audio-text training data $(x, y)$ and negative training samples $(x, \bar{y})$. Examples are selected from the Clotho dataset, where each audio clip has five corresponding captions.

More specifically, for each anchor audio-text training pair $(x, y)$, we replace the caption $y$ by $\bar{y}$, a randomly selected caption that is not paired with $x$ in the training set $\mathcal{D}$. The mismatched audio-text pair $(x, \bar{y})$ is then used as a negative training sample. Table 1 shows examples of $(x, y)$ and $(x, \bar{y})$ from the Clotho dataset. Since the last vector $h_T$ of the audio-text representation is able to attend to the context of all input tokens and the audio feature, $h_T$ is fed into a binary classifier to predict whether the input audio and text data are paired ($z = 1$) or not ($z = 0$), producing an estimate $\hat{z}$. The contrastive learning (CL) loss for this auxiliary task is defined as:

$$\mathcal{L}_{\mathrm{CL}} = -\frac{1}{|\hat{\mathcal{D}}|} \sum_{(x, y) \in \hat{\mathcal{D}}} \big[\, z \log \hat{z} + (1 - z) \log (1 - \hat{z}) \,\big] \tag{4}$$

where $\hat{\mathcal{D}}$ is the extended training set obtained by merging the negative samples into the original training set, and $(x, y)$ is an audio-text pair drawn from $\hat{\mathcal{D}}$. The full training objective of CL4AC is:

$$\mathcal{L} = \mathcal{L}_{\mathrm{CE}} + \mathcal{L}_{\mathrm{CL}} \tag{5}$$

When the input is a negative audio-text pair, the gradient provided by the CE loss is meaningless; in this case, only the CL loss is used to update the model. The framework of CL4AC is shown in Figure 3.
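A minimal sketch of how the auxiliary classifier and the combined objective of Eqs. (4) and (5) might be implemented is given below; the tensor shapes, the per-sample masking of the CE loss for negative pairs, and all names are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PairClassifier(nn.Module):
    """Binary classifier applied to the last audio-text representation vector."""

    def __init__(self, d_model: int = 128):  # d_model is a placeholder size
        super().__init__()
        self.fc = nn.Linear(d_model, 1)

    def forward(self, h_last: torch.Tensor) -> torch.Tensor:
        # h_last: (B, d_model) -> logit that the audio and caption are paired.
        return self.fc(h_last).squeeze(-1)


def cl4ac_loss(logits: torch.Tensor, targets: torch.Tensor,
               pair_logit: torch.Tensor, is_paired: torch.Tensor,
               pad_id: int = 0) -> torch.Tensor:
    """CE loss on matched pairs only, plus the CL loss of Eq. (4) on all pairs.

    logits: (B, T, V) word logits; targets: (B, T) token ids;
    pair_logit: (B,) classifier outputs; is_paired: (B,) 1.0 for matched pairs.
    """
    # Per-token CE, then a simple mean over time (padded positions contribute
    # zero), masked so mismatched pairs provide no captioning gradient.
    ce = F.cross_entropy(logits.transpose(1, 2), targets,
                         ignore_index=pad_id, reduction="none")      # (B, T)
    ce = (ce.mean(dim=1) * is_paired).sum() / is_paired.sum().clamp(min=1)

    # Binary cross-entropy for the paired-or-not auxiliary task (Eq. 4).
    cl = F.binary_cross_entropy_with_logits(pair_logit, is_paired)
    return ce + cl  # combined objective of Eq. (5)
```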

3 Experiments

3.1 Dataset

Clotho [6] is an AAC dataset whose sound clips are collected from the Freesound platform and annotated via Amazon Mechanical Turk. Clotho v2 was released for Task 6 of the DCASE 2021 Challenge and contains 3839, 1045 and 1045 audio clips in the development, validation and evaluation splits, respectively. The sampling rate of all audio clips in the Clotho dataset is 44.1 kHz. Each audio clip has five captions. Audio clips are 15 to 30 s in duration and captions are eight to 20 words long. We merge the development and validation splits, forming a new training set with 4884 audio clips. The performance of the AAC system is evaluated on the evaluation split.

3.2 Data pre-processing

We use the original sampling rate to load the audio data, and a 64-dimensional log mel-spectrogram is calculated using the short-time Fourier transform (STFT) with a frame size of 1024 samples, a hop size of 512 samples, and a Hanning window. SpecAugment [25] is used for data augmentation.

We transform all captions in the Clotho dataset to lower case and remove punctuation. Two special tokens, “<sos>” and “<eos>”, are added at the start and end of each caption, respectively. The vocabulary is built from the words appearing in the Clotho captions.
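The pre-processing above can be sketched with librosa and plain Python as follows; the mel and STFT parameter values are the assumed ones quoted above and have not been checked against the released code.

```python
import re
from typing import List

import librosa
import numpy as np


def log_mel_spectrogram(path: str, n_mels: int = 64, n_fft: int = 1024,
                        hop_length: int = 512) -> np.ndarray:
    """Load audio at its native sampling rate and compute a log mel-spectrogram.

    The mel/STFT sizes are the assumed values quoted in Section 3.2.
    """
    y, sr = librosa.load(path, sr=None)  # sr=None keeps the original sampling rate
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=n_fft,
                                         hop_length=hop_length,
                                         n_mels=n_mels, window="hann")
    return librosa.power_to_db(mel)      # shape (n_mels, frames)


def tokenize_caption(caption: str) -> List[str]:
    """Lower-case the caption, strip punctuation and add <sos>/<eos> markers."""
    caption = re.sub(r"[^\w\s]", "", caption.lower())
    return ["<sos>"] + caption.split() + ["<eos>"]
```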

3.3 Model implementation

The CNN-10 of PANNs [15] is used as the encoder to prevent over-fitting when training with limited data. Specifically, CNN-10 consists of four convolutional blocks, each having two convolutional layers with a kernel size of 3×3. Batch normalization and ReLU are used after each convolutional layer. The numbers of channels of the four blocks are 64, 128, 256 and 512, respectively. An average pooling layer with a kernel size of 2×2 is applied between the blocks for down-sampling. Global average pooling is applied along the frequency axis after the last convolutional block, followed by two fully connected layers to align the dimension of the output with the decoder input. Two Transformer blocks with four heads are used as the decoder. The implementation of the encoder and decoder is the same as that in our DCASE 2021 Challenge system (https://github.com/XinhaoMei/DCASE2021_task6_v2), which is the highest-scoring system not using model ensembles.
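For illustration, a CNN-10-style encoder in this spirit can be sketched in PyTorch as below; the channel widths follow the PANNs CNN-10 design, while the output dimension is an arbitrary placeholder and no pretrained PANNs weights are loaded.

```python
import torch
import torch.nn as nn


class ConvBlock(nn.Module):
    """Two 3x3 convolutions, each followed by batch normalisation and ReLU."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class Cnn10Encoder(nn.Module):
    """CNN-10-style audio encoder: four conv blocks with 2x2 average pooling."""

    def __init__(self, d_model: int = 128):  # output size is a placeholder
        super().__init__()
        self.blocks = nn.ModuleList([
            ConvBlock(1, 64), ConvBlock(64, 128),
            ConvBlock(128, 256), ConvBlock(256, 512),
        ])
        self.pool = nn.AvgPool2d(kernel_size=2)
        # Two fully connected layers align the output with the decoder input.
        self.fc = nn.Sequential(nn.Linear(512, 512), nn.ReLU(inplace=True),
                                nn.Linear(512, d_model))

    def forward(self, log_mel: torch.Tensor) -> torch.Tensor:
        # log_mel: (B, mel_bins, frames); add a channel axis for Conv2d.
        x = log_mel.unsqueeze(1)
        for block in self.blocks:
            x = self.pool(block(x))  # halve the time and frequency resolution
        x = x.mean(dim=2)            # global average over frequency: (B, 512, T')
        x = x.transpose(1, 2)        # (B, T', 512)
        return self.fc(x)            # latent representation v: (B, T', d_model)
```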

We trained the proposed model using Adam [13] optimizer with a batch size of . Warm-up is used in the first epochs to increase the learning rate to the initial learning rate linearly. The learning rate is then decreased to of itself every epochs. Dropout with a rate of is applied in the proposed model to mitigate the over-fitting problem. We train the model for 30 epochs with an initial learning rate of on the training set of the Clotho dataset.

| Model | BLEU$_1$ | BLEU$_2$ | BLEU$_3$ | BLEU$_4$ | ROUGE$_\mathrm{L}$ | METEOR | CIDEr | SPICE | SPIDEr |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Baseline | 0.550 | 0.345 | 0.222 | 0.139 | 0.372 | 0.169 | 0.356 | 0.115 | 0.235 |
| CL4AC | 0.553 | 0.349 | 0.226 | 0.143 | 0.374 | 0.168 | 0.368 | 0.115 | 0.242 |

Table 2: Performance of models on the Clotho v2 evaluation set. Baseline: the baseline system described in Section 3.4, which is similar to our DCASE submission but without the transfer learning and reinforcement learning techniques. CL4AC: the proposed Contrastive Loss for Audio Captioning framework. During the inference stage, captions are generated using greedy search.

3.4 Baseline system

The baseline system is similar to our DCASE 2021 system, which uses transfer learning (TL) from an external dataset and reinforcement learning (RL) [20]. Since our motivation is to address the data scarcity problem for AAC, we train the baseline without using TL from external data. Previous studies [30] showed that although RL techniques can optimize neural networks towards non-differentiable metrics, they may generate syntactically incorrect and incomplete captions. Thus, RL is also removed from the baseline system. The hyper-parameters used for training the baseline system are similar to those of the proposed model (see Section 3.3), except that the training batch size is 32 and a different initial learning rate is used.

3.5 Evaluation

During the inference stage, the mel-spectrogram of an audio clip and the special token “<sos>” are fed into the encoder and the decoder, respectively, to generate the first token. The following tokens are then predicted based on the previously generated tokens until the token “<eos>” or the maximum length (35 words in our experiments) is reached. The greedy search strategy is used to generate captions.
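The greedy decoding loop can be sketched as follows, assuming a decoder with the interface sketched in Section 2.1.2 and integer ids for the special tokens; this is illustrative, not the released inference code.

```python
import torch


@torch.no_grad()
def greedy_decode(decoder, v: torch.Tensor, sos_id: int, eos_id: int,
                  max_len: int = 35) -> list:
    """Generate one caption token-by-token from the audio latent representation v.

    decoder(tokens, v) is expected to return word logits of shape (1, T, vocab).
    """
    tokens = torch.tensor([[sos_id]], dtype=torch.long, device=v.device)
    for _ in range(max_len):
        logits = decoder(tokens, v)              # (1, T, vocab)
        next_id = logits[0, -1].argmax().item()  # greedy: most probable next word
        tokens = torch.cat(
            [tokens, torch.tensor([[next_id]], device=v.device)], dim=1)
        if next_id == eos_id:
            break
    return tokens[0].tolist()
```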

We evaluate the performance of the proposed framework using the same metrics adopted in Task 6 of the DCASE 2021 Challenge, including the machine translation metrics BLEU [24], METEOR [16] and ROUGE [18], and the captioning metrics CIDEr [28], SPICE [1] and SPIDEr [19]. BLEU$_n$ measures the quality of the generated text by calculating the precision of $n$-grams inside the text, and is an inexpensive metric for measuring the correspondence between the generated text and the ground truth; a higher BLEU$_n$ generally implies better precision and more fluent text. SPIDEr, a combination of SPICE and CIDEr, was designed for measuring image captioning performance; it considers the scene graph of the generated caption and the term frequency-inverse document frequency (TF-IDF) of its $n$-grams. By considering the scene graph and the TF-IDF of the $n$-grams, the metric focuses on the relationships among objects and the properties of the text, which encourages semantic fidelity to the audio and syntactic fluency of the language.

4 Results

Table 2 shows the performance of our proposed method on the Clotho v2 evaluation set. By adopting the contrastive loss during training, all the metrics except METEOR increased on the evaluation set. For BLEU$_1$, BLEU$_2$, BLEU$_3$ and BLEU$_4$, the relative improvements brought by the contrastive loss are 0.55%, 1.16%, 1.80% and 2.88%, respectively. The $n$ in BLEU$_n$ refers to the matching of $n$-grams between the predicted results and the ground truths. The increasing relative improvements from BLEU$_1$ to BLEU$_4$ show that our proposed method generates more matching $n$-grams, indicating more fluent and higher-quality captions. In addition, the captioning metrics CIDEr and SPIDEr obtained relative improvements of 3.37% and 2.98%, respectively. Better CIDEr and SPIDEr scores indicate that the captions are more semantically faithful to the audio clip and more fluent. The improvements in both the machine translation and captioning metrics show the effectiveness of CL4AC when trained with limited data.
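For reference, the relative improvements quoted above follow directly from the Table 2 scores:

```python
# Relative improvements of CL4AC over the baseline, computed from Table 2.
baseline = {"BLEU4": 0.139, "CIDEr": 0.356, "SPIDEr": 0.235}
cl4ac = {"BLEU4": 0.143, "CIDEr": 0.368, "SPIDEr": 0.242}

for metric in baseline:
    rel = 100 * (cl4ac[metric] - baseline[metric]) / baseline[metric]
    print(f"{metric}: +{rel:.2f}%")  # BLEU4: +2.88%, CIDEr: +3.37%, SPIDEr: +2.98%
```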

5 Conclusions

This paper addressed the problem of data scarcity for AAC, which may lead to inaccurate representation and audio-text alignment. To alleviate this issue, a novel encoder-decoder framework called Contrastive Loss for Audio Captioning (CL4AC) was proposed to learn a better cross-modal representation. In CL4AC, self-supervision signals derived from the original audio-text data are used to exploit the correspondences between audio and text by contrasting samples in a limited-data setting. Experimental results on BLEU, CIDEr and SPIDEr showed the effectiveness of the proposed approach, with a relative improvement of up to 3.37% compared to the baseline. In future work, we will explore further contrastive representation learning on audio-text data with different architectures, such as Momentum Contrast (MoCo) [9] and SimCLR [3].

6 Acknowledgment

This work is partly supported by grant EP/T019751/1 from the Engineering and Physical Sciences Research Council (EPSRC), a Newton Institutional Links Award from the British Council, titled “Automated Captioning of Image and Audio for Visually and Hearing Impaired” (Grant number 623805725) and a Research Scholarship from the China Scholarship Council (CSC) No. 202006470010.

References

  • [1] P. Anderson, B. Fernando, M. Johnson, and S. Gould (2016) SPICE: semantic propositional image caption evaluation. In European conference on Computer Vision, pp. 382–398. Cited by: §3.5.
  • [2] K. Chen, Y. Wu, Z. Wang, X. Zhang, F. Nian, S. Li, and X. Shao (2020) Audio captioning based on transformer and pre-trained cnn. In Proceedings of the Detection and Classification of Acoustic Scenes and Events 2020 Workshop (DCASE2020). Tokyo, Japan, pp. 21–25. Cited by: §1.
  • [3] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton (2020) A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, pp. 1597–1607. Cited by: §1, §5.
  • [4] M. Cornia, M. Stefanini, L. Baraldi, and R. Cucchiara (2020) Meshed-memory transformer for image captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10578–10587. Cited by: §2.1.2.
  • [5] K. Drossos, S. Adavanne, and T. Virtanen (2017) Automated audio captioning with recurrent neural networks. In 2017 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), pp. 374–378. Cited by: §1.
  • [6] K. Drossos, S. Lipping, and T. Virtanen (2020) Clotho: an audio captioning dataset. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 736–740. Cited by: §1, §3.1.
  • [7] A. Ö. Eren and M. Sert (2020) Audio captioning based on combined audio and semantic embeddings. In 2020 IEEE International Symposium on Multimedia (ISM), pp. 41–48. Cited by: §1.
  • [8] B. Gunel, J. Du, A. Conneau, and V. Stoyanov (2021) Supervised contrastive learning for pre-trained language model fine-tuning. In International Conference on Learning Representations, Cited by: §1.
  • [9] K. He, H. Fan, Y. Wu, S. Xie, and R. Girshick (2020) Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9729–9738. Cited by: §5.
  • [10] Q. Huang, T. Ko, H. L. Tang, X. Liu, and B. Wu (2021) Token-level supervised contrastive learning for punctuation restoration. arXiv preprint arXiv:2107.09099. Cited by: §1.
  • [11] P. Khosla, P. Teterwak, C. Wang, A. Sarna, Y. Tian, P. Isola, A. Maschinot, C. Liu, and D. Krishnan (2020) Supervised contrastive learning. In Advances in Neural Information Processing Systems, Vol. 33, pp. 18661–18673. Cited by: §1.
  • [12] C. D. Kim, B. Kim, H. Lee, and G. Kim (2019) AudioCaps: generating captions for audios in the wild. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 119–132. Cited by: §1.
  • [13] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §3.3.
  • [14] Y. Koizumi, R. Masumura, K. Nishida, M. Yasuda, and S. Saito (2020) A transformer-based audio captioning model with keyword estimation. arXiv preprint arXiv:2007.00222. Cited by: §1.
  • [15] Q. Kong, Y. Cao, T. Iqbal, Y. Wang, W. Wang, and M. D. Plumbley (2020) Panns: large-scale pretrained audio neural networks for audio pattern recognition. IEEE/ACM Transactions on Audio, Speech, and Language Processing 28, pp. 2880–2894. Cited by: §1, §2.1.1, §3.3.
  • [16] A. Lavie and A. Agarwal (2007) METEOR: an automatic metric for mt evaluation with high levels of correlation with human judgments. In Proceedings of the Second Workshop on Statistical Machine Translation, pp. 228–231. Cited by: §3.5.
  • [17] X. Li, X. Yin, C. Li, P. Zhang, X. Hu, L. Zhang, L. Wang, H. Hu, L. Dong, F. Wei, et al. (2020) Oscar: object-semantics aligned pre-training for vision-language tasks. In European Conference on Computer Vision, pp. 121–137. Cited by: §2.1.2.
  • [18] C. Lin (2004) ROUGE: a package for automatic evaluation of summaries. In Text Summarization Branches Out, pp. 74–81. Cited by: §3.5.
  • [19] S. Liu, Z. Zhu, N. Ye, S. Guadarrama, and K. Murphy (2017) Improved image captioning via policy gradient optimization of spider. In Proceedings of the IEEE international conference on Computer Vision, pp. 873–881. Cited by: §3.5.
  • [20] X. Mei, Q. Huang, X. Liu, G. Chen, J. Wu, Y. Wu, J. Zhao, S. Li, T. Ko, H. L. Tang, X. Shao, M. D. Plumbley, and W. Wang (2021-07) An encoder-decoder based audio captioning system with transfer and reinforcement learning for DCASE challenge 2021 task 6. Technical report DCASE2021 Challenge. Cited by: §1, §2.1, §3.4.
  • [21] T. Mikolov, K. Chen, G. Corrado, and J. Dean (2013) Efficient estimation of word representations in vector space. In 1st International Conference on Learning Representations, ICLR 2013, Scottsdale, Arizona, USA, May 2-4, 2013, Workshop Track Proceedings, Y. Bengio and Y. LeCun (Eds.), External Links: Link Cited by: §2.1.2.
  • [22] T. Mikolov, I. Sutskever, K. Chen, G. Corrado, and J. Dean (2013) Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2, NIPS’13, Red Hook, NY, USA, pp. 3111–3119. Cited by: §2.1.2.
  • [23] A. V. D. Oord, Y. Li, and O. Vinyals (2018) Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748. Cited by: §1.
  • [24] K. Papineni, S. Roukos, T. Ward, and W. Zhu (2002) BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pp. 311–318. Cited by: §3.5.
  • [25] D. S. Park, W. Chan, Y. Zhang, C. Chiu, B. Zoph, E. D. Cubuk, and Q. V. Le (2019) SpecAugment: a simple data augmentation method for automatic speech recognition. arXiv preprint arXiv:1904.08779. Cited by: §3.2.
  • [26] D. Takeuchi, Y. Koizumi, Y. Ohishi, N. Harada, and K. Kashino (2020) Effects of word-frequency based pre-and post-processings for audio captioning. arXiv preprint arXiv:2009.11436. Cited by: §1.
  • [27] A. Tran, K. Drossos, and T. Virtanen (2020) WaveTransformer: a novel architecture for audio captioning based on learning temporal and time-frequency information. arXiv preprint arXiv:2010.11098. Cited by: §1.
  • [28] R. Vedantam, C. Lawrence Zitnick, and D. Parikh (2015) CIDEr: consensus-based image description evaluation. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pp. 4566–4575. Cited by: §3.5.
  • [29] W. Yuan, Q. Han, D. Liu, X. Li, and Z. Yang (2021-07) The DCASE 2021 challenge task 6 system: automated audio captioning with weakly supervised pre-training and word selection methods. Technical report DCASE2021 Challenge. Cited by: §1, §2.1.
  • [30] Y. Zhang, S. Sun, M. Galley, Y. Chen, C. Brockett, X. Gao, J. Gao, J. Liu, and B. Dolan (2019) DialoGPT: large-scale generative pre-training for conversational response generation. arXiv preprint arXiv:1911.00536. Cited by: §3.4.