1 Introduction

Videos have become the next frontier in artificial intelligence. The rich semantics they contain make them a challenging data type, posing problems at the perceptual, reasoning and computational levels. Mimicking the learning process and knowledge extraction that humans develop from visual and audio perception remains an open research question, and videos contain all this information in a format manageable for science and research.
Videos are used in this work for two main reasons. Firstly, they naturally integrate both visual and audio data, providing a weak labeling of one modality with respect to the other. Secondly, the high volume of both visual and audio data allows training machine learning algorithms whose models are governed by a large number of parameters. The huge-scale video archives available online, and the increasing number of video cameras that constantly monitor our world, offer more data than there is computational power available to process it.
The popularization of deep neural networks among the computer vision and audio communities has defined a common framework boosting multimodal research. Tasks like video sonorization, speaker impersonation or self-supervised feature learning have exploited the opportunities offered by artificial neurons to project images, text and audio in a feature space where bridges across modalities can be built.
This work exploits the relation between the visual and audio contents in a video clip to learn a joint embedding space with deep neural networks. Two multilayer perceptrons (MLPs), one for visual features and a second one for audio features, are trained to be mapped into the same cross-modal representation. We adopt a self-supervised approach, as we exploit the unsupervised correspondence between the audio and visual tracks in any video clip.
We propose a joint audiovisual space to address a retrieval task in which a query can be formulated from either of the two modalities. As depicted in Figure 1, either a video or an audio clip can be used as a query to search for its matching pair in a large collection of videos. For example, an animated GIF could be sonorized by finding an adequate audio track, or an audio recording could be illustrated with a related video.
In this paper, we present a simple yet very effective model for retrieving documents with a fast and lightweight search. We do not address an exact temporal alignment between the two modalities, which would require a much higher computational effort.
The paper is structured as follows. Section 2 introduces the related work on audiovisual embeddings learned with neural networks. Section 3 presents the architecture of our model and Section 4 how it was trained. Experiments are reported in Section 5 and final conclusions are drawn in Section 6. The source code and trained model used in this paper are publicly available at https://github.com/surisdi/youtube-8m.
2 Related work
In recent years, the relationship between the audio and the visual content of videos has been studied in several contexts. Overall, conventional approaches can be divided into four categories according to the task: generation, classification, matching and retrieval.
As online music streaming and video sharing websites have become increasingly popular, some research has been done on the relationship between music and album covers [1, 2, 3, 4] and also on music and videos (instead of just images) as the visual modality [5, 6, 7, 8] to explore the multimodal information present in both types of data.
A recent study [9] also explored the cross-modal relations between the two modalities, but using images of people talking together with speech, through Canonical Correlation Analysis (CCA) and cross-modal factor analysis. Also applying CCA, [10] uses visual and sound features, together with common subspace features, to aid clustering in image-audio datasets. In the work presented in [11], the key idea was to use greedy layer-wise training with Restricted Boltzmann Machines (RBMs) between vision and sound.
The present work focuses on using the information present in each modality to create a joint embedding space for cross-modal retrieval. This idea has been exploited especially with text and image joint embeddings [12, 13, 14], but also with other kinds of data, for example creating a visual-semantic embedding [15] or using synchronous data to learn discriminative representations shared across vision, sound and text [16].
However, joint representations between the images (frames) of a video and its audio have yet to be fully exploited, with [17] being, to the best of the authors' knowledge, the work that has explored this option the most. In their paper, they seek a joint embedding space, but only using music videos, to obtain the closest and farthest video given a query video, based on either image or audio.
The main idea of the current work is borrowed from [14], which is the baseline for understanding our approach. There, the authors create a joint embedding space for recipes and their images. They can then use it to retrieve recipes from any food image by looking for the recipe with the closest embedding. Apart from the retrieval results, they also perform other experiments, such as studying localized unit activations or doing arithmetic with the images.
3 Architecture

In this section, we present the architecture of our joint embedding model, which is depicted in Figure 2.
As inputs, we have a vector of features representing the images of the video and a vector of features representing its audio. These features are precomputed and provided with the YouTube-8M dataset. In particular, we use the video-level features, which represent the whole video clip with two vectors: one for the audio and one for the video. These feature representations are the result of average pooling the local audio features, computed over windows of one second, and the local visual features, computed over frames sampled at 1 Hz.
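As a toy illustration of this preprocessing (not the dataset's actual pipeline, which is part of the YouTube-8M release), average pooling a sequence of per-second feature vectors into a single video-level vector can be sketched as:

```python
# Sketch: collapse per-second local features into one video-level vector
# by averaging each dimension, as described in the text above.
def average_pool(local_features):
    """Average a list of equally-sized feature vectors into one vector."""
    n = len(local_features)
    dim = len(local_features[0])
    return [sum(f[d] for f in local_features) / n for d in range(dim)]

# e.g. three 1-second feature vectors -> one video-level vector
frames = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
video_level = average_pool(frames)  # [3.0, 4.0]
```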
The main objective of the system is to transform the two feature vectors (image and audio, separately) into features lying in a joint space. This means that, for the same video, the image features and the audio features will ideally be transformed into the same features in the joint space. We will call these new features embeddings, and represent them with $e_i$ for the image embeddings and $e_a$ for the audio embeddings.
The idea of the joint space is to represent the concept of the video: not just the image or the audio, but a generalization of both. As a consequence, videos with similar concepts will have closer embeddings, and videos with different concepts will have embeddings further apart in the joint space. For example, the representation of a tennis match video will be close to that of a football match, but not to that of a maths lesson.
Thus, we use a set of fully connected layers of different sizes, stacked one after the other, going from the original features to the embeddings. The audio and the image networks are completely separate. These fully connected layers perform a non-linear transformation of the input features, mapping them to the embeddings; the parameters of this non-linear mapping are learned during optimization.
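A minimal, stdlib-only sketch of one branch's forward pass is shown below. This is purely illustrative: the released implementation is a trained neural network, and the function names, layer shapes and weight values here are assumptions.

```python
def relu(x):
    """Element-wise ReLU activation."""
    return [max(0.0, v) for v in x]

def dense(x, weights, bias):
    """Fully connected layer: weights is a list of rows (out_dim x in_dim)."""
    return [sum(w * v for w, v in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def mlp_branch(x, layers):
    """Apply a stack of fully connected layers, with ReLU on hidden layers.
    layers: list of (weights, bias) tuples, learned during training."""
    for i, (w, b) in enumerate(layers):
        x = dense(x, w, b)
        if i < len(layers) - 1:  # hidden layers use ReLU
            x = relu(x)
    return x
```

Each modality would run its own `mlp_branch` with independent weights, mapping its input features to an embedding in the shared space.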
After that, a classification is performed from the two embeddings, also using a fully connected layer mapping them to the different classes, with a sigmoid as activation function. We give more insight into this step in Section 4.
The number of hidden layers is not fixed, nor is the number of neurons per layer, since we experimented with different configurations. Each hidden layer uses ReLU as its activation function, and all the weights in each layer are regularized with the L2 norm.
4 Training

In this section, we present the losses used, together with their meaning and intuition.
4.1 Similarity Loss
The objective of this work is to make the two embeddings of the same video as close as possible (ideally identical), while keeping embeddings from different videos as far apart as possible.
Formally, we are given a video $v_k$, represented by its visual and audio features $(i_k, a_k)$, where $i_k$ represents the image features and $a_k$ the audio features of $v_k$. The objective is to maximize the similarity between $e_{i_k}$, the embedding obtained by transformations on $i_k$, and $e_{a_k}$, the embedding obtained by transformations on $a_k$.

At the same time, however, we have to prevent embeddings from different videos from being "close" in the joint space. In other words, we want them to have low similarity. The objective, though, is not to force them to be opposite to each other: instead of forcing their similarity to be zero, we allow a margin of similarity, small enough to force the embeddings to be clearly apart in the joint space. We call this margin $\alpha$.

During training, both positive and negative pairs are used: the positive pairs are those for which $i_k$ and $a_j$ correspond to the same video ($k = j$), and the negative pairs are those for which they do not ($k \neq j$). The proportion of negative samples is a hyperparameter $p$ (see Section 4.3).
For the negative pairs, we selected random pairs that did not share any label, in order to help the network learn to distinguish different videos in the embedding space. The notion of "similarity" or "closeness" is mathematically translated into the cosine similarity between the embeddings, defined as

$$\mathrm{sim}(u, v) = \frac{u \cdot v}{\lVert u \rVert \, \lVert v \rVert}$$

for any pair of real-valued vectors $u$ and $v$.
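The cosine similarity between two embedding vectors can be computed directly; a small stdlib sketch:

```python
import math

def cosine_similarity(u, v):
    """cos(u, v) = (u . v) / (||u|| * ||v||), for real-valued vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)
```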
From this reasoning we arrive at the first and most important loss:

$$L_{sim} = \begin{cases} 1 - \mathrm{sim}(e_{i_k}, e_{a_j}), & k = j \\ \max\left(0, \mathrm{sim}(e_{i_k}, e_{a_j}) - \alpha\right), & k \neq j \end{cases}$$

where $k = j$ denotes positive sampling and $k \neq j$ denotes negative sampling.
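A sketch of this margin-based similarity loss for a single pair follows; the exact functional form and the default margin value are assumptions consistent with the description in this section, not a verbatim transcription of the paper's implementation.

```python
def similarity_loss(sim, positive, margin=0.2):
    """Margin-based cosine similarity loss for one pair (illustrative sketch).
    sim: cosine similarity between the image and audio embeddings.
    positive: True if both embeddings come from the same video."""
    if positive:
        return 1.0 - sim           # pull matching pairs together
    return max(0.0, sim - margin)  # push non-matching pairs below the margin
```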
4.2 Classification Regularization
Inspired by the work presented in [14], we provide additional information to our system by incorporating the video labels (classes) provided by the YouTube-8M dataset. This information is added as a regularization term that seeks to solve the high-level classification problem, both from the audio and from the video embeddings. The key idea here is that the classification weights from the embeddings to the labels are shared between the two modalities.
This loss is optimized together with the previously explained similarity loss, serving as a regularization term. Essentially, the system learns to classify the audio and the images of a video (separately) into the different classes or labels provided by the dataset. We limit its effect using a regularization parameter $\lambda$.
To incorporate the previously explained regularization into the joint embedding, we use a single fully connected layer, as shown in Figure 2. Formally, we obtain the label probabilities as $\hat{y}_i = \mathrm{softmax}(W e_i)$ and $\hat{y}_a = \mathrm{softmax}(W e_a)$, where $W$ represents the learned weights, which are shared between the two branches. The softmax activation is used in order to obtain probabilities at the output. The objective is to make $\hat{y}_i$ as similar as possible to $y_i$, and $\hat{y}_a$ as similar as possible to $y_a$, where $y_i$ and $y_a$ are the category labels for the video represented by the image features and the audio features, respectively. For positive pairs, $y_i$ and $y_a$ are the same.
The loss function used for the classification is the well-known cross-entropy loss:

$$L_{CE}(\hat{y}, y) = - \sum_{c} y_c \log \hat{y}_c$$
Thus, the classification loss is:

$$L_{cls} = L_{CE}(\hat{y}_i, y_i) + L_{CE}(\hat{y}_a, y_a)$$
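A toy illustration of the shared-weight classification and its cross-entropy terms follows. Names, shapes and the single-label simplification are assumptions for clarity (YouTube-8M videos can carry multiple labels).

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(probs, label_index):
    """Cross-entropy for a single ground-truth class index."""
    return -math.log(probs[label_index])

def classification_loss(emb_img, emb_audio, W, b, y_img, y_audio):
    """Classify both embeddings with the SAME weights W (and bias b),
    as described in the text, and sum the two cross-entropy terms."""
    def logits(e):
        return [sum(w * v for w, v in zip(row, e)) + bi
                for row, bi in zip(W, b)]
    return (cross_entropy(softmax(logits(emb_img)), y_img) +
            cross_entropy(softmax(logits(emb_audio)), y_audio))
```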
Finally, the loss function to be optimized is:

$$L = L_{sim} + \lambda \, L_{cls}$$

where $\lambda$ is the regularization parameter.
4.3 Parameters and Implementation Details
For our experiments we used the following parameters:
Batch size of .
We observed that starting with $\lambda$ different from zero led to a poor embedding similarity, because the classification accuracy was favored during optimization. Thus, we began training with $\lambda = 0$ and switched it to its final non-zero value at step 10,000.
Percentage of negative samples $p = 0.6$.
4 hidden layers in each network branch, the number of neurons per layer being, from features to embedding, 2000, 2000, 700, 700 in the image branch, and 450, 450, 200, 200 in the audio branch.
Dimensionality of the feature vector = 250.
We trained for a single epoch.
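The $\lambda$ warm-up described above can be sketched as a simple step schedule. The final $\lambda$ value is not stated in this section, so `final_lambda` below is a placeholder assumption.

```python
def lambda_schedule(step, switch_step=10_000, final_lambda=1.0):
    """Weight of the classification regularizer: zero at the start of
    training so the similarity term dominates, then switched on at
    switch_step. final_lambda is an assumed placeholder value."""
    return 0.0 if step < switch_step else final_lambda

def total_loss(sim_loss, cls_loss, step):
    """Total objective: L = L_sim + lambda * L_cls."""
    return sim_loss + lambda_schedule(step) * cls_loss
```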
5 Experiments

5.1 Dataset

The experiments presented in this section were developed over a subset of 6,000 video clips from the YouTube-8M dataset [18]. This dataset does not contain the raw video files, but their representations as precomputed features, both for audio and video. Audio features were computed with the method described in [20] over audio windows of one second, while visual features were computed over frames sampled at 1 Hz with the Inception model provided in TensorFlow [19].
The dataset provides video-level features, which represent the whole video with a single vector (one for audio and another for visual information) and thus do not maintain temporal information; it also provides frame-level features, which consist of one vector per second of audio and one vector per video frame, sampled at 1 frame per second.
The main goal of this dataset is to provide enough data to reach state-of-the-art results in video classification. Nevertheless, such a large dataset also permits approaching other video-related and cross-modal tasks, such as the one addressed in this paper. For this work, and as a baseline, we only use the video-level features.
5.2 Quantitative Performance Evaluation
We divide our results into two different categories: quantitative (numeric) results and qualitative results.
To obtain the quantitative results, we use the Recall@k metric. We define Recall@k as the recall rate at top $k$ for all the retrieval experiments, that is, the percentage of queries for which the corresponding video is retrieved in the top $k$; hence, higher is better.
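Recall@k as defined here can be computed directly from the rank at which each query's correct item was retrieved:

```python
def recall_at_k(rankings, k):
    """rankings[q] = 1-based rank of the correct item for query q.
    Recall@k = fraction of queries whose correct item is in the top k."""
    hits = sum(1 for r in rankings if r <= k)
    return hits / len(rankings)

# four queries whose correct items were retrieved at ranks 1, 3, 7 and 20
assert recall_at_k([1, 3, 7, 20], 5) == 0.5
```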
The experiments are performed with different dimensions of the feature vector. Table 1 shows the recall results from audio to video; in other words, given the audio embedding of a video, how often we retrieve the embedding corresponding to the images of that same video. Table 2 shows the recall from video to audio.
As a reference, the random-guess result would be $k/N$, where $N$ is the number of retrievable elements. The obtained results show a very clear correspondence between the embeddings coming from the audio features and those coming from the video features. It is also interesting to notice that the results from audio to video and from video to audio are very similar, since the system has been trained bidirectionally.
Table 1 (audio to video) and Table 2 (video to audio): Recall@1, Recall@5 and Recall@10 for different numbers of retrievable elements.
5.3 Qualitative Performance Evaluation
In addition to the objective results, we performed some insightful qualitative experiments. They consisted of generating the embeddings of both the audio and the video for a list of 6,000 different videos. Then we randomly chose a video and, from its image embedding, retrieved the video with the closest audio embedding, and vice versa (from one video's audio we retrieved the video with the closest image embedding). If the closest embedding corresponded to the same video, we took the second one in the ranked list.
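The retrieval procedure just described (nearest embedding in the other modality, skipping the query's own video) can be sketched as follows; the data layout is an assumption for illustration.

```python
import math

def cosine(u, v):
    """Cosine similarity between two real-valued vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def retrieve(query_emb, query_id, candidates):
    """Return candidate video ids sorted by decreasing cosine similarity
    to the query embedding, skipping the query's own video.
    candidates: dict mapping video id -> embedding in the other modality."""
    ranked = sorted(candidates,
                    key=lambda vid: cosine(query_emb, candidates[vid]),
                    reverse=True)
    return [vid for vid in ranked if vid != query_id]
```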
Figure 3 shows some of these experiments: on the left, the results of retrieving the closest audio given a video query; on the right, the input query is an audio clip. Examples depicting the real videos and audio are available online (https://goo.gl/NAcJah). Four different random examples are shown in each case. For each query and each result, we also show their YouTube-8M labels, for completeness.
The results show that, when starting from the image features of a video, the retrieved audio is a very accurate fit for those images. Subjectively, there are non-negligible cases where the retrieved audio actually fits the video better than the original one, for example when the original video has artificially introduced music, or when a background commentator explains the video in a foreign (unknown) language. A similar analysis can be done the other way around, that is, with the audio colorization approach, providing images for a given audio.
6 Conclusions and Future Work
We presented an effective method to retrieve audio samples that correctly fit a given (muted) video. The qualitative results show that already existing online videos, due to their variety, represent a very good source of audio for new videos, even when retrieving from only a small subset of this large amount of videos. Given the difficulty of creating new audio from scratch, we believe that a retrieval approach is the path to follow to add audio to videos.
The range of possibilities for extending the presented work is excitingly broad. The first idea would be to make use of the variety and information in the YouTube-8M dataset. The temporal information provided by the individual image and audio features is not used in the current work. The most promising future work involves using this temporal information to match audio and images, exploiting the implicit synchronization between the audio and the images of a video, without needing any supervision. Thus, the next step in our research is to introduce a recurrent neural network, which will allow us to create more accurate representations of the video, and also to retrieve different audio samples for each image, creating a fully synchronized system.
Also, it would be very interesting to study the behavior of the system depending on the class of the input. Observing the dataset, it is clear that not all classes have the same degree of correspondence between audio and image; for example, some videos have artificially added music that is not related at all to the images.
In short, we believe the YouTube-8M dataset enables promising future research in video sonorization and audio retrieval, since it contains a huge number of samples and captures multimodal information in a highly compact way.
This work was partially supported by the Spanish Ministry of Economy and Competitiveness and the European Regional Development Fund (ERDF) under contract TEC2016-75976-R. Amanda Duarte was funded by the mobility grant of the Severo Ochoa Program at Barcelona Supercomputing Center (BSC-CNS).
- [1] Eric Brochu, Nando de Freitas, and Kejie Bao, "The sound of an album cover: Probabilistic multimedia and information retrieval," in Artificial Intelligence and Statistics (AISTATS), 2003.
- [2] "Analysing the similarity of album art with self-organising maps," in International Workshop on Self-Organizing Maps. Springer, 2011, pp. 357–366.
- [3] Janis Libeks and Douglas Turnbull, "You can judge an artist by an album cover: Using images for music annotation," IEEE MultiMedia, vol. 18, no. 4, pp. 30–37, 2011.
- [4] Jiansong Chao, Haofen Wang, Wenlei Zhou, Weinan Zhang, and Yong Yu, "TuneSensor: A semantic-driven music recommendation service for digital photo albums," in 10th International Semantic Web Conference, 2011.
- [5] Alexander Schindler and Andreas Rauber, "An audio-visual approach to music genre classification through affective color features," in European Conference on Information Retrieval. Springer, 2015, pp. 61–67.
- [6] Xixuan Wu, Yu Qiao, Xiaogang Wang, and Xiaoou Tang, "Bridging music and image via cross-modal ranking analysis," IEEE Transactions on Multimedia, vol. 18, no. 7, pp. 1305–1318, 2016.
- [7] Esra Acar, Frank Hopfgartner, and Sahin Albayrak, "Understanding affective content of music videos through learned representations," in International Conference on Multimedia Modeling. Springer, 2014, pp. 303–314.
- [8] Olivier Gillet, Slim Essid, and Gaël Richard, "On the correlation of automatic audio and visual segmentations of music videos," IEEE Transactions on Circuits and Systems for Video Technology, vol. 17, no. 3, pp. 347–355, 2007.
- [9] Dongge Li, Nevenka Dimitrova, Mingkun Li, and Ishwar K. Sethi, "Multimedia content processing through cross-modal association," in Proceedings of the 11th ACM International Conference on Multimedia. ACM, 2003, pp. 604–611.
- [10] Hong Zhang, Yueting Zhuang, and Fei Wu, "Cross-modal correlation learning for clustering on image-audio dataset," in 15th ACM International Conference on Multimedia. ACM, 2007, pp. 273–276.
- [11] Jiquan Ngiam, Aditya Khosla, Mingyu Kim, Juhan Nam, Honglak Lee, and Andrew Y. Ng, "Multimodal deep learning," in Proceedings of the 28th International Conference on Machine Learning, 2011, pp. 689–696.
- [12] Liwei Wang, Yin Li, and Svetlana Lazebnik, "Learning deep structure-preserving image-text embeddings," CoRR, vol. abs/1511.06078, 2015.
- [13] Ryan Kiros, Ruslan Salakhutdinov, and Richard S. Zemel, "Unifying visual-semantic embeddings with multimodal neural language models," CoRR, vol. abs/1411.2539, 2014.
- [14] Amaia Salvador, Nicholas Hynes, Yusuf Aytar, Javier Marin, Ferda Ofli, Ingmar Weber, and Antonio Torralba, "Learning cross-modal embeddings for cooking recipes and food images," in CVPR, 2017.
- [15] Andrea Frome, Greg Corrado, Jonathon Shlens, Samy Bengio, Jeffrey Dean, Marc'Aurelio Ranzato, and Tomas Mikolov, "DeViSE: A deep visual-semantic embedding model," in Neural Information Processing Systems, 2013.
- [16] Yusuf Aytar, Carl Vondrick, and Antonio Torralba, "See, hear, and read: Deep aligned representations," arXiv preprint arXiv:1706.00932, 2017.
- [17] Sungeun Hong, Woobin Im, and Hyun S. Yang, "Deep learning for content-based, cross-modal retrieval of videos and music," CoRR, vol. abs/1704.06761, 2017.
- [18] Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, and Sudheendra Vijayanarasimhan, "YouTube-8M: A large-scale video classification benchmark," CoRR, vol. abs/1609.08675, 2016.
- [19] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al., "TensorFlow: Large-scale machine learning on heterogeneous distributed systems," arXiv preprint arXiv:1603.04467, 2016.
- [20] S. Hershey, S. Chaudhuri, D. P. W. Ellis, J. F. Gemmeke, A. Jansen, R. C. Moore, M. Plakal, D. Platt, R. A. Saurous, B. Seybold, M. Slaney, R. J. Weiss, and K. Wilson, "CNN architectures for large-scale audio classification," in 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), March 2017, pp. 131–135.