Multimodal sparse representation learning and applications

11/19/2015, by Miriam Cha et al.

Unsupervised methods have proven effective for discriminative tasks in single-modality scenarios. In this paper, we present a multimodal framework for learning sparse representations that can capture the semantic correlation between modalities. The framework models relationships at a higher level by forcing the modalities to share a sparse representation. In particular, we propose the use of a joint dictionary learning technique for sparse coding and formulate the joint representation for concision, cross-modal representations (in case of a missing modality), and the union of the cross-modal representations. Given the accelerated growth of multimodal data posted on the Web such as YouTube, Wikipedia, and Twitter, learning good multimodal features is becoming increasingly important. We show that the shared representations enabled by our framework substantially improve classification performance under both unimodal and multimodal settings. We further show how deep architectures built on the proposed framework are effective for the case of highly nonlinear correlations between modalities. The effectiveness of our approach is demonstrated experimentally in image denoising, multimedia event detection and retrieval on the TRECVID dataset (audio-video), category classification on the Wikipedia dataset (image-text), and sentiment classification on PhotoTweet (image-text).

1 Introduction

Human perception works by integrating multiple sensory inputs. Processing different sensory modalities (e.g., vision, hearing, olfaction) and correlating them improves our perceptual abilities in numerous ways (Stein et al., 2009). When different phenomena cause an ambiguity by activating similar features in one modality, features from other modalities can be examined. If one modality is impaired or becomes corrupted, other modalities can help fill in the missing information for robustness. Finally, consensus among modalities can be taken as a reinforcing factor.

Multiple modalities are believed to benefit discriminative machine learning tasks. Using different sensors simultaneously, a scene from the same event can be described in multiple data modalities. For example, consider a multimedia event detection (MED) problem with class names such as “Dog show,” “Firework,” “Playing fetch with dogs,” and “Shooting a gun.” Judging from the video modality alone, “Dog show” and “Playing fetch with dogs” come close, both featuring people and dogs in coordinated actions. However, the two classes are easier to discriminate by incorporating the audio modality, in which “Dog show” is characterized by crowd noise and microphone announcements absent in “Playing fetch with dogs.” On the other hand, “Firework” and “Shooting a gun” are hard to discriminate with audio, but their visual differences are useful.

In multimodal feature learning, one wishes to learn good shared representations across heterogeneous data modalities. We can form a union of unimodal features after learning features for each modality separately. This approach, however, has the drawback of being unable to learn patterns that occur jointly or selectively across modalities, since unimodal learning emphasizes relating information within one modality.

In this paper, we present a sparse coding framework that can model the correlation between modalities. The framework aims to learn relationships at a higher level by forcing the modalities to share a sparse representation. Sparse coding has been widely adopted in machine learning applications such as classification, denoising, and recognition (Wright et al., 2010). In particular, it is known that multimedia data can be well represented as a sparse linear combination of basis vectors. For example, the abundance of unlabeled photos on the Web makes good, large dictionaries readily available for sparse coding.

Similar to other learning methods, sparse coding has primarily been applied in unimodal settings. However, numerous multimodal approaches to sparse coding have appeared in the recent literature. These approaches commonly aim to learn a shared sparse representation for different modalities. Jia et al. (2010) exploit structured sparsity to learn a shared latent space of multi-view data (e.g., 2D image + depth). Monaci et al. (2009) propose a sparse coding-based scheme that learns bimodal structure in audio-visual speech data. Additionally, Zhuang et al. (2013) describe a supervised sparse coding scheme for cross-modal retrieval (e.g., text retrieval from an image query). Our contributions are two-fold. First, we set up an experimental deep architecture built on multiple layers of sparse coding and pooling units, and report promising classification results on multimodal datasets. Second, we demonstrate the performance of multimodal sparse coding in a comprehensive set of applications. In particular, we include results on TRECVID MED tasks for detecting high-level complex events in user-generated videos. We examine various settings of multimodal sparse coding (detailed in Section 3) using several multimodal datasets of semantically correlated pairs (audio/video and web images/text). Such semantic correlation reveals shared statistical association between modalities, so the modalities can provide complementary information for each other.

There are existing multimodal learning schemes that are not based on sparse coding. These include audio-visual speech recognition (Gurban & Thiran, 2009; Papandreou et al., 2007; Lucey & Sridharan, 2006; Ngiam et al., 2011), sentiment recognition (Morency et al., 2011; Baecchi et al., 2015; Borth et al., 2013), and image-text retrieval (Sohn et al., 2014; Feng et al., 2014). Ngiam et al. (2011) applied a deep stacked autoencoder to learn representations for speech audio coupled with videos of the lips. Poirson & Idrees (2013) used a denoising autoencoder for Flickr photos with associated text.

In the following sections, we cover the basic principle of sparse coding and extend it to build our multimodal framework on both shallow and deep architectures. We will demonstrate the effectiveness of our approach experimentally with multimedia applications that include image denoising, categorical classification with images and text from Wikipedia, sentiment classification using PhotoTweet, and TRECVID MED.

2 Preliminaries

Originated to explain neuronal activations that encode sensory information (Olshausen & Field, 1997), sparse coding is an unsupervised method to learn an efficient representation of data using a small number of basis vectors. It has been used to discover higher-level features for data from unlabeled examples. Given a data input $x \in \mathbb{R}^n$, sparse coding solves for a representation $\alpha \in \mathbb{R}^K$ while simultaneously updating the dictionary of basis vectors $D \in \mathbb{R}^{n \times K}$ as

$\min_{D,\,\alpha} \; \|x - D\alpha\|_2^2 + \lambda \|\alpha\|_1 \quad \text{subject to } \|d_i\|_2 \le 1 \;\; \forall i$   (1)

where $d_i$ is the $i$th dictionary atom in $D$, and $\lambda$ is a regularization parameter that penalizes $\alpha$ over the $\ell_1$-norm, which induces a sparse solution. With $K > n$, sparse coding typically trains an overcomplete dictionary. This makes the sparse code $\alpha$ higher in dimension than $x$, but only a small number of elements in $\alpha$ are nonzero.

Sparse coding can alternatively regularize $\alpha$ on the $\ell_0$ pseudo-norm. Finding the sparsest $\ell_0$ solution in general, however, is known to be intractable. Although greedy $\ell_0$ methods such as orthogonal matching pursuit (OMP) can be used, we only consider Equation (1) as our choice for sparse coding throughout this paper. We use least angle regression (LARS) and the online dictionary learning algorithm by Mairal et al. (2009) from the SPAMS toolbox (INRIA).
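
To make the formulation concrete, the sketch below runs Equation (1) end to end. It uses scikit-learn's online dictionary learning and LARS-based lasso solver as a stand-in for the SPAMS toolbox mentioned above; the data, dictionary size, and regularization value are placeholders rather than the paper's settings.

```python
# A minimal sketch of Equation (1): learn an overcomplete dictionary D and
# l1-regularized sparse codes alpha. scikit-learn stands in for SPAMS here.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 64))          # 1000 unlabeled examples, n = 64

coder = MiniBatchDictionaryLearning(
    n_components=256,                        # K > n: overcomplete dictionary
    alpha=0.15,                              # lambda, the l1 penalty (illustrative)
    transform_algorithm="lasso_lars",        # LARS-based l1 solver, as in the text
    random_state=0,
)
alpha = coder.fit_transform(X)               # sparse codes, shape (1000, 256)
D = coder.components_                        # dictionary atoms, shape (256, 64)

print("average nonzeros per code:", np.count_nonzero(alpha, axis=1).mean())
```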

(a) Sparse coding of $x$
(b) Sparse coding of $y$
(c) Feature union
Figure 1: Unimodal sparse coding and feature union
(a) Joint sparse coding
(b) Cross-modal by $x$
(c) Cross-modal by $y$
(d) Feature union
Figure 2: Multimodal sparse coding and feature union

3 Multimodal Feature Learning via Sparse Coding

This section describes our multimodal feature learning schemes for sparse coding. Our schemes are general and can readily be extended to more than two modalities. For clarity of explanation, we use two modalities $x$ and $y$ throughout the section.

3.1 Parallel unimodal sparse coding

A straightforward approach for sparse coding with two heterogeneous modalities (e.g., text and images) is to learn a separate dictionary of basis vectors for each modality. Figure 1 depicts unimodal sparse coding schemes for modalities $x$ and $y$. We learn the two dictionaries $D_x$ and $D_y$ in parallel

$\min_{D_x,\,\alpha_x^{(i)}} \sum_i \|x^{(i)} - D_x \alpha_x^{(i)}\|_2^2 + \lambda_x \|\alpha_x^{(i)}\|_1$   (2)

$\min_{D_y,\,\alpha_y^{(i)}} \sum_i \|y^{(i)} - D_y \alpha_y^{(i)}\|_2^2 + \lambda_y \|\alpha_y^{(i)}\|_1$   (3)

Unimodal sparse coding of $x$ takes in unlabeled examples $\{x^{(i)}\}$ to train $D_x$ while simultaneously computing the corresponding sparse codes $\alpha_x^{(i)}$ under the regularization parameter $\lambda_x$. (We denote by $x^{(i)}$ the $i$th training example.) Similarly for modality $y$, we train $D_y$ from unlabeled examples $\{y^{(i)}\}$ by computing $\alpha_y^{(i)}$. As illustrated in Figure 1(c), we can form $[\alpha_x^\top \; \alpha_y^\top]^\top$, a union of the unimodal feature vectors.
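
A minimal sketch of this parallel unimodal scheme and the feature union of Figure 1(c) follows, again with scikit-learn as a stand-in; the modality dimensions, dictionary size, and lambda are placeholder values.

```python
# Sketch of Equations (2)-(3) plus the feature union of Figure 1(c).
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def unimodal_codes(X, K, lam):
    """Learn a per-modality dictionary and return its sparse codes."""
    dl = MiniBatchDictionaryLearning(n_components=K, alpha=lam,
                                     transform_algorithm="lasso_lars",
                                     random_state=0)
    return dl.fit_transform(X), dl

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 48))     # modality x (e.g., audio features)
Y = rng.standard_normal((500, 128))    # modality y (e.g., video features)

alpha_x, dl_x = unimodal_codes(X, K=512, lam=0.1)   # Eq. (2)
alpha_y, dl_y = unimodal_codes(Y, K=512, lam=0.1)   # Eq. (3)

# Feature union: concatenate the two unimodal sparse codes per example.
union = np.hstack([alpha_x, alpha_y])  # shape (500, 1024)
```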

3.2 Joint multimodal sparse coding

Union of the unimodal sparse codes is a simple way to encapsulate the features from both modalities. However, the unimodal training model is flawed in that it cannot capture correlations between the two modalities that could be beneficial for inference tasks. To remedy the lack of joint learning, we propose a multimodal sparse coding scheme illustrated in Figure 2(a). We use the joint sparse coding technique used in image super-resolution (Yang et al., 2010)

$\min_{D_{xy},\,\alpha_{xy}^{(i)}} \sum_i \left\| \begin{bmatrix} x^{(i)} \\ y^{(i)} \end{bmatrix} - D_{xy}\,\alpha_{xy}^{(i)} \right\|_2^2 + \lambda_{xy} \|\alpha_{xy}^{(i)}\|_1$   (4)

Here, we train with the concatenated input vectors $[x^\top \; y^\top]^\top \in \mathbb{R}^{n_x + n_y}$, where $n_x$ and $n_y$ are the dimensions of $x$ and $y$, respectively. As an interesting property, we can decompose the multimodal dictionary as $D_{xy} = [D_x^\top \; D_y^\top]^\top$ to perform cross-modal sparse coding

$\min_{\alpha_x} \|x - D_x \alpha_x\|_2^2 + \lambda \|\alpha_x\|_1$   (5)

$\min_{\alpha_y} \|y - D_y \alpha_y\|_2^2 + \lambda \|\alpha_y\|_1$   (6)

In principle, joint sparse coding in Equation (4) combines the objectives of Equations (5) and (6), forcing the sparse codes $\alpha_x$ and $\alpha_y$ to share the same values as the joint code $\alpha_{xy}$. Ideally, we would have $\alpha_x = \alpha_y = \alpha_{xy}$, although the empirical values determined by the three different optimizations would differ in reality. According to Yang et al. (2010), $\alpha_x$ and $\alpha_y$ are highly correlated because the low- and high-resolution images originate from the same source. However, if $x$ and $y$ come from two different modalities, their correlation is present only at the semantic level. Thus, the equality assumption among $\alpha_x$, $\alpha_y$, and $\alpha_{xy}$ is even less likely to hold. For that reason, we introduce cross-modal sparse coding, which captures the weak correlation between heterogeneous modalities and results in sparse codes that are more discriminative. Cross-modal sparse coding first trains a joint dictionary $D_{xy}$. Then, at test time, cross-modal sparse codes are computed using the sub-dictionaries $D_x$ and $D_y$. The various feature formation possibilities for multimodal sparse coding (joint, cross-modal, and the union of cross-modal sparse codes) are illustrated in Figure 2.
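
The following sketch illustrates the joint and cross-modal schemes of Figure 2 under the same stand-in assumptions as the earlier snippets: a joint dictionary is learned on concatenated inputs (Equation 4), split into sub-dictionaries, and reused at test time for cross-modal coding (Equations 5 and 6). Sizes and lambda values are illustrative.

```python
# Sketch of joint dictionary learning (Eq. 4) and cross-modal coding (Eqs. 5-6).
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, SparseCoder

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 48))     # modality x
Y = rng.standard_normal((500, 128))    # modality y
XY = np.hstack([X, Y])                 # concatenated training inputs

joint = MiniBatchDictionaryLearning(n_components=512, alpha=0.1,
                                    transform_algorithm="lasso_lars",
                                    random_state=0)
alpha_xy = joint.fit_transform(XY)     # joint sparse codes (Fig. 2(a))

# Decompose D_xy into sub-dictionaries aligned with each modality.
D = joint.components_                  # shape (512, 48 + 128)
D_x, D_y = D[:, :48], D[:, 48:]

def cross_modal(Z, D_sub, lam=0.1):
    """Cross-modal code from a single modality using its sub-dictionary."""
    coder = SparseCoder(dictionary=D_sub, transform_algorithm="lasso_lars",
                        transform_alpha=lam)
    return coder.transform(Z)

alpha_from_x = cross_modal(X, D_x)     # Fig. 2(b): cross-modal by x
alpha_from_y = cross_modal(Y, D_y)     # Fig. 2(c): cross-modal by y
union = np.hstack([alpha_from_x, alpha_from_y])   # Fig. 2(d): feature union
```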

3.3 Deep multimodal sparse coding

So far, we have only considered shallow learning architectures using a single layer of sparse coding and dictionary learning. The shallow architecture is capable of learning the features jointly, but it may not be sufficient to fully capture the complex semantic correlations between the modalities. We expect a hierarchical architecture to benefit modalities with high semantic correlation, as it can utilize a larger number of hidden layers and parameters to extract more meaningful high-level representations from the modalities. Composing higher-level representations from low-level features should be advantageous for contextual data such as human language, speech, audio, and sequences of image patterns. Hierarchical composition of sparse codes has been shown to help unveil separability of data that is invisible from lower-level features in unimodal settings (Bo et al., 2013; Yu et al., 2011). Therefore, we consider deep architectures for multimodal sparse coding.

In Figure 3, we propose two possible architectures. Ngiam et al. (2011) report that their RBM-based approaches benefit from applying deep learning to each modality before joint training. Adopting their configuration, we use (at least) two layers of sparse coding for each modality followed by joint sparse coding, as illustrated in Figure 3(a). We write two-layer sparse coding for modality $x$ as

$\min_{D_{x,1},\,\alpha_{x,1}^{(i)}} \sum_i \|x^{(i)} - D_{x,1}\,\alpha_{x,1}^{(i)}\|_2^2 + \lambda_{x,1} \|\alpha_{x,1}^{(i)}\|_1$   (7)

$h_{x,1} = \mathrm{pool}\big(\alpha_{x,1}^{(1)}, \ldots, \alpha_{x,1}^{(p_{x,1})}\big)$   (8)

$\min_{D_{x,2},\,\alpha_{x,2}^{(j)}} \sum_j \|h_{x,1}^{(j)} - D_{x,2}\,\alpha_{x,2}^{(j)}\|_2^2 + \lambda_{x,2} \|\alpha_{x,2}^{(j)}\|_1$   (9)

$h_{x,2} = \mathrm{pool}\big(\alpha_{x,2}^{(1)}, \ldots, \alpha_{x,2}^{(p_{x,2})}\big)$   (10)

We denote by $D_{x,1}$ and $D_{x,2}$ the dictionaries learned by the two sparse coding layers for $x$, by $\alpha_{x,1}$ and $\alpha_{x,2}$ the unpooled sparse codes, and by $h_{x,1}$ and $h_{x,2}$ the hidden activations obtained by max (or average) pooling the sparse codes. The max pooling factors $p_{x,1}$ and $p_{x,2}$ refer to the number of sparse codes aggregated into one pooled representation; they are determined empirically. Since sparse coding takes in dense input vectors and produces sparse output vectors, it is a poor fit for multilayering on its own. Hence, we interlace nonlinear pooling units between sparse coding layers, aggregating multiple sparse vectors nonlinearly into a dense vector before passing it on to the next layer. Similar to modality $x$, we can work out $D_{y,1}$, $D_{y,2}$, $\alpha_{y,1}$, $\alpha_{y,2}$, $h_{y,1}$, and $h_{y,2}$ for modality $y$. Ultimately, the joint feature representation in Figure 3(a) is learned by

$\min_{D_{xy},\,\alpha_{xy}^{(k)}} \sum_k \left\| \begin{bmatrix} h_{x,2}^{(k)} \\ h_{y,2}^{(k)} \end{bmatrix} - D_{xy}\,\alpha_{xy}^{(k)} \right\|_2^2 + \lambda_{xy} \|\alpha_{xy}^{(k)}\|_1$   (11)

where we input the concatenated deep hidden representations $[h_{x,2}^\top \; h_{y,2}^\top]^\top$ to train $D_{xy}$.

Moreover, we consider an additional joint sparse coding layer, as illustrated in Figure 3(b). Again, we form dense hidden activations $h_{xy,1}$ by pooling multiple joint sparse codes $\alpha_{xy}$. We compute the deeper representation by

$\min_{D_{xy,2},\,\alpha_{xy,2}^{(l)}} \sum_l \|h_{xy,1}^{(l)} - D_{xy,2}\,\alpha_{xy,2}^{(l)}\|_2^2 + \lambda_{xy,2} \|\alpha_{xy,2}^{(l)}\|_1$   (12)
(a) Deep unimodal sparse coding before joint sparse coding
(b) Deep unimodal and deep joint sparse coding
Figure 3: Deep learning approaches for multimodal sparse coding
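
A rough sketch of the Figure 3(a) pipeline follows: two sparse coding layers per modality with max pooling in between, then joint sparse coding on the concatenated pooled activations (Equation 11). The pooling factors, dictionary sizes, and synthetic data are assumptions for illustration only.

```python
# Sketch of deep multimodal sparse coding, Figure 3(a).
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def sparse_layer(X, K, lam):
    dl = MiniBatchDictionaryLearning(n_components=K, alpha=lam,
                                     transform_algorithm="lasso_lars",
                                     random_state=0)
    return dl.fit_transform(X), dl

def max_pool(codes, factor):
    """Aggregate every `factor` consecutive sparse codes into one dense vector."""
    n = (codes.shape[0] // factor) * factor
    return codes[:n].reshape(-1, factor, codes.shape[1]).max(axis=1)

def deep_unimodal(X, K1=256, K2=256, p1=4, p2=4, lam=0.1):
    a1, _ = sparse_layer(X, K1, lam)      # first sparse coding layer (Eq. 7)
    h1 = max_pool(a1, p1)                 # pooled activations (Eq. 8)
    a2, _ = sparse_layer(h1, K2, lam)     # second sparse coding layer (Eq. 9)
    return max_pool(a2, p2)               # deep hidden representation (Eq. 10)

rng = np.random.default_rng(0)
X = rng.standard_normal((800, 48))        # modality x
Y = rng.standard_normal((800, 128))       # modality y

h_x2 = deep_unimodal(X)
h_y2 = deep_unimodal(Y)

# Joint layer on concatenated deep hidden representations (Eq. 11).
alpha_joint, _ = sparse_layer(np.hstack([h_x2, h_y2]), K=512, lam=0.1)
```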

4 Experiments

4.1 Image denoising

In our first experiment, we evaluate the image denoising problem, where zero-mean Gaussian noise is to be removed from a given image. The proposed learning algorithm is used to jointly learn associations between clean and noisy images. We treat a clean image and its noisy counterpart as two input modalities and recover a clean image from a noisy one. We randomly select 2,500 images from CIFAR-10 (Krizhevsky, 2009) and add zero-mean Gaussian noise over a range of standard deviations $\sigma$ to generate noisy images. We use 2,000 pairs of clean and noisy images for training the joint dictionary and 500 pairs for testing. At test time, we input a noisy image and compute the cross-modal (noisy-to-clean) sparse code to recover a clean estimate. We compare the performance of joint multimodal sparse coding with the denoising autoencoder (DAE) (Vincent et al., 2008). Table 1 summarizes the denoising results for joint sparse coding and the DAE. In this experiment, the dictionaries were designed to handle clean and noisy image patches of 4×4 pixels. Every result reported is an average over 5 runs. For different realizations of the noise, joint sparse coding improves the quality of noisy images, and the DAE often achieves comparable results.
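
A compact sketch of this noisy-to-clean recovery is shown below: a joint dictionary is trained on stacked clean/noisy patches, the noisy sub-dictionary codes a test patch, and the clean sub-dictionary reconstructs it. The patch handling, noise level, and dictionary size are illustrative and do not reproduce the exact experimental settings.

```python
# Sketch of denoising via joint sparse coding and cross-modal recovery.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, SparseCoder

rng = np.random.default_rng(0)
clean = rng.random((2000, 16))                 # flattened 4x4 patches (toy data)
noisy = clean + rng.normal(0.0, 0.01, clean.shape)

joint = MiniBatchDictionaryLearning(n_components=256, alpha=0.1,
                                    transform_algorithm="lasso_lars",
                                    random_state=0)
joint.fit(np.hstack([clean, noisy]))           # joint dictionary on [clean; noisy]
D_clean, D_noisy = joint.components_[:, :16], joint.components_[:, 16:]

# Cross-modal (noisy -> clean) recovery of a held-out noisy patch.
test_noisy = clean[:1] + rng.normal(0.0, 0.01, (1, 16))
code = SparseCoder(dictionary=D_noisy, transform_algorithm="lasso_lars",
                   transform_alpha=0.1).transform(test_noisy)
denoised = code @ D_clean                      # clean estimate
```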

In Figure 4, we visualize the clean images recovered from noisy inputs. Notice that joint sparse coding learns the shared association between clean and noisy images and achieves results comparable to the DAE.

Figure 4: Example denoising results. From top to bottom: the clean original images, the noisy images, and the denoised images produced by the DAE and by joint sparse coding.

σ / noisy PSNR (dB)    DAE      Joint sparse coding
0.001 / 30.08          28.87    31.08
0.005 / 23.20          25.61    25.71
0.01 / 20.28           24.14    23.87
0.1 / 11.52            18.74    18.19
Table 1: Peak signal-to-noise ratio (in decibels) for image denoising using the denoising autoencoder and joint sparse coding over a range of noise standard deviations σ.

4.2 Audio and video

In this section, we apply our multimodal sparse coding schemes to multimedia event detection (MED) tasks. MED aims to identify complex activities encompassing various human actions, objects, and their interactions at different places and time. MED is considered more difficult than concept analysis such as action recognition and has received significant attention in computer vision and machine learning research.

Dataset, tasks, and metrics. We use the TRECVID 2014 dataset (NIST) to evaluate our schemes. We consider event detection and retrieval tasks under the 10Ex and 100Ex data scenarios, where 10Ex provides 10 multimedia examples per event and 100Ex provides 100. There are 20 event classes (E021 to E040) with event names such as “Bike trick,” “Dog show,” and “Marriage proposal.” We evaluate MED performance using the features learned by our unimodal and multimodal sparse coding schemes. In particular, we compute classification accuracy and mean average precision (mAP) metrics according to the NIST standard in the following experiments: 1) cross-validation on 10Ex; and 2) training on 10Ex and testing on 100Ex.

(a) Keyframe extraction
(b) Audio
(c) Video
Figure 5: TRECVID audio-video data processing

Data processing. For processing efficiency and scalability, we use keyframe-based feature extraction for audio-video data. We apply a simple two-pass algorithm that computes the color histogram difference of any two successive frames and determines keyframe candidates based on a threshold calculated from the mean and standard deviation of the histogram differences. We then examine the number of distinct colors present in each keyframe candidate and discard those with fewer than 26 colors (e.g., all-black or all-white) to ensure non-blank keyframes. A sketch of this selection procedure follows.
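
The sketch below uses OpenCV; the 8-bin-per-channel histogram and the mean-plus-one-standard-deviation threshold are assumptions about details the text leaves open.

```python
# Sketch of the two-pass keyframe selection described above.
import cv2
import numpy as np

def keyframes(video_path, min_colors=26):
    cap = cv2.VideoCapture(video_path)
    hists, frames = [], []
    ok, frame = cap.read()
    while ok:
        # Color histogram per frame (8 bins per BGR channel).
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256]).flatten()
        hists.append(hist)
        frames.append(frame)
        ok, frame = cap.read()
    cap.release()

    # Pass 1: histogram difference between successive frames and a threshold
    # derived from the mean and standard deviation of those differences.
    diffs = np.array([np.abs(hists[i] - hists[i - 1]).sum()
                      for i in range(1, len(hists))])
    thresh = diffs.mean() + diffs.std()

    # Pass 2: keep candidates above the threshold, discard near-blank frames.
    selected = []
    for i, d in enumerate(diffs, start=1):
        if d > thresh:
            colors = np.unique(frames[i].reshape(-1, 3), axis=0)
            if len(colors) >= min_colors:      # e.g., not all-black/all-white
                selected.append(i)
    return selected
```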

Around each keyframe, we extract 5 seconds of audio and an additional 10 uniformly sampled video frames within that duration, as illustrated in Figure 5(a). If the extracted audio is stereo, we take only its left channel. The audio waveform is resampled to 22.05 kHz and regularized by time-frequency automatic gain control (TF-AGC) to balance the energy in sub-bands. We form audio frames using a 46-msec Hann window with 50% overlap between successive frames for smoothing. For each frame, we compute 16 Mel-frequency cepstral coefficients (MFCCs) as the low-level audio feature. In addition, we append 16 delta cepstral and 16 delta-delta cepstral coefficients, making our low-level audio feature vectors 48-dimensional. We apply PCA whitening before unsupervised learning. The complete audio preprocessing steps are described in Figure 5(b).
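
A sketch of this audio pipeline using librosa as a stand-in is shown below; TF-AGC is omitted, and the 1,024-sample FFT window (about 46 ms at 22.05 kHz) is an assumption.

```python
# Sketch of the low-level audio features: 16 MFCCs plus delta and delta-delta
# coefficients over 46-ms Hann windows with 50% overlap (48 dims per frame).
import librosa
import numpy as np

def audio_features(path, sr=22050):
    y, _ = librosa.load(path, sr=sr, mono=True)        # mono / left channel
    n_fft = 1024                                        # ~46 ms at 22.05 kHz
    hop = n_fft // 2                                    # 50% overlap
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=16,
                                n_fft=n_fft, hop_length=hop, window="hann")
    d1 = librosa.feature.delta(mfcc)                    # delta cepstra
    d2 = librosa.feature.delta(mfcc, order=2)           # delta-delta cepstra
    return np.vstack([mfcc, d1, d2]).T                  # (frames, 48)
```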

For video preprocessing, we tried out pretrained convolutional neural network (CNN) models and chose VGG_ILSVRC_19_layers from the University of Oxford's Visual Geometry Group (VGG) (Simonyan & Zisserman, 2014), trained for the ImageNet Large-scale Visual Recognition Challenge (ILSVRC). As depicted in Figure 5(c), we run CNN feedforward passes on the extracted video frames and take the 4,096-dimensional activation of the highest hidden layer before the final ReLU. By PCA whitening, we reduce the dimensionality to 128.
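
The sketch below approximates this step with torchvision's pretrained VGG-19 (the original work used the Caffe VGG_ILSVRC_19_layers model); the classifier slice used to obtain the 4,096-dimensional pre-ReLU activation and the torchvision weights enum (recent torchvision versions) are assumptions.

```python
# Sketch of frame-level CNN feature extraction followed by PCA whitening.
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.decomposition import PCA

vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).eval()
# classifier = [fc6, ReLU, Dropout, fc7, ReLU, Dropout, fc8];
# stop right after fc7 (before its ReLU) to get the 4,096-d activation.
fc7 = torch.nn.Sequential(*list(vgg.classifier.children())[:4])

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                        T.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])])

@torch.no_grad()
def frame_features(pil_frames):
    batch = torch.stack([preprocess(f) for f in pil_frames])
    feats = vgg.avgpool(vgg.features(batch)).flatten(1)   # conv features
    return fc7(feats).numpy()                             # (N, 4096)

# PCA whitening down to 128 dimensions, fit on the training frames.
pca = PCA(n_components=128, whiten=True)
# reduced = pca.fit_transform(frame_features(list_of_pil_frames))
```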

Feature learning for MED. We build feature vectors by sparse coding the preprocessed audio and video data, using the same number of basis vectors for all dictionaries under the unimodal and multimodal sparse coding schemes. We aggregate the sparse codes around each keyframe of a training example by max pooling to form feature vectors for classification. (The pooled feature vectors can scale to file level.) We train linear, 1-vs-all SVM classifiers for each event, with hyperparameters determined by 5-fold cross-validation on 10Ex.
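
A sketch of this classification stage is given below; the clip grouping helper, the C grid, and the variable names are placeholders rather than the exact experimental setup.

```python
# Sketch of the MED classification stage: max-pool sparse codes per clip,
# then train linear 1-vs-all SVMs with cross-validated hyperparameters.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import GridSearchCV

def pool_by_clip(codes, clip_ids):
    """Max-pool sparse codes that belong to the same clip (file level)."""
    clips = np.unique(clip_ids)
    return clips, np.stack([codes[clip_ids == c].max(axis=0) for c in clips])

# codes: (num_keyframes, K) sparse codes; clip_ids: per-keyframe clip labels
# clips, features = pool_by_clip(codes, clip_ids)

# 1-vs-all linear SVMs; the regularization constant is chosen by 5-fold
# cross-validation (on 10Ex in the paper's setup).
svm = OneVsRestClassifier(
    GridSearchCV(LinearSVC(), {"C": [0.01, 0.1, 1, 10]}, cv=5))
# svm.fit(features, event_labels)
```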

Other feature learning schemes for comparison. We consider other unsupervised methods for learning audio-video features. We report results for a Gaussian mixture model (GMM) and a restricted Boltzmann machine (RBM) under similar unimodal and multimodal settings. For the GMM, we use expectation-maximization (EM) to fit the preprocessed input vectors (audio-only, video-only, and concatenated audio-video) with 512 mixtures and form GMM supervectors (Campbell et al., 2006) containing the posterior probabilities with respect to each Gaussian. The max-pooled GMM supervectors are used to train SVMs. For the RBM, we adopt the shallow bimodal pretraining model of Ngiam et al. (2011). For fairness, we use a hidden layer of size 512, and the max-pooled activations are used to train SVMs. A target sparsity of 0.1 is applied to both the GMM and the RBM.

Results. Table 2 presents the classification accuracy and mAP performance of the unimodal and multimodal sparse coding schemes. For the 10Ex/100Ex experiment, we used the best parameter setting from the 10Ex cross-validation to test on the 100Ex examples. In general, we observe that the union of unimodal audio and video feature vectors performs better than unimodal or cross-modal features alone. The multimodal union scheme also performs better than the joint scheme. The union schemes, however, double the feature dimensionality since our union operation concatenates two feature vectors. The joint feature vector is an economical way of combining the audio and video features while keeping the same dimensionality as audio-only or video-only features.

In Table 3, we report the mean accuracy and mAP for the GMM and RBM under the union and joint feature learning schemes in the 10Ex/100Ex experiment. Our results show that sparse coding outperforms the GMM by 5–6% in accuracy and 7–8% in mAP. However, we find that the performance of the RBM is on par with sparse coding. This makes developing a joint feature learning scheme for the RBM a good next step.

                                Unimodal                            Multimodal
                                Audio-only  Video-only  Union       Audio       Video       Joint       Union
                                (Fig. 1(a)) (Fig. 1(b)) (Fig. 1(c)) (Fig. 2(b)) (Fig. 2(c)) (Fig. 2(a)) (Fig. 2(d))
Mean accuracy (cross-val. 10Ex) 69%         86%         89%         75%         87%         90%         91%
mAP (cross-val. 10Ex)           20.0%       28.1%       34.8%       27.4%       33.1%       35.3%       37.9%
Mean accuracy (10Ex/100Ex)      56%         64%         71%         58%         67%         71%         74%
mAP (10Ex/100Ex)                17.3%       28.9%       30.5%       23.6%       28.0%       28.4%       33.2%
Table 2: Mean accuracy and mAP performance of sparse coding schemes
Feature learning scheme                          Mean accuracy   mAP
Union of unimodal GMM features (Figure 1(c))     66%             23.5%
Multimodal joint GMM feature (Figure 2(a))       68%             25.2%
Union of unimodal RBM features (Figure 1(c))     70%             30.1%
Multimodal joint RBM feature (Figure 2(a))       72%             31.3%
Table 3: Mean accuracy and mAP performance for GMM and RBM on 10Ex/100Ex

4.3 Images and text

In the third set of experiments, we consider learning shared associations between images and text for classification. We evaluate our methods on image-text datasets: Wikipedia (Rasiwasia et al., 2010) and PhotoTweet (Borth et al., 2013).

4.3.1 Wikipedia

Each article in the Wikipedia dataset contains paragraphs of text on a subject together with a corresponding picture. These text-image pairs are annotated with a label from 10 categories: art, biology, geography, history, literature, media, music, royalty, sport, and warfare. The corpus contains a total of 2,866 documents, which we split into five folds. Along with the dataset, the authors supply features for images and text: each image is represented by a histogram over a 128-codeword SIFT (Lowe, 2004) codebook, and each text is represented by a histogram over a 10-topic latent Dirichlet allocation (LDA) model (Blei et al., 2003). We use the SIFT features for images and the LDA features for text as input to the sparse coding algorithms and then train 1-vs-all SVMs. We predict the labels of the test image-text pairs and report the accuracy.

Table 4 reports the results for various feature representations. Comparing unimodal sparse coding of images and text, the text features outperform the image features. In Wikipedia, category membership is mostly driven by text; categorization based solely on the image is ambiguous and difficult even for a human. Thus, it is expected that image features would have lower accuracy than text features. The union of unimodal image and text features (Table 4c) further improves the accuracy. Joint sparse coding (Table 4d) is able to learn multimodal features that go beyond simply concatenating the two unimodal features.

We notice that learning the shared association between images and text can improve classification accuracy even when only a single modality is available for training and testing. Cross-modal by images (Table 4e) improves accuracy by 4.35% compared to unimodal sparse coding of images (Table 4a), and cross-modal by text (Table 4f) achieves 4.54% higher accuracy than unimodal sparse coding of text (Table 4b). When the cross-modal features by images and text are concatenated, the result outperforms all other feature combinations.

For all feature representations, the multimodal features significantly outperform the unimodal features. This shows that the learned shared association between multiple modalities is useful both when a single modality and when multiple modalities are available for training and testing.

Figure 6 compares the classification accuracies of joint sparse coding (Table 4d), cross-modal by text (Table 4f), and multimodal feature union (Table 4g) across the 10 categories. Although multimodal feature union does not achieve the best accuracy for several categories, its accuracy remains close to the best.

Feature representation                         Accuracy
(a) Sparse coding of images (Figure 1(a))      16.93%
(b) Sparse coding of text (Figure 1(b))        61.89%
(c) Unimodal feature union (Figure 1(c))       63.38%
(d) Joint sparse coding (Figure 2(a))          66.44%
(e) Cross-modal by images (Figure 2(b))        21.28%
(f) Cross-modal by text (Figure 2(c))          66.43%
(g) Multimodal feature union (Figure 2(d))     67.23%
Table 4: Classification performance for image-text classification on the Wikipedia dataset.
Figure 6: Category classification comparison of joint sparse coding, cross-modal by text, and multimodal feature union on Wikipedia dataset

4.3.2 PhotoTweet

The PhotoTweet dataset includes 603 Twitter messages (tweets) with their associated photos. The benchmark is designed for predicting Twitter users' sentiment using visual and textual features. PhotoTweet was collected in November 2012 via the PeopleBrowsr API. Ground-truth labels were obtained via Amazon Mechanical Turk annotation, resulting in 470 positive and 133 negative sentiments. The authors of the dataset partitioned it into five folds for cross-validation.

We represent the textual data using a binary bag-of-words embedding, producing a 2,688-dimensional vector per tweet. We process the embedded tweet data into consecutive patches of a configurable size and reduce each patch to 12 dimensions with PCA whitening. For images, we compute a per-image feature vector from non-overlapping patches drawn from a fixed-width receptive field. In unsupervised learning, we precondition the visual and textual patches with mean removal and whitening before sparse coding. We use the same dictionary size for both unimodal and multimodal settings, and for max pooling we use pooling factors in the tens. A sketch of the text-side preprocessing follows.
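
The sketch below mirrors the text-side steps, with a random binary matrix standing in for the real 2,688-dimensional tweet embeddings; the patch size p is an assumed value.

```python
# Sketch of the PhotoTweet text preprocessing: binary bag-of-words vectors,
# split into consecutive patches, each patch whitened down to 12 dimensions.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
bow = (rng.random((603, 2688)) < 0.01).astype(float)  # placeholder binary BoW

p = 64                                      # configurable patch size (assumed)
n = (bow.shape[1] // p) * p
patches = bow[:, :n].reshape(-1, p)         # consecutive patches per tweet

patches = patches - patches.mean(axis=0)    # mean removal
pca = PCA(n_components=12, whiten=True)     # PCA whitening to 12 dims
patches_12d = pca.fit_transform(patches)    # input to sparse coding
```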

In Table 5, we present the sentiment classification performance using linear 1-vs-all SVMs, comparing the sparse coding methods discussed in Section 3. On the Twitter sentiment classification task, image and text features alone are roughly equally useful. The image features complement the text features in the unimodal feature union, which achieves 66.10% (Table 5c). The shared representation learned in the unsupervised stage increases classification performance by 4–6% for cross-modal features compared to unimodal features. The best shallow feature is the multimodal feature union (Table 5g). Finally, we further improve performance by building deep representations that model the correlation across the learned shallow representations. Having tried both architectures in Figures 3(a) and 3(b), we report the better of the two in Table 5.

Table 6 compares the sentiment classification performance of our approach with previous work. The authors of the PhotoTweet dataset use a combination of SentiStrength (Thelwall et al., 2010) for textual features and SentiBank (Borth et al., 2013) for mid-level visual features. The combined method simply concatenates the SentiStrength and SentiBank features and does not learn a shared association between modalities. Our multimodal feature union (Table 5g) outperforms SentiStrength+SentiBank, emphasizing the importance of shared learning across multiple modalities. Baecchi et al. (2015) use an extension of the continuous bag-of-words model for text and a denoising autoencoder for images; again, the textual and visual features are simply concatenated. We compare this method with hierarchical learning using our deep multimodal sparse coding (Table 5h) and show that our method yields a better classification result.

Feature representation                         Accuracy
(a) Sparse coding of images (Figure 1(a))      60.91%
(b) Sparse coding of text (Figure 1(b))        58.07%
(c) Unimodal feature union (Figure 1(c))       66.10%
(d) Joint sparse coding (Figure 2(a))          70.29%
(e) Cross-modal by images (Figure 2(b))        66.64%
(f) Cross-modal by text (Figure 2(c))          62.01%
(g) Multimodal feature union (Figure 2(d))     71.95%
(h) Deep multimodal sparse coding              75.16%
Table 5: Classification performance for image-text sentiment classification on the PhotoTweet dataset.
Feature representation                              Linear SVM   Logistic Regr.
SentiStrength + SentiBank (Borth et al., 2013)      68%          72%
Shallow multimodal sparse coding                    71.95%       74.65%
CBOW-DA-LR (Baecchi et al., 2015)                   N/A          79%
Deep multimodal sparse coding                       75.16%       80.70%
Table 6: Classification performance comparison between multimodal sparse coding and other existing work on the PhotoTweet dataset.

5 Conclusion

We have presented multimodal sparse coding algorithms that model the semantic correlation between modalities. We have shown that multimodal features significantly outperform unimodal features. Our experimental results also indicate that the multimodal features learned by our algorithms are more discriminative than features formed by concatenating multiple unimodal features. In addition, cross-modal features computed using only one modality also lead to better performance than unimodal features, suggesting that we can learn better features for one modality through joint learning of multiple modalities. The effectiveness of our approach is demonstrated in various multimedia applications, including image denoising, MED, category classification, and sentiment classification.

References

  • Baecchi et al. (2015) Baecchi, C., Uricchio, T., Bertini, M., and Bimbo, A.D. A multimodal feature learning approach for sentiment analysis of social network multimedia. In Multimed Tools Appl, 2015.
  • Blei et al. (2003) Blei, D., Ng, A., and Jordan, M. Latent Dirichlet allocation. In JMLR, 2003.
  • Bo et al. (2013) Bo, L., Ren, X., and Fox, D. Multipath Sparse Coding Using Hierarchical Matching Pursuit. In CVPR, 2013.
  • Borth et al. (2013) Borth, D., Ji, R., Chen, T., Breuel, T., and Chang, S. Large-scale Visual Sentiment Ontology and Detectors Using Adjective Noun Pairs. ACM Multimedia Conference, 2013.
  • Campbell et al. (2006) Campbell, W. M., Sturim, D. E., and Reynolds, D. A. Support Vector Machines Using GMM Supervectors for Speaker Verification. IEEE Signal Processing Letters, 13(5):308–311, May 2006.
  • Feng et al. (2014) Feng, F., Wang, X., and Li, R. Cross-modal Retrieval with Correspondence Autoencoder. In ACM Multimedia Conference, 2014.
  • Gurban & Thiran (2009) Gurban, M. and Thiran, J. Information Theoretic Feature Extraction for Audio-Visual Speech Recognition. In IEEE Transactions on Signal Processing, 2009.
  • INRIA. Sparse modeling software (SPAMS). http://spams-devel.gforge.inria.fr/.
  • Jia et al. (2010) Jia, Y., Salzmann, M., and Trevor, D. Factorized Latent Spaces with Structured Sparsity. In NIPS, 2010.
  • Krizhevsky (2009) Krizhevsky, A. Learning multiple layers of features from tiny images. Master’s thesis, University of Toronto, 2009.
  • Lowe (2004) Lowe, D. Distinctive image features from scale-invariant keypoints. In IJCV, 2004.
  • Lucey & Sridharan (2006) Lucey, P. and Sridharan, S. Patch-based Representation of Visual Speech. In ACM VisHCI, 2006.
  • Mairal et al. (2009) Mairal, J., Bach, F., Ponce, J., and Sapiro, G. Online Dictionary Learning for Sparse Coding. In ICML, 2009.
  • Monaci et al. (2009) Monaci, G., Vandergheynst, P., and Sommer, F. Learning Bimodal Structure in Audio-Visual Data. In IEEE Trans. on Neural Networks, 2009.
  • Morency et al. (2011) Morency, L., Mihalcea, R., and Doshi, P. Towards Multimodal Sentiment Analysis: Harvesting Opinions from the Web. In ACM ICMI, 2011.
  • Ngiam et al. (2011) Ngiam, J., Khosla, A., Kim, M., Nam, J., Lee, H., and Ng, A. Multimodal deep learning. In ICML, 2011.
  • NIST. 2014 TRECVID Multimedia Event Detection & Multimedia Event Recounting Tracks. http://nist.gov/itl/iad/mig/med14.cfm.
  • Olshausen & Field (1997) Olshausen, B. and Field, D. Sparse Coding with an Overcomplete Basis Set: Strategy Employed by V1? Vision research, 1997.
  • Papandreou et al. (2007) Papandreou, G., Katsamanis, A., Pitsikalis, V., and Maragos, P. Multimodal Fusion and Learning with Uncertain Features Applied to Audiovisual Speech Recognition. In IEEE Workshop on Multimedia Signal Processing, 2007.
  • Poirson & Idrees (2013) Poirson, P. and Idrees, H. Multimodal stacked denoising autoencoders. Technical Report, 2013.
  • Rasiwasia et al. (2010) Rasiwasia, N., Pereira, J., Coviello, E., and Doyle, G. A New Approach to Cross-Modal Multimedia Retrieval. In ACM Multimedia Conference, 2010.
  • Simonyan & Zisserman (2014) Simonyan, K. and Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. CoRR, abs/1409.1556, 2014.
  • Sohn et al. (2014) Sohn, K., Shang, W., and Lee, H. Improved Multimodal Deep Learning with Variation of Information. In NIPS, 2014.
  • Stein et al. (2009) Stein, B., Stanford, T., and Rowland, B. The Neural Basis of Multisensory Integration in the Midbrain: Its Organization and Maturation. Hearing research, 258(1):4–15, 2009.
  • Thelwall et al. (2010) Thelwall, M., Buckley, K., Paltoglou, G., Cai, D., and Kappas, A. Sentiment strength detection in short informal text. Journal of the American Society for Information Science and Technology, 2010.
  • Vincent et al. (2008) Vincent, P., Larochelle, H., Bengio, Y., and Manzagol, P. Extracting and composing robust features with denoising autoencoders. In ICML, 2008.
  • Wright et al. (2010) Wright, J., Ma, Y., Mairal, J., Sapiro, G., Huang, T., and Yan, S. Sparse representation for computer vision and pattern recognition. In Proceedings of the IEEE, 2010.
  • Yang et al. (2010) Yang, J., Wright, J., Huang, T., and Ma, Y. Image Super-Resolution Via Sparse Representation. IEEE Trans. on Image Processing, 19(11):2861–2873, 2010.
  • Yu et al. (2011) Yu, K., Lin, Y., and Lafferty, J. Learning Image Representations from the Pixel Level via Hierarchical Sparse Coding. In CVPR, 2011.
  • Zhuang et al. (2013) Zhuang, Y., Wang, Y., Wu, F., Zhang, Y., and Lu, W. Supervised Coupled Dictionary Learning with Group Structures for Multi-Modal Retrieval. In AAAI, 2013.