Multiview Representation Learning for a Union of Subspaces

12/30/2019
by Nils Holzenberger, et al.

Canonical correlation analysis (CCA) is a popular technique for learning representations that are maximally correlated across multiple views of data. In this paper, we extend the CCA-based framework to learn a multiview mixture model. We show that the proposed model, together with a set of simple heuristics, yields improvements over standard CCA, as measured by performance on downstream tasks. Our experimental results show that our correlation-based objective meaningfully generalizes the CCA objective to a mixture of CCA models.
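To make the baseline concrete, here is a minimal sketch of the standard linear CCA objective the paper generalizes: each view is whitened by its own covariance, and the singular values of the whitened cross-covariance are the canonical correlations. This is the textbook formulation, not the paper's mixture model; the function name and regularization constant are illustrative choices.

```python
import numpy as np

def cca_correlations(X, Y, reg=1e-6):
    """Classical linear CCA: canonical correlations between views X and Y.

    X: (n, dx) array, Y: (n, dy) array, rows are paired samples.
    Returns min(dx, dy) correlations sorted in decreasing order.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    # Per-view covariances (with a small ridge term for stability)
    # and the cross-view covariance.
    Sxx = X.T @ X / (n - 1) + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / (n - 1) + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / (n - 1)
    # Whiten each view via a Cholesky factor; the singular values of the
    # whitened cross-covariance are exactly the canonical correlations.
    Lx = np.linalg.cholesky(Sxx)
    Ly = np.linalg.cholesky(Syy)
    M = np.linalg.solve(Lx, Sxy) @ np.linalg.inv(Ly).T
    corrs = np.linalg.svd(M, compute_uv=False)
    return np.clip(corrs, 0.0, 1.0)
```

A mixture of CCA models, as studied in the paper, would fit several such view-pair models and assign each sample (softly or via heuristics) to one component; the sketch above corresponds to the single-component case.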

Related research

02/08/2017  Deep Generalized Canonical Correlation Analysis
06/15/2017  A Mixture Model for Learning Multi-Sense Word Embeddings
07/11/2020  Driver Behavior Modelling at the Urban Intersection via Canonical Correlation Analysis
06/14/2021  Latent Correlation-Based Multiview Learning and Self-Supervision: A Unifying Perspective
10/30/2020  Multiview Variational Graph Autoencoders for Canonical Correlation Analysis
06/27/2012  Copula Mixture Model for Dependency-seeking Clustering
11/19/2015  An Information Retrieval Approach to Finding Dependent Subspaces of Multiple Views
