Deep Transformation-Invariant Clustering

06/19/2020
by Tom Monnier et al.

Recent advances in image clustering typically focus on learning better deep representations. In contrast, we present an orthogonal approach that does not rely on abstract features but instead learns to predict image transformations and performs clustering directly in image space. This learning process fits naturally into the gradient-based training of K-means and Gaussian mixture models, without requiring any additional loss or hyper-parameters. It leads us to two new deep transformation-invariant clustering frameworks, which jointly learn prototypes and transformations. More specifically, we use deep learning modules that enable us to resolve invariance to spatial, color, and morphological transformations. Our approach is conceptually simple and comes with several advantages, including the possibility of easily adapting the desired invariance to the task and strong interpretability of both cluster centers and cluster assignments. We demonstrate that our approach yields competitive and highly promising results on standard image clustering benchmarks. Finally, we showcase its robustness and the advantages of its improved interpretability by visualizing clustering results on real photograph collections.
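The core idea of transformation-invariant clustering can be sketched in a few lines: assign each image to the prototype that best reconstructs it after an optimal transformation. The toy version below replaces the paper's learned deep transformation modules with an exhaustive search over small integer spatial shifts; the function name and the discrete-shift simplification are mine, not the authors' implementation:

```python
import numpy as np

def shift(img, dx, dy):
    # Wrap-around spatial shift: a toy stand-in for a learned transformation.
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

def ti_kmeans_assign(images, prototypes, shifts=(-1, 0, 1)):
    """Transformation-invariant assignment step: each image goes to the
    prototype with the lowest reconstruction error over allowed shifts."""
    labels = []
    for img in images:
        best_dist, best_k = np.inf, -1
        for k, proto in enumerate(prototypes):
            for dx in shifts:
                for dy in shifts:
                    d = np.sum((img - shift(proto, dx, dy)) ** 2)
                    if d < best_dist:
                        best_dist, best_k = d, k
        labels.append(best_k)
    return np.array(labels)
```

In the actual frameworks, the transformation parameters are predicted by neural modules (spatial, color, and morphological) and optimized jointly with the prototypes by gradient descent, rather than searched over a discrete grid as above.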



Related research

- Multi-Modal Deep Clustering: Unsupervised Partitioning of Images (12/05/2019)
- Variational Clustering: Leveraging Variational Autoencoders for Image Clustering (05/10/2020)
- Is Robustness To Transformations Driven by Invariant Neural Representations? (06/30/2020)
- Invariant Tensor Feature Coding (06/05/2019)
- Invariance-based Multi-Clustering of Latent Space Embeddings for Equivariant Learning (07/25/2021)
- Feature Lenses: Plug-and-play Neural Modules for Transformation-Invariant Visual Representations (04/12/2020)
- Learning Features and their Transformations by Spatial and Temporal Spherical Clustering (08/10/2013)
