Clustering-Oriented Representation Learning with Attractive-Repulsive Loss

12/18/2018
by Kian Kenyon-Dean, et al.

The standard loss function used to train neural network classifiers, categorical cross-entropy (CCE), seeks to maximize accuracy on the training data; building useful representations is not a necessary byproduct of this objective. In this work, we propose clustering-oriented representation learning (COREL) as an alternative to CCE within a generalized attractive-repulsive loss framework. COREL encourages the representations in the final hidden layer to form natural clusters according to a predefined similarity function. Despite being simple to implement, COREL variants match or outperform CCE in a variety of scenarios, including image and news article classification with both feed-forward and convolutional neural networks. Analysis of the latent spaces produced by different similarity functions yields insight into the use cases each variant serves: Cosine-COREL produces a consistently clusterable latent space, while Gaussian-COREL consistently achieves better classification accuracy than CCE.
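Based only on the description above, the sketch below shows what an attractive-repulsive loss with a cosine similarity function might look like in PyTorch: each class gets a learned centroid, and a latent vector is pulled toward its own class centroid while being pushed away from the others. The class name `CosineAttractiveRepulsiveLoss`, the `repulsion_weight` parameter, and the clamping of the repulsive term are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CosineAttractiveRepulsiveLoss(nn.Module):
    """Toy attractive-repulsive loss: pull each latent vector toward a learned
    centroid for its own class and push it away from the other class centroids,
    measuring similarity with cosine similarity (a sketch, not the paper's exact loss)."""

    def __init__(self, num_classes: int, latent_dim: int, repulsion_weight: float = 1.0):
        super().__init__()
        # One learnable centroid per class, analogous to the final-layer weight vectors.
        self.centroids = nn.Parameter(torch.randn(num_classes, latent_dim))
        self.repulsion_weight = repulsion_weight

    def forward(self, z: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # z: (batch, latent_dim) latent representations from the final hidden layer
        # y: (batch,) integer class labels
        sims = F.cosine_similarity(z.unsqueeze(1), self.centroids.unsqueeze(0), dim=-1)  # (batch, num_classes)
        one_hot = F.one_hot(y, num_classes=self.centroids.size(0)).to(sims.dtype)
        attractive = (one_hot * sims).sum(dim=1)                     # similarity to the true class centroid
        repulsive = ((1 - one_hot) * sims).clamp(min=0).sum(dim=1)   # positive similarity to other centroids
        # Maximize attraction, minimize repulsion.
        return (-attractive + self.repulsion_weight * repulsive).mean()


# Usage: attach the loss to any encoder producing latent vectors z, e.g.
# loss_fn = CosineAttractiveRepulsiveLoss(num_classes=10, latent_dim=128)
# loss = loss_fn(z, y)
```

A Gaussian variant in the spirit of Gaussian-COREL would swap the cosine similarity for an RBF-style similarity such as exp(-||z - c||^2 / (2 * sigma^2)); the structure of the attractive and repulsive terms stays the same.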
