Compact Compositional Models

12/11/2014
by Marc Goessling, et al.

Learning compact and interpretable representations is a natural task that has not been solved satisfactorily even for simple binary datasets. In this paper, we review various ways of composing experts for binary data and argue that competitive forms of interaction are best suited to learning low-dimensional representations. We propose a new composition rule that discourages experts from focusing on similar structures and that penalizes opposing votes strongly, so that abstaining from voting becomes more attractive. We also introduce a novel sequential initialization procedure based on a process of oversimplification and correction. Experiments show that our approach learns very intuitive models.
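
The abstract does not spell out the composition rule, but the mechanics of per-pixel expert voting can be illustrated with one standard pooling, a product of Bernoulli experts, where votes are added in log-odds space. This is a sketch for intuition only, not the rule proposed in the paper: an expert voting 0.5 contributes zero log-odds and thus abstains, while under this baseline opposing votes merely cancel; the proposed rule penalizes such opposition more strongly, so abstention becomes the more attractive option.

    import numpy as np

    def compose_experts(probs):
        """Combine per-pixel Bernoulli votes from several experts
        (product-of-experts pooling; illustrative, not the paper's rule).

        probs: array of shape (num_experts, num_pixels), values in (0, 1).
        A vote of 0.5 contributes zero log-odds, i.e. an abstention.
        Returns the composed Bernoulli means, shape (num_pixels,).
        """
        logits = np.log(probs) - np.log1p(-probs)  # log-odds of each vote
        combined = logits.sum(axis=0)              # additive pooling
        return 1.0 / (1.0 + np.exp(-combined))     # back to probabilities

    # Two experts on a 4-pixel binary image: each is confident on its
    # own half of the pixels and abstains (votes 0.5) on the rest.
    experts = np.array([
        [0.9, 0.9, 0.5, 0.5],
        [0.5, 0.5, 0.1, 0.1],
    ])
    print(compose_experts(experts))  # ~ [0.9, 0.9, 0.1, 0.1]

Under this baseline, an expert voting 0.9 against another voting 0.1 on the same pixel cancels back to 0.5; a rule that instead penalizes the disagreement pushes experts toward disjoint structures, which is the competitive interaction the abstract argues for.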

Related research

02/16/2017 · Dynamic Partition Models
We present a new approach for learning compact and intuitive distributed...

02/28/2023 · Improving Expert Specialization in Mixture of Experts
Mixture of experts (MoE), introduced over 20 years ago, is the simplest ...

06/18/2018 · BinGAN: Learning Compact Binary Descriptors with a Regularized GAN
In this paper, we propose a novel regularization method for Generative A...

05/30/2023 · Bottleneck Structure in Learned Features: Low-Dimension vs Regularity Tradeoff
Previous work has shown that DNNs with large depth L and L_2-regularizat...

02/22/2023 · Neural-based classification rule learning for sequential data
Discovering interpretable patterns for classification of sequential data...

05/18/2020 · Causal Feature Learning for Utility-Maximizing Agents
Discovering high-level causal relations from low-level data is an import...
