SPARLING: Learning Latent Representations with Extremely Sparse Activations

02/03/2023
by Kavi Gupta, et al.

Real-world processes often contain intermediate state that can be modeled as an extremely sparse tensor. We introduce Sparling, a new kind of informational bottleneck that explicitly models this state by enforcing extreme activation sparsity. We additionally demonstrate that this technique can be used to learn the true intermediate representation with no additional supervision (i.e., from only end-to-end labeled examples), and thus improve the interpretability of the resulting models. On our DigitCircle domain, we are able to get an intermediate state prediction accuracy of 98.84%.
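The abstract does not specify the exact mechanism Sparling uses to enforce sparsity, but the general idea of an extreme-sparsity bottleneck can be illustrated with a simple sketch: keep only a small fraction of the largest-magnitude activations per example and zero out the rest. The function name, the top-k selection strategy, and the `density` parameter below are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def sparse_bottleneck(activations, density=0.01):
    """Illustrative sparsity bottleneck (not Sparling's exact mechanism):
    for each example in the batch, keep only the top `density` fraction of
    activations by magnitude and zero the rest."""
    flat = activations.reshape(activations.shape[0], -1)
    # Number of activations to keep per example (at least one).
    k = max(1, int(round(density * flat.shape[1])))
    # Per-example threshold: the k-th largest magnitude.
    thresh = np.sort(np.abs(flat), axis=1)[:, -k][:, None]
    # Zero everything below the threshold.
    sparse = np.where(np.abs(flat) >= thresh, flat, 0.0)
    return sparse.reshape(activations.shape)

# Example: keep the top 20% of a 2x10 activation map.
x = np.arange(1.0, 21.0).reshape(2, 10)
out = sparse_bottleneck(x, density=0.2)  # keeps 2 values per row
```

In a trained network, a layer like this forces downstream computation to rely on a handful of localized activations, which is what makes the learned intermediate state inspectable.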

Related Research

09/12/2019 · MOSS: End-to-End Dialog System Framework with Modular Supervision
A major bottleneck in training end-to-end task-oriented dialog system is...

04/17/2018 · Learning Sparse Latent Representations with the Deep Copula Information Bottleneck
Deep latent variable models are powerful tools for representation learni...

01/17/2023 · Tracing and Manipulating Intermediate Values in Neural Math Problem Solvers
How language models process complex input that requires multiple steps o...

11/15/2022 · Breakpoint Transformers for Modeling and Tracking Intermediate Beliefs
Can we teach natural language understanding models to track their belief...

04/05/2021 · Paired Examples as Indirect Supervision in Latent Decision Models
Compositional, structured models are appealing because they explicitly d...

01/08/2018 · Deep Supervision with Intermediate Concepts
Recent data-driven approaches to scene interpretation predominantly pose...

07/25/2019 · Interpretability Beyond Classification Output: Semantic Bottleneck Networks
Today's deep learning systems deliver high performance based on end-to-e...
