
On Learning Prediction-Focused Mixtures

by Abhishek Sharma et al.

Probabilistic models let us encode latent structures that both model the data and are, ideally, also useful for specific downstream tasks. Among these, mixture models and their time-series counterparts, hidden Markov models, identify discrete components in the data. In this work, we focus on a constrained-capacity setting, where we want to learn a model with relatively few components (e.g. for interpretability). To maintain prediction performance, we introduce prediction-focused modeling for mixtures, which automatically selects the input dimensions relevant to the prediction task. Our approach identifies relevant signal in the input, outperforms models that are not prediction-focused, and is easy to optimize; we also characterize when prediction-focused modeling can be expected to work.
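To make the core idea concrete, here is a minimal toy sketch, not the paper's actual method: we approximate "selecting the dimensions relevant to the prediction task" with a simple per-dimension correlation score against the label, then fit a small (2-component) clustering model on only the selected dimensions. All names and the selection heuristic here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: only the first 2 of 10 dimensions carry signal about the label y.
n, d, d_signal = 500, 10, 2
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, d))
X[:, :d_signal] += 3.0 * y[:, None]  # shift the informative dims by class

# Crude stand-in for prediction-focused dimension selection (an assumption,
# not the paper's method): score each dimension by |correlation with y|
# and keep the top-k.
scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(d)])
selected = np.argsort(scores)[-d_signal:]

def kmeans2(X, iters=20):
    """Tiny 2-cluster k-means standing in for a 2-component mixture."""
    # Deterministic init: the two points farthest apart along the first dim.
    c = X[np.argsort(X[:, 0])[[0, -1]]].copy()
    for _ in range(iters):
        z = np.argmin(((X[:, None, :] - c[None]) ** 2).sum(-1), axis=1)
        for j in (0, 1):
            if (z == j).any():
                c[j] = X[z == j].mean(0)
    return z

# Cluster on only the selected dimensions, then check how well the
# components line up with the label (up to cluster relabeling).
z = kmeans2(X[:, selected])
acc = max((z == y).mean(), (z != y).mean())
print(sorted(selected.tolist()), round(acc, 2))
```

With signal concentrated in two dimensions, the correlation filter recovers them and a 2-component model on those dimensions predicts the label well, whereas the same small model on all 10 dimensions must spend its limited capacity explaining the 8 irrelevant ones.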


A Method of Moments for Mixture Models and Hidden Markov Models

Mixture models are a fundamental tool in applied statistics and machine ...

Prediction Focused Topic Models via Vocab Selection

Supervised topic models are often sought to balance prediction quality a...

Prediction Focused Topic Models for Electronic Health Records

Electronic Health Record (EHR) data can be represented as discrete count...

Masked prediction tasks: a parameter identifiability view

The vast majority of work in self-supervised learning, both theoretical ...

Learning Graphical Models of Images, Videos and Their Spatial Transformations

Mixtures of Gaussians, factor analyzers (probabilistic PCA) and hidden M...

Investigation on the use of Hidden-Markov Models in automatic transcription of music

Hidden Markov Models (HMMs) are a ubiquitous tool to model time series d...

Pomegranate: fast and flexible probabilistic modeling in python

We present pomegranate, an open source machine learning package for prob...