Discovering Multiple Constraints that are Frequently Approximately Satisfied

01/10/2013
by Geoffrey E. Hinton, et al.

Some high-dimensional data sets can be modelled by assuming that there are many different linear constraints, each of which is Frequently Approximately Satisfied (FAS) by the data. The probability of a data vector under the model is then proportional to the product of the probabilities of its constraint violations. We describe three methods of learning products of constraints using a heavy-tailed probability distribution for the violations.
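As a concrete illustration of the product-of-constraints form described above, the sketch below evaluates an unnormalized log-probability for a data vector under a set of linear constraints whose violations follow a heavy-tailed distribution. This is a minimal sketch under stated assumptions, not the paper's implementation: the Student-t violation density, the weight matrix W, the degrees-of-freedom parameter nu, and the function name are illustrative choices, and the paper's specific heavy-tailed distribution and learning procedures may differ.

```python
import numpy as np

def fas_unnormalized_log_prob(x, W, nu=2.0):
    """Unnormalized log-probability of a data vector x under a product of
    linear constraints (the rows of W), where each constraint violation
    w_j . x is scored by a heavy-tailed density (a Student-t kernel is
    assumed here purely for illustration)."""
    violations = W @ x  # one scalar violation per constraint
    # Heavy tails penalize occasional large violations far less severely
    # than a Gaussian would, which is what lets a constraint be only
    # "frequently approximately" satisfied.
    log_p = -0.5 * (nu + 1.0) * np.log1p(violations ** 2 / nu)
    return np.sum(log_p)  # product of violation probabilities -> sum of logs
```

Since the product only gives the probability up to a normalizing constant, learning the constraint weights W is the nontrivial part; the paper describes three methods for learning such products of constraints.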

