Learning Ising Models with Independent Failures

02/13/2019
by Surbhi Goel et al.

We give the first efficient algorithm for learning the structure of an Ising model that tolerates independent failures; that is, each entry of the observed sample is missing independently with some unknown probability p. Our algorithm matches the essentially optimal runtime and sample complexity bounds of recent work on learning Ising models due to Klivans and Meka (2017). We devise a novel unbiased estimator for the gradient of the Interaction Screening Objective (ISO) due to Vuffray et al. (2016) and apply a stochastic multiplicative gradient descent algorithm to minimize this objective. Solutions to this minimization recover the neighborhood information of the underlying Ising model on a node-by-node basis.
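The pipeline the abstract describes — minimize the Interaction Screening Objective for each node, then read off that node's neighbors from the minimizer — can be illustrated with a simplified sketch. The sketch below minimizes the ISO of Vuffray et al. (2016) on fully observed ±1 samples by plain gradient descent; it omits the paper's actual contributions (the unbiased gradient estimator for samples with missing entries and the multiplicative update scheme). All function names and parameters here are illustrative, not from the paper.

```python
import numpy as np

def iso_and_grad(theta, X, u):
    """Interaction Screening Objective for node u on fully observed
    samples X (m x n, entries in {-1, +1}), and its gradient.
    ISO(theta) = mean over samples of exp(-x_u * <theta, x_{-u}>)."""
    n = X.shape[1]
    others = np.delete(np.arange(n), u)          # all nodes except u
    z = -X[:, u] * (X[:, others] @ theta)        # per-sample exponent
    w = np.exp(z)                                # per-sample ISO terms
    obj = w.mean()
    # d/d theta_v of exp(-x_u <theta, x_{-u}>) = exp(...) * (-x_u x_v)
    grad = (w[:, None] * (-X[:, [u]] * X[:, others])).mean(axis=0)
    return obj, grad

def learn_neighborhood(X, u, lr=0.1, steps=500):
    """Minimize the ISO for node u by gradient descent. Entries of the
    minimizer that are far from zero indicate u's neighbors in the
    Ising graph; the index order follows np.delete(np.arange(n), u)."""
    theta = np.zeros(X.shape[1] - 1)
    for _ in range(steps):
        _, g = iso_and_grad(theta, X, u)
        theta -= lr * g
    return theta
```

The ISO is convex in theta, so plain gradient descent suffices for this sketch; the finite-sample guarantees in the literature additionally use ℓ1 regularization to zero out non-edges, which is skipped here for brevity.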


