Learning performance in inverse Ising problems with sparse teacher couplings

12/25/2019
by Alia Abbara et al.

We investigate the learning performance of the pseudolikelihood maximization method for inverse Ising problems. In the teacher-student scenario, under the assumption that the teacher's couplings are sparse and the student does not know the graphical structure, the learning curve and order parameters are assessed in the typical case using the replica and cavity methods from statistical mechanics. Our formulation also applies to a certain class of cost functions possessing locality; the standard likelihood does not belong to that class. The derived analytical formulas indicate that perfect inference of the presence/absence of the teacher's couplings is possible in the thermodynamic limit, where the number of spins N is taken to infinity while the dataset size M is kept proportional to N, as long as α = M/N > 2. Meanwhile, the formulas also show that the estimated values of the couplings that truly exist in the teacher tend to be overestimated in absolute value, manifesting the presence of an estimation bias. These results are considered to be exact in the thermodynamic limit on locally tree-like networks, such as regular random or Erdős–Rényi graphs. Numerical simulations fully support the theoretical predictions. Additional biases in the estimators on loopy graphs are also discussed.
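To give a concrete picture of the setup, the following minimal Python sketch (not the authors' code) plays out the teacher-student scenario: the pseudolikelihood replaces the intractable likelihood by the product of conditionals P(s_i | s_rest) ∝ exp(s_i h_i) with h_i = Σ_j J_ij s_j, so each row of the coupling matrix can be estimated by a separate logistic regression. All numerical values here (N, c, M, K0, and the decision threshold) are illustrative assumptions, not taken from the paper.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Hypothetical sparse teacher on an Erdos-Renyi-like graph with mean
    # degree c; N, c, M, K0 are illustrative values, not the paper's.
    N, c, M, K0 = 40, 3, 400, 0.4              # alpha = M/N = 10 > 2
    J = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1, N):
            if rng.random() < c / N:
                J[i, j] = J[j, i] = rng.choice([-K0, K0])

    # Gibbs sampling from P(s) ~ exp(sum_{i<j} J_ij s_i s_j), spins in
    # {-1,+1}; one sample is kept per sweep after a burn-in period.
    s = rng.choice([-1.0, 1.0], size=N)
    burn = 100 * N
    samples = []
    for t in range(burn + M * N):
        i = rng.integers(N)
        h = J[i] @ s                           # local field at spin i
        s[i] = 1.0 if rng.random() < 1.0 / (1.0 + np.exp(-2.0 * h)) else -1.0
        if t >= burn and (t - burn) % N == 0:
            samples.append(s.copy())
    S = np.array(samples)

    # Pseudolikelihood maximization: P(s_i = +1 | s_rest) = sigmoid(2 h_i),
    # so a nearly unregularized logistic regression of s_i on the remaining
    # spins estimates the weights 2*J_ij; large C approximates no penalty.
    J_hat = np.zeros((N, N))
    for i in range(N):
        X = np.delete(S, i, axis=1)
        clf = LogisticRegression(C=1e4, max_iter=2000).fit(X, S[:, i])
        J_hat[i, np.arange(N) != i] = clf.coef_[0] / 2.0

    # Structure recovery with an ad hoc threshold at half the teacher
    # strength, plus a look at the bias on the truly existing couplings.
    present = np.abs(J_hat) > K0 / 2.0
    print("pairs classified correctly:", np.mean(present == (J != 0)))
    print("mean |J_hat| on true edges:", np.abs(J_hat[J != 0]).mean())
    print("teacher strength K0       :", K0)

A run of this kind lets one probe the two qualitative predictions of the abstract: with α well above 2, the thresholded estimator should recover the presence/absence of the couplings, and the mean |J_hat| on the true edges can be compared against K0 to observe the overestimation in absolute value.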


