On Computationally Efficient Learning of Exponential Family Distributions

09/12/2023
by Abhin Shah et al.

We consider the classical problem of learning, to arbitrary accuracy, the natural parameters of a k-parameter truncated minimal exponential family from i.i.d. samples in a computationally and statistically efficient manner. We focus on the setting where the support as well as the natural parameters are appropriately bounded. While the traditional maximum likelihood estimator for this class of exponential families is consistent, asymptotically normal, and asymptotically efficient, evaluating it is computationally hard. In this work, we propose a novel loss function and a computationally efficient estimator that is consistent as well as asymptotically normal under mild conditions. We show that, at the population level, our method can be viewed as maximum likelihood estimation of a re-parameterized distribution belonging to the same class of exponential families. Further, we show that our estimator can be interpreted as a solution to minimizing a particular Bregman score as well as an instance of minimizing the surrogate likelihood. We also provide finite-sample guarantees: achieving an error of α (in ℓ_2-norm) in parameter estimation requires sample complexity O(poly(k)/α^2). When tailored to node-wise-sparse Markov random fields, our method achieves the order-optimal sample complexity O(log(k)/α^2). Finally, we demonstrate the performance of our estimator via numerical experiments.
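For context, a k-parameter minimal exponential family in natural parameterization takes the standard form below; the particular sufficient statistic φ and the paper's novel loss function are not specified in this abstract, so only the classical density and MLE objective are sketched here.

    p_θ(x) = exp( ⟨θ, φ(x)⟩ − A(θ) ),   where   A(θ) = log ∫ exp( ⟨θ, φ(x)⟩ ) dx

Given i.i.d. samples x_1, …, x_n, the maximum likelihood estimator is

    θ̂_MLE = argmax_θ (1/n) Σ_{i=1}^n ( ⟨θ, φ(x_i)⟩ − A(θ) )

Evaluating the log-partition function A(θ) requires integrating over the (bounded) support, which is the source of the computational hardness of exact MLE noted above and what the proposed loss function is designed to sidestep.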

