New Optimisation Methods for Machine Learning

by Aaron Defazio

A thesis submitted for the degree of Doctor of Philosophy of The Australian National University.

In this work we introduce several new optimisation methods for problems in machine learning. Our algorithms broadly fall into two categories: optimisation of finite sums, and of graph-structured objectives. The finite sum problem is the minimisation of objective functions that are naturally expressed as a summation over a large number of terms, where each term has a similar or identical weight. Such objectives arise most often in machine learning through the empirical risk minimisation framework in the batch (non-online) learning setting. The second category, graph-structured objectives, consists of objectives that result from applying maximum likelihood estimation to Markov random field models. Unlike the finite sum case, all of the non-linearity is contained within a partition function term, which does not readily decompose into a summation.

For the finite sum problem, we introduce the Finito and SAGA algorithms, as well as variants of each. For graph-structured problems, we take three complementary approaches: learning the parameters for a fixed structure, learning the structure independently, and learning both simultaneously. Specifically, for the combined approach, we introduce a new method for encouraging graph structures with the "scale-free" property. For the structure learning problem, we introduce SHORTCUT, an O(n^2.5) expected-time approximate structure learning method for Gaussian graphical models. For problems where the structure is known but the parameters are unknown, we introduce an approximate maximum likelihood learning algorithm that is capable of learning a useful subclass of Gaussian graphical models.
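To make the finite sum setting concrete, the SAGA algorithm mentioned above minimises an average of n terms by keeping a table of the most recently seen gradient of each term and correcting each stochastic step with that table's running mean. The sketch below follows the published SAGA update rule; the function and argument names are illustrative, not code from the thesis.

```python
import numpy as np

def saga(grad_i, x0, n, step, iters, rng=None):
    """Minimise (1/n) * sum_i f_i(x) with the SAGA update.

    grad_i(i, x) should return the gradient of the i-th term at x.
    This is a minimal sketch; names and defaults are illustrative.
    """
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float).copy()
    # Table of stored per-term gradients, initialised at x0.
    table = np.array([grad_i(i, x) for i in range(n)])
    avg = table.mean(axis=0)  # running mean of the stored gradients
    for _ in range(iters):
        j = rng.integers(n)                 # sample a term uniformly
        g = grad_i(j, x)
        # SAGA step: unbiased gradient estimate with variance reduction.
        x = x - step * (g - table[j] + avg)
        avg = avg + (g - table[j]) / n      # keep the mean consistent
        table[j] = g
    return x
```

For example, on a least-squares problem with terms f_i(x) = 0.5 * (a_i @ x - b_i)^2, passing `grad_i = lambda i, x: A[i] * (A[i] @ x - b[i])` recovers the least-squares solution; the step size is typically chosen on the order of 1/(3L), where L is the largest per-term smoothness constant.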




