Scaling the Indian Buffet Process via Submodular Maximization

04/11/2013
by Colorado Reed et al.

Inference for latent feature models is inherently difficult because the inference space grows exponentially with the size of the input data and the number of latent features. In this work, we use the maximization-expectation framework of Kurihara & Welling (2008) to perform approximate MAP inference for linear-Gaussian latent feature models with an Indian Buffet Process (IBP) prior. This formulation yields a submodular function of the features that corresponds to a lower bound on the model evidence. By adding a constant to this function, we obtain a nonnegative submodular function that can be maximized by a greedy algorithm with an approximation guarantee of at least one third of the optimal solution. Our inference method scales linearly with the size of the input data, and we demonstrate its efficacy on the largest datasets currently analyzed with an IBP model.
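
The abstract's key computational step is maximizing a nonnegative submodular set function with a greedy procedure that guarantees at least one third of the optimum. The sketch below is an illustration rather than the authors' code: it implements the deterministic "double greedy" algorithm of Buchbinder et al. (2012), a standard method carrying exactly this 1/3 guarantee for unconstrained nonnegative submodular maximization. The function name greedy_submodular_max, the objective f, and the toy coverage objective are placeholders standing in for the paper's shifted evidence lower bound over latent features.

def greedy_submodular_max(f, ground_set):
    """Approximately maximize a nonnegative submodular set function f."""
    elements = list(ground_set)
    X = set()              # grows from the empty set
    Y = set(elements)      # shrinks from the full ground set
    for e in elements:
        gain_add = f(X | {e}) - f(X)      # marginal benefit of adding e to X
        gain_drop = f(Y - {e}) - f(Y)     # marginal benefit of dropping e from Y
        if gain_add >= gain_drop:
            X.add(e)
        else:
            Y.discard(e)
    return X               # X == Y when the loop ends

if __name__ == "__main__":
    # Toy objective (hypothetical): coverage of a small universe by candidate "features".
    coverage = {0: {1, 2}, 1: {2, 3}, 2: {4}}
    def f(S):
        return len(set().union(*(coverage[i] for i in S))) if S else 0
    print(greedy_submodular_max(f, coverage))   # prints {0, 1, 2}

Because the toy objective above is monotone, the procedure simply keeps every element; the 1/3 guarantee matters for the non-monotone objectives that arise from the evidence lower bound described in the abstract.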
