
Prior specification via prior predictive matching: Poisson matrix factorization and beyond

Hyperparameter optimization for machine learning models is typically carried out by some form of cross-validation procedure or global optimization, both of which require running the learning algorithm numerous times. We show that for Bayesian hierarchical models there is an appealing alternative that allows selecting good hyperparameters without learning the model parameters during the process at all, facilitated by the prior predictive distribution that marginalizes out the model parameters. We propose an approach that matches suitable statistics of the prior predictive distribution with ones provided by an expert and apply the general concept to matrix factorization models. For some Poisson matrix factorization models we can analytically obtain exact hyperparameters, including the number of factors, and for more complex models we propose a model-independent optimization procedure.

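To make the idea concrete, below is a minimal sketch of prior predictive matching for a Gamma-Poisson matrix factorization model. It is not the authors' code: the Gamma shape/rate hyperparameter names (a, b, c, d), the fixed number of factors K, and the expert target values are illustrative assumptions. The sketch uses the closed-form prior predictive mean and variance of a single count entry and matches them to expert-provided targets by generic numerical optimization, in the spirit of the model-independent procedure described in the abstract.

# Minimal sketch of prior predictive matching (hypothetical names, not the paper's code).
# Model: Y_ij ~ Poisson(sum_k theta_ik * beta_jk),
#        theta_ik ~ Gamma(shape=a, rate=b),  beta_jk ~ Gamma(shape=c, rate=d).
import numpy as np
from scipy.optimize import minimize

def prior_predictive_moments(a, b, c, d, K):
    """Closed-form mean and variance of one entry Y_ij under the prior predictive."""
    mean = K * (a / b) * (c / d)
    # Law of total variance: Var[Y] = E[rate] + Var[rate],
    # with Var[rate] = K * Var(theta * beta) for i.i.d. Gamma factors.
    var_rate = K * (a * c) / (b**2 * d**2) * (a + c + 1)
    return mean, mean + var_rate

def matching_loss(log_params, K, target_mean, target_var):
    """Squared mismatch (on log scale) between prior predictive moments and expert targets."""
    a, b, c, d = np.exp(log_params)  # log-parametrization keeps hyperparameters positive
    mean, var = prior_predictive_moments(a, b, c, d, K)
    return (np.log(mean) - np.log(target_mean))**2 + (np.log(var) - np.log(target_var))**2

# Expert-provided statistics of the data to be modelled (illustrative values only).
target_mean, target_var = 2.0, 10.0
K = 5  # number of factors; fixed here, although the paper also treats it as a hyperparameter

res = minimize(matching_loss, x0=np.zeros(4), args=(K, target_mean, target_var),
               method="Nelder-Mead")
a, b, c, d = np.exp(res.x)
print("matched hyperparameters:", a, b, c, d)
print("implied prior predictive moments:", prior_predictive_moments(a, b, c, d, K))

For models where the prior predictive moments have no closed form, the same loss can be evaluated by Monte Carlo: sample hyperparameter-conditioned factors, simulate data, and compare the simulated statistics to the expert targets.
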
Related research

05/17/2023 - Automatic Hyperparameter Tuning in Sparse Matrix Factorization
We study the problem of hyperparameter tuning in sparse matrix factoriza...

09/27/2019 - In-training Matrix Factorization for Parameter-frugal Neural Machine Translation
In this paper, we propose the use of in-training matrix factorization to...

05/26/2022 - A proof of consistency and model-selection optimality on the empirical Bayes method
We study the consistency and optimality of the maximum marginal likeliho...

03/15/2012 - A Bayesian Matrix Factorization Model for Relational Data
Relational learning can be used to augment one data source with other co...

12/18/2015 - Deep Poisson Factorization Machines: factor analysis for mapping behaviors in journalist ecosystem
Newsroom in online ecosystem is difficult to untangle. With prevalence o...