Perturbed factor analysis: Improving generalizability across studies

10/07/2019
by Arkaprava Roy, et al.

Factor analysis is routinely used for dimensionality reduction. However, a major issue is 'brittleness', in which substantially different factors can be obtained when analyzing similar datasets. Factor models have been developed for multi-study data using additive expansions that incorporate common and study-specific factors. However, allowing study-specific factors runs counter to the goal of producing a single set of factors that holds across studies. As an alternative, we propose a class of Perturbed Factor Analysis (PFA) models that assume a common factor structure across studies after perturbing the data via multiplication by a study-specific matrix. Bayesian inference algorithms can be easily modified in this case by using a matrix normal hierarchical model for the perturbation matrices. The resulting model is just as flexible as current approaches in allowing arbitrarily large differences across studies, but has substantial advantages that we illustrate in simulation studies and an application to NHANES data. We additionally show advantages of PFA in single-study analyses in which each individual is assigned their own perturbation matrix, including reduced generalization error and improved identifiability.
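To make the data-generating idea concrete, below is a minimal simulation sketch of a perturbed factor model, under one assumed parameterization: each study's observations are a shared factor model pushed through a study-specific perturbation matrix centered on the identity. All names (perturb_sd, n_studies, etc.) and the exact form of the perturbation are illustrative assumptions, not the paper's specification.

```python
# Hypothetical sketch of multi-study data with a shared factor structure
# plus study-specific perturbation matrices near the identity.
import numpy as np

rng = np.random.default_rng(0)

p, k = 10, 3                  # observed dimension, number of common factors
n_studies, n_per = 4, 200     # number of studies, observations per study
Lambda = rng.normal(size=(p, k))   # shared loading matrix
noise_sd = 0.5
perturb_sd = 0.05             # how far each study departs from the shared model

studies = []
for s in range(n_studies):
    # Study-specific perturbation: identity plus small Gaussian noise,
    # mimicking a hierarchical prior centered on the identity matrix.
    Q_s = np.eye(p) + perturb_sd * rng.normal(size=(p, p))
    eta = rng.normal(size=(n_per, k))             # common latent factors
    eps = noise_sd * rng.normal(size=(n_per, p))  # idiosyncratic noise
    X_common = eta @ Lambda.T + eps               # shared factor structure
    X_s = X_common @ Q_s.T                        # perturbed observed data
    studies.append(X_s)

# Sanity check: as perturb_sd -> 0, every study shares the covariance
# Lambda @ Lambda.T + noise_sd**2 * I.
pooled_cov = np.cov(np.vstack(studies), rowvar=False)
print(pooled_cov.shape)
```

With perturb_sd small, pooled estimation recovers a single loading matrix; larger values let individual studies deviate arbitrarily, which is the flexibility the abstract attributes to PFA.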

