A Tensor-EM Method for Large-Scale Latent Class Analysis with Clustering Consistency

03/30/2021 ∙ by Zhenghao Zeng, et al.

Latent class models are powerful statistical modeling tools widely used in the psychological, behavioral, and social sciences. In the modern era of data science, researchers often have access to response data collected from large-scale surveys or assessments, featuring many items (large J) and many subjects (large N). This contrasts with the traditional regime of fixed J and large N. To analyze such large-scale data, it is important to develop methods that are both computationally efficient and theoretically valid. In terms of computation, the conventional EM algorithm for latent class models tends to have a slow algorithmic convergence rate on large-scale data and may converge to a local optimum instead of the maximum likelihood estimator (MLE). Motivated by this, we introduce the tensor decomposition perspective into latent class analysis. Methodologically, we propose to use a moment-based tensor power method in the first step, and then use the obtained estimators as initialization for the EM algorithm in the second step. Theoretically, we establish the clustering consistency of the MLE in assigning subjects to latent classes as N and J both go to infinity. Simulation studies suggest that the proposed tensor-EM pipeline enjoys both good accuracy and computational efficiency for large-scale data. We also apply the proposed method to a personality dataset as an illustration.
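The two-step pipeline described above can be sketched in code. The following is a minimal illustration, not the paper's actual estimator: it simulates a small latent class model with binary items, shows the core tensor power update on the empirical third-moment tensor (the full moment-based method additionally whitens the tensor and deflates recovered components), and then runs the standard EM updates for the latent class likelihood. All variable names, sizes, and the random initialization for EM are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small latent class model: K classes, J binary items.
# (Sizes and parameter values are illustrative, not from the paper.)
N, J, K = 2000, 10, 2
true_pi = np.array([0.6, 0.4])                    # class proportions
true_theta = np.array([[0.8] * 5 + [0.2] * 5,     # item-response
                       [0.2] * 5 + [0.8] * 5])    # probabilities per class
z = rng.choice(K, size=N, p=true_pi)              # latent class labels
X = (rng.random((N, J)) < true_theta[z]).astype(float)

# Step 1 (sketch): tensor power iteration on the empirical third moment
# T = E[x (x) x (x) x].  The full moment-based estimator also whitens T and
# deflates recovered components; here we only show the core power update
# v <- T(I, v, v) / ||T(I, v, v)||.
T = np.einsum('ni,nj,nk->ijk', X, X, X) / N
v = rng.standard_normal(J)
v /= np.linalg.norm(v)
for _ in range(50):
    v = np.einsum('ijk,j,k->i', T, v, v)
    v /= np.linalg.norm(v)

# Step 2: EM for the latent class likelihood.  In the paper's pipeline the
# initialization comes from the tensor step; a simple random start is used
# here for brevity.
pi = np.full(K, 1.0 / K)
theta = rng.uniform(0.3, 0.7, size=(K, J))
for _ in range(200):
    # E-step: posterior responsibilities P(z_n = k | x_n), in log space
    logp = X @ np.log(theta).T + (1 - X) @ np.log1p(-theta).T + np.log(pi)
    logp -= logp.max(axis=1, keepdims=True)
    resp = np.exp(logp)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: closed-form updates for proportions and item probabilities
    pi = resp.mean(axis=0)
    theta = np.clip(resp.T @ X / resp.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)

pred = resp.argmax(axis=1)  # hard clustering of subjects into classes
```

With the well-separated classes simulated above, the EM step typically recovers the class assignments (up to label permutation) with high accuracy, which is the clustering-consistency behavior the paper studies as N and J grow.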


