Sparse Bayesian Learning with Diagonal Quasi-Newton Method For Large Scale Classification

07/17/2021
by Jiahua Luo, et al.

Sparse Bayesian Learning (SBL) constructs an extremely sparse probabilistic model with very competitive generalization. However, SBL needs to invert a large covariance matrix with complexity O(M^3) (M: feature size) to update the regularization priors, which makes it impractical for large problems. SBL suffers from three issues: 1) inverting the covariance matrix may yield singular solutions in some cases, which prevents SBL from converging; 2) it scales poorly to problems with high-dimensional feature spaces or large data sizes; 3) it easily runs out of memory on large-scale data. This paper addresses these issues with a newly proposed diagonal Quasi-Newton (DQN) method for SBL, called DQN-SBL, in which the inversion of the large covariance matrix is avoided so that both the complexity and the memory storage are reduced to O(M). DQN-SBL is thoroughly evaluated on non-linear classification and linear feature selection using benchmark datasets of various sizes. Experimental results verify that DQN-SBL achieves competitive generalization with a very sparse model and scales well to large-scale problems.
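The abstract only names the idea of replacing the full covariance inversion with a diagonal quasi-Newton scheme; the sketch below is a minimal illustration of that complexity gap, not the paper's actual update rule. It contrasts a classic SBL/RVM-style hyperparameter update, which inverts the full M x M posterior covariance, with a hypothetical covariance-free variant that keeps only diagonal statistics and a diagonally scaled gradient step, so per-iteration cost and memory drop from O(M^3) and O(M^2) to O(M). The function names and the specific diagonal approximation are illustrative assumptions.

import numpy as np

def sbl_full(Phi, t, beta=1.0, n_iter=50):
    # Classic SBL / RVM-style evidence update: inverts the full M x M
    # posterior covariance every iteration -- O(M^3) time, O(M^2) memory.
    _, M = Phi.shape
    alpha = np.ones(M)
    for _ in range(n_iter):
        Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)  # O(M^3)
        mu = beta * Sigma @ Phi.T @ t
        gamma = 1.0 - alpha * np.diag(Sigma)
        alpha = gamma / (mu ** 2 + 1e-12)
    return mu, alpha

def sbl_diag(Phi, t, beta=1.0, n_iter=50):
    # Covariance-free stand-in: keeps only diagonal second-order statistics
    # and takes a diagonally scaled gradient step, so per-iteration cost and
    # memory beyond one data pass are O(M). This is NOT the DQN-SBL update
    # from the paper, only an illustration of why avoiding the full inverse
    # helps at scale.
    _, M = Phi.shape
    alpha = np.ones(M)
    mu = np.zeros(M)
    col_sq = beta * np.sum(Phi ** 2, axis=0)        # beta * diag(Phi^T Phi)
    for _ in range(n_iter):
        sigma_diag = 1.0 / (alpha + col_sq)         # diagonal surrogate for Sigma
        grad = beta * Phi.T @ (t - Phi @ mu) - alpha * mu
        mu = mu + sigma_diag * grad                 # diagonally preconditioned step
        gamma = 1.0 - alpha * sigma_diag
        alpha = gamma / (mu ** 2 + 1e-12)
    return mu, alpha

In the diagonal variant, sigma_diag plays the role the full posterior covariance plays in the classic update: it both scales the step on the posterior mean and supplies the gamma quantities used to re-estimate the priors alpha, without ever forming or inverting an M x M matrix.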

