
Dual Optimization for Kolmogorov Model Learning Using Enhanced Gradient Descent

07/11/2021
by Qiyou Duan, et al.
City University of Hong Kong
Télécom ParisTech
The University of Kansas

Data representation techniques have made a substantial contribution to advancing data processing and machine learning (ML). Previous representation techniques focused on improving predictive power, but they perform rather poorly in terms of interpretability, i.e., extracting underlying insights from the data. Recently, the Kolmogorov model (KM) was studied as an interpretable and predictable representation approach to learning the underlying probabilistic structure of a set of random variables. The existing KM learning algorithms, based on semi-definite relaxation with randomization (SDRwR) or discrete monotonic optimization (DMO), are of limited utility in big data applications because they do not scale well computationally. In this paper, we propose a computationally scalable KM learning algorithm based on regularized dual optimization combined with an enhanced gradient descent (GD) method. To make our method more scalable to large-dimensional problems, we propose two acceleration schemes, namely, an eigenvalue decomposition (EVD) elimination strategy and a proximal EVD algorithm. Furthermore, a thresholding technique, which exploits the approximation error analysis and leverages the normalized Minkowski ℓ_1-norm and its bounds, is provided for selecting the number of iterations of the proximal EVD algorithm. When applied to big data applications, the proposed method is demonstrated to achieve comparable training/prediction performance with significantly reduced computational complexity, roughly two orders of magnitude improvement in time overhead compared to the existing KM learning algorithms. Furthermore, the accuracy of logical relation mining for interpretability using the proposed KM learning algorithm is shown to exceed 80%.
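To give a concrete feel for the kind of computation involved, the following is a minimal illustrative sketch, not the authors' algorithm: it shows a generic projected gradient descent on a matrix variable, where the projection onto the positive semidefinite cone is performed by a full eigenvalue decomposition per iteration. This repeated EVD is exactly the kind of cost that acceleration schemes such as EVD elimination or a proximal (truncated) EVD aim to reduce. All function and variable names here are hypothetical and chosen only for illustration.

```python
import numpy as np

def psd_project(M):
    """Project a symmetric matrix onto the PSD cone via a full EVD
    (the per-iteration cost that EVD-based acceleration schemes target)."""
    w, V = np.linalg.eigh(M)
    w = np.clip(w, 0.0, None)          # keep only nonnegative eigenvalues
    return (V * w) @ V.T

def projected_gradient_descent(grad_fn, X0, step=1e-2, n_iters=200):
    """Generic projected GD on a matrix variable X, re-projecting
    onto the PSD cone after each gradient step."""
    X = psd_project(X0)
    for _ in range(n_iters):
        X = psd_project(X - step * grad_fn(X))
    return X

# Toy usage: minimize ||X - A||_F^2 over PSD matrices X.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
A = 0.5 * (A + A.T)                    # symmetrize the target matrix
X_hat = projected_gradient_descent(lambda X: 2.0 * (X - A), np.zeros_like(A))
```

In this toy setting the optimum is simply the PSD part of A; in a dual formulation like the one described above, the gradient would instead come from the regularized dual objective, and the projection/EVD step is where most of the per-iteration time is spent for large dimensions.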
