Gradient based block coordinate descent algorithms for joint approximate diagonalization of matrices

09/28/2020
by Jianze Li, et al.

In this paper, we propose a gradient-based block coordinate descent (BCD-G) framework to solve the joint approximate diagonalization of matrices defined on the product of the complex Stiefel manifold and the special linear group. Instead of selecting the blocks in a cyclic fashion, we choose the block to update based on the Riemannian gradient. To update the first block variable, which lies in the complex Stiefel manifold, we use the well-known line search descent method. To update the second block variable, which lies in the special linear group, we construct two classes of updates, Jacobi-GLU and Jacobi-GLQ, based on four different kinds of elementary rotations, and thereby obtain two BCD-G algorithms: BCD-GLU and BCD-GLQ. We establish the weak convergence and global convergence of these two algorithms using the Łojasiewicz gradient inequality, under the assumption that the iterates are bounded. In particular, the problem studied in this paper includes as special cases the well-known joint approximate diagonalization of Hermitian (or complex symmetric) matrices by invertible transformations arising in blind source separation, and in these cases our algorithms specialize to gradient-based Jacobi-type algorithms. All algorithms and convergence results in this paper also apply to the real case.
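
To make the non-cyclic block selection concrete, below is a minimal NumPy/SciPy sketch of a gradient-based Jacobi-type scheme for the simplest related setting: joint approximate diagonalization of real symmetric matrices by a single orthogonal matrix. This is not the paper's BCD-GLU or BCD-GLQ algorithm on the product of the complex Stiefel manifold and the special linear group; the off-diagonal cost, the pair-selection rule, and all function names (off_cost, jadiag_gradient_jacobi, givens) are illustrative assumptions. At each step the sketch picks the index pair whose Givens angle has the largest cost derivative at zero, rather than sweeping pairs cyclically, and then minimizes the cost over that single angle by a bounded one-dimensional search.

import numpy as np
from scipy.optimize import minimize_scalar


def off_cost(Bs):
    # Sum of squared off-diagonal entries over all matrices.
    return sum(np.sum(B ** 2) - np.sum(np.diag(B) ** 2) for B in Bs)


def jadiag_gradient_jacobi(As, n_sweeps=50, tol=1e-12):
    # Jointly diagonalize real symmetric matrices As by an orthogonal Q.
    # The rotation pair (i, j) is chosen by the largest derivative of the
    # off-diagonal cost with respect to the Givens angle at theta = 0,
    # i.e. a gradient-based (non-cyclic) block choice.
    n = As[0].shape[0]
    Q = np.eye(n)
    Bs = [np.array(A, dtype=float) for A in As]
    for _ in range(n_sweeps * n * (n - 1) // 2):
        # d f / d theta at theta = 0 equals
        # 4 * sum_l (B_l)_ij * ((B_l)_ii - (B_l)_jj) for the pair (i, j).
        grad = np.zeros((n, n))
        for B in Bs:
            d = np.diag(B)
            grad += 4.0 * B * (d[:, None] - d[None, :])
        i, j = np.unravel_index(np.argmax(np.abs(grad)), grad.shape)
        if abs(grad[i, j]) < tol:
            break

        def givens(theta):
            G = np.eye(n)
            c, s = np.cos(theta), np.sin(theta)
            G[i, i] = G[j, j] = c
            G[i, j], G[j, i] = s, -s
            return G

        def angle_cost(theta):
            G = givens(theta)
            return off_cost([G.T @ B @ G for B in Bs])

        # Minimize the cost over the single Givens angle for the chosen pair.
        res = minimize_scalar(angle_cost, bounds=(-np.pi / 4, np.pi / 4),
                              method="bounded")
        G = givens(res.x)
        Q = Q @ G
        Bs = [G.T @ B @ G for B in Bs]
    return Q, Bs


if __name__ == "__main__":
    # Quick check on synthetic data: matrices that are exactly jointly
    # diagonalizable by a common orthogonal matrix Q0, so the off-diagonal
    # cost should drop to (near) zero.
    rng = np.random.default_rng(0)
    n, L = 5, 4
    Q0, _ = np.linalg.qr(rng.standard_normal((n, n)))
    As = [Q0 @ np.diag(rng.standard_normal(n)) @ Q0.T for _ in range(L)]
    Q, Bs = jadiag_gradient_jacobi(As)
    print("initial off-cost:", off_cost(As))
    print("final   off-cost:", off_cost(Bs))

Choosing the pair by the angle derivative at zero mirrors the abstract's idea of selecting the block with the larger Riemannian gradient, while the bounded one-dimensional angle search stands in for the closed-form rotation updates used by classical Jacobi-type methods.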


Related research

10/05/2021 · Jacobi-type algorithms for homogeneous polynomial optimization on Stiefel manifolds with applications to tensor approximations
In this paper, we mainly study the gradient based Jacobi-type algorithms...

06/27/2012 · Scaling Up Coordinate Descent Algorithms for Large ℓ_1 Regularization Problems
We present a generic framework for parallel coordinate descent (CD) algo...

12/16/2019 · On the convergence of Jacobi-type algorithms for Independent Component Analysis
Jacobi-type algorithms for simultaneous approximate diagonalization of s...

04/19/2013 · Inexact Coordinate Descent: Complexity and Preconditioning
In this paper we consider the problem of minimizing a convex function us...

01/13/2020 · Accelerating Block Coordinate Descent for Nonnegative Tensor Factorization
This paper is concerned with improving the empirical convergence speed o...

11/20/2017 · Block-Cyclic Stochastic Coordinate Descent for Deep Neural Networks
We present a stochastic first-order optimization algorithm, named BCSC, ...

04/16/2019 · Global Error Bounds and Linear Convergence for Gradient-Based Algorithms for Trend Filtering and ℓ_1-Convex Clustering
We propose a class of first-order gradient-type optimization algorithms ...
