Gradient based block coordinate descent algorithms for joint approximate diagonalization of matrices

by Jianze Li et al.

In this paper, we propose a gradient-based block coordinate descent (BCD-G) framework to solve the joint approximate diagonalization of matrices defined on the product of the complex Stiefel manifold and the special linear group. Instead of cycling through the blocks, we select the block to optimize based on the Riemannian gradient. To update the first block variable, in the complex Stiefel manifold, we use the well-known line search descent method. To update the second block variable, in the special linear group, we construct two classes of updates, Jacobi-GLU and Jacobi-GLQ, based on four different kinds of elementary rotations, and thereby obtain two BCD-G algorithms: BCD-GLU and BCD-GLQ. We establish the weak convergence and global convergence of both algorithms using the Łojasiewicz gradient inequality, under the assumption that the iterates are bounded. In particular, the problem we study includes, as special cases, the well-known joint approximate diagonalization of Hermitian (or complex symmetric) matrices by invertible transformations arising in blind source separation, for which our algorithms specialize to gradient-based Jacobi-type algorithms. All the algorithms and convergence results in this paper also apply to the real case.
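The key departure from cyclic BCD described above is the block-selection rule: at each iteration, the block with the larger gradient norm is updated. The following toy sketch (not the authors' algorithm; it uses plain Euclidean gradients on an unconstrained separable least-squares problem, whereas the paper works with Riemannian gradients on the Stiefel manifold and the special linear group) illustrates that selection rule combined with a line search step:

```python
import numpy as np

def bcd_g_toy(A, b, C, d, iters=200):
    """Toy gradient-based BCD: minimize ||Ax - b||^2 + ||Cy - d||^2
    over two blocks x and y, updating the block whose gradient norm
    is larger (the BCD-G selection rule, here in a Euclidean setting)."""
    x = np.zeros(A.shape[1])
    y = np.zeros(C.shape[1])
    for _ in range(iters):
        gx = 2 * A.T @ (A @ x - b)   # gradient w.r.t. block x
        gy = 2 * C.T @ (C @ y - d)   # gradient w.r.t. block y
        if np.linalg.norm(gx) >= np.linalg.norm(gy):
            # exact line search along -gx for a quadratic objective:
            # t = ||g||^2 / (2 ||A g||^2)
            t = gx @ gx / (2 * np.sum((A @ gx) ** 2) + 1e-12)
            x -= t * gx
        else:
            t = gy @ gy / (2 * np.sum((C @ gy) ** 2) + 1e-12)
            y -= t * gy
    return x, y

rng = np.random.default_rng(0)
A, C = rng.standard_normal((8, 4)), rng.standard_normal((6, 3))
b, d = rng.standard_normal(8), rng.standard_normal(6)
x, y = bcd_g_toy(A, b, C, d)
```

Because each update makes at least as much progress as a cyclic sweep would on its worst block, this greedy rule is a natural fit for the convergence analysis via the Łojasiewicz gradient inequality.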






