# Tensor Balancing on Statistical Manifold

We solve tensor balancing, rescaling an Nth order nonnegative tensor by multiplying N tensors of order N − 1 so that every fiber sums to one. This generalizes a fundamental process of matrix balancing used to compare matrices in a wide range of applications from biology to economics. We present an efficient balancing algorithm with quadratic convergence using Newton's method and show in numerical experiments that the proposed algorithm is several orders of magnitude faster than existing ones. To theoretically prove the correctness of the algorithm, we model tensors as probability distributions in a statistical manifold and realize tensor balancing as projection onto a submanifold. The key to our algorithm is that the gradient of the manifold, used as a Jacobian matrix in Newton's method, can be analytically obtained using the Möbius inversion formula, the heart of combinatorial mathematics. Our model is not limited to tensor balancing; it has wide applicability as it includes various statistical and machine learning models such as weighted DAGs and Boltzmann machines.


## 1 Introduction

Matrix balancing is the problem of rescaling a given square nonnegative matrix $A \in \mathbb{R}^{n \times n}_{\geq 0}$ to a *doubly stochastic matrix* $RAS$, where every row and column sums to one, by multiplying two diagonal matrices $R$ and $S$. This is a fundamental process for analyzing and comparing matrices in a wide range of applications, including input-output analysis in economics, called the RAS approach [Parikh, 1979, Miller and Blair, 2009, Lahr and de Mesnard, 2004], seat assignments in elections [Balinski, 2008, Akartunalı and Knight, 2016], Hi-C data analysis [Rao et al., 2014, Wu and Michor, 2016], the Sudoku puzzle [Moon et al., 2009], and the optimal transportation problem [Cuturi, 2013, Frogner et al., 2015, Solomon et al., 2015]. An excellent review of this theory and its applications is given by Idel [2016].

The standard matrix balancing algorithm is the Sinkhorn-Knopp algorithm [Sinkhorn, 1964, Sinkhorn and Knopp, 1967, Marshall and Olkin, 1968, Knight, 2008], a special case of Bregman’s balancing method [Lamond and Stewart, 1981] that iterates rescaling of each row and column until convergence. The algorithm is widely used in the above applications due to its simple implementation and theoretically guaranteed convergence. However, the algorithm converges linearly [Soules, 1991], which is prohibitively slow for recently emerging large and sparse matrices. Although Livne and Golub [2004] and Knight and Ruiz [2013] tried to achieve faster convergence by approximating each step of Newton’s method, the exact Newton’s method with quadratic convergence has not been intensively studied yet.

Another open problem is tensor balancing, which is a generalization of balancing from matrices to higher-order multidimensional arrays, or tensors. The task is to rescale an Nth order nonnegative tensor to a *multistochastic tensor*, in which every fiber sums to one, by multiplying N tensors of order N − 1. There are some results about mathematical properties of multistochastic tensors [Cui et al., 2014, Chang et al., 2016, Ahmed et al., 2003]. However, no tensor balancing algorithm with guaranteed convergence that transforms a given tensor to a multistochastic tensor has been reported until now.

Here we show that Newton's method with quadratic convergence can be applied to tensor balancing while avoiding solving a linear system on the full tensor. Our strategy is to realize matrix and tensor balancing as projection onto a dually flat Riemannian submanifold (Figure 1) of a statistical manifold, which is known to be the essential structure for probability distributions in information geometry [Amari, 2016]. Using a partially ordered outcome space, we generalize the log-linear model [Agresti, 2012] used to model the higher-order combinations of binary variables [Amari, 2001, Ganmor et al., 2011, Nakahara and Amari, 2002, Nakahara et al., 2003], which allows us to model tensors as probability distributions in the statistical manifold. The remarkable property of our model is that the gradient of the manifold can be analytically computed using the Möbius inversion formula [Rota, 1964], the heart of combinatorial mathematics [Ito, 1993], which enables us to directly obtain the Jacobian matrix in Newton's method. Moreover, we show that a large portion of the entries of a tensor are invariant with respect to one of the two coordinate systems of the statistical manifold, which keeps the number of equations in Newton's method small.

The remainder of this paper is organized as follows: We begin with a low-level description of our matrix balancing algorithm in Section 2 and demonstrate its efficiency in numerical experiments in Section 3. To guarantee the correctness of the algorithm and extend it to tensor balancing, we provide theoretical analysis in Section 4. In Section 4.1, we introduce a generalized log-linear model associated with a partial order structured outcome space, followed by introducing the dually flat Riemannian structure in Section 4.2. In Section 4.3, we show how to use Newton’s method to compute projection of a probability distribution onto a submanifold. Finally, we formulate the matrix and tensor balancing problem in Section 5 and summarize our contributions in Section 6.

## 2 The Matrix Balancing Algorithm

Given a nonnegative square matrix $A = (a_{ij}) \in \mathbb{R}^{n \times n}_{\geq 0}$, the task of matrix balancing is to find diagonal matrices $R$ and $S$ that satisfy

$$(RAS)\mathbf{1} = \mathbf{1}, \qquad (RAS)^{\mathsf{T}}\mathbf{1} = \mathbf{1}, \tag{1}$$

where $\mathbf{1} = (1, 1, \dots, 1)^{\mathsf{T}}$. The balanced matrix $A' = RAS$ is called doubly stochastic, in which each entry is nonnegative and all the rows and columns sum to one. The most popular algorithm is the Sinkhorn-Knopp algorithm, which repeats updating $R$ and $S$ as $R = \mathrm{diag}(AS\mathbf{1})^{-1}$ and $S = \mathrm{diag}(A^{\mathsf{T}}R\mathbf{1})^{-1}$. We denote $RAS$ by $A' = (a'_{ij})$ hereafter.
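The alternating row/column rescaling can be sketched in a few lines of Python (a minimal illustration, not the paper's C++ implementation; the matrix, tolerance, and iteration cap below are arbitrary choices):

```python
def sinkhorn_knopp(A, tol=1e-12, max_iter=10_000):
    """Balance a positive square matrix by alternately normalizing rows and
    columns (Sinkhorn-Knopp); returns diag(r) @ A @ diag(s)."""
    n = len(A)
    s = [1.0] * n
    for _ in range(max_iter):
        # r_i <- 1 / (A s)_i so that every row of diag(r) A diag(s) sums to 1
        r = [1.0 / sum(A[i][j] * s[j] for j in range(n)) for i in range(n)]
        # s_j <- 1 / (A^T r)_j so that every column sums to 1
        s = [1.0 / sum(A[i][j] * r[i] for i in range(n)) for j in range(n)]
        B = [[r[i] * A[i][j] * s[j] for j in range(n)] for i in range(n)]
        if all(abs(sum(row) - 1) < tol for row in B) and \
           all(abs(sum(B[i][j] for i in range(n)) - 1) < tol for j in range(n)):
            break
    return B
```

Each sweep costs $O(n^2)$, and the number of sweeps grows quickly with the matrix size, which is exactly the linear convergence that the proposed method avoids.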

In our algorithm, instead of directly updating $R$ and $S$, we update two parameters $\theta$ and $\eta$ defined as

$$\log p_{ij} = \sum_{i' \leq i} \sum_{j' \leq j} \theta_{i'j'}, \qquad \eta_{ij} = \sum_{i' \geq i} \sum_{j' \geq j} p_{i'j'} \tag{2}$$

for each $i, j \in \{1, \dots, n\}$, where we normalized entries as $p_{ij} = a_{ij} / \sum_{ij} a_{ij}$ so that $\sum_{ij} p_{ij} = 1$. We assume for simplicity that each entry is strictly larger than zero. The assumption will be removed in Section 5.

The key to our approach is that we update $\theta^{(t)}_{ij}$ with $i = 1$ or $j = 1$ by Newton's method at each iteration $t = 1, 2, \dots$ while fixing $\theta^{(t)}_{ij}$ with $i, j \neq 1$, so that $\eta^{(t)}_{ij}$ satisfies the following condition (Figure 2):

$$\eta^{(t)}_{i1} = \frac{n - i + 1}{n}, \qquad \eta^{(t)}_{1j} = \frac{n - j + 1}{n}.$$

Note that the rows and columns sum not to $1$ but to $1/n$ due to the normalization. The update formula is described as

$$\begin{bmatrix} \theta^{(t+1)}_{11} \\ \theta^{(t+1)}_{12} \\ \vdots \\ \theta^{(t+1)}_{1n} \\ \theta^{(t+1)}_{21} \\ \vdots \\ \theta^{(t+1)}_{n1} \end{bmatrix} = \begin{bmatrix} \theta^{(t)}_{11} \\ \theta^{(t)}_{12} \\ \vdots \\ \theta^{(t)}_{1n} \\ \theta^{(t)}_{21} \\ \vdots \\ \theta^{(t)}_{n1} \end{bmatrix} - J^{-1} \begin{bmatrix} \eta^{(t)}_{11} - (n - 1 + 1)/n \\ \eta^{(t)}_{12} - (n - 2 + 1)/n \\ \vdots \\ \eta^{(t)}_{1n} - (n - n + 1)/n \\ \eta^{(t)}_{21} - (n - 2 + 1)/n \\ \vdots \\ \eta^{(t)}_{n1} - (n - n + 1)/n \end{bmatrix}, \tag{3}$$

where $J$ is the Jacobian matrix given as

$$J_{(ij)(i'j')} = \frac{\partial \eta^{(t)}_{ij}}{\partial \theta^{(t)}_{i'j'}} = \eta^{(t)}_{\max\{i, i'\} \max\{j, j'\}} - n^2\, \eta^{(t)}_{ij}\, \eta^{(t)}_{i'j'}, \tag{4}$$

which is derived from our theoretical result in Theorem 3. Since $J$ is a $(2n - 1) \times (2n - 1)$ matrix, the time complexity of each update is $O(n^3)$, which is needed to compute the inverse of $J$.

After updating $\theta^{(t)}$ to $\theta^{(t+1)}$, we can compute $p^{(t+1)}$ and $\eta^{(t+1)}$ by Equation (2). Since this update does not ensure the condition $\sum_{ij} p^{(t+1)}_{ij} = 1$, we again update $\theta^{(t+1)}_{11}$ as

$$\theta^{(t+1)}_{11} = \theta^{(t+1)}_{11} - \log \sum_{ij} p^{(t+1)}_{ij}$$

and recompute $p^{(t+1)}_{ij}$ and $\eta^{(t+1)}_{ij}$ for each $i, j$.

By iterating the above update process in Equation (3) until convergence, the rescaled matrix $(n\, p^{(t)}_{ij})$ becomes doubly stochastic.
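To make the coordinate-wise view concrete, here is a small Python/NumPy sketch of Newton balancing in the $(\theta, \eta)$ coordinates. It is a simplified variant, not the paper's implementation: it renormalizes $p$ at every step and therefore drops the $\theta_{11}$ equation, using the Jacobian $\partial \eta_{ij} / \partial \theta_{i'j'} = \eta_{\max\{i,i'\}\max\{j,j'\}} - \eta_{ij}\eta_{i'j'}$ that this renormalization induces, over the remaining $2n - 2$ first-row and first-column parameters:

```python
import numpy as np

def balance_newton(A, tol=1e-10, max_iter=50):
    """Matrix balancing by Newton's method on the first-row/column theta
    parameters; theta_{ij} with i, j >= 2 is invariant under rescaling."""
    n = A.shape[0]
    logp = np.log(A / A.sum())
    idx = [(i, 0) for i in range(1, n)] + [(0, j) for j in range(1, n)]
    for _ in range(max_iter):
        p = np.exp(logp - logp.max())
        p /= p.sum()
        logp = np.log(p)
        # eta[i, j] = sum of p over the lower-right block p[i:, j:]
        eta = np.flip(np.flip(np.flip(np.flip(p, 0), 1).cumsum(0).cumsum(1), 0), 1)
        # residuals of the balancing targets eta_{i1} = (n - i)/n (0-indexed)
        r = np.array([eta[i, j] - (n - max(i, j)) / n for (i, j) in idx])
        if np.abs(r).max() < tol:
            break
        J = np.array([[eta[max(a[0], b[0]), max(a[1], b[1])] - eta[a] * eta[b]
                       for b in idx] for a in idx])
        # theta <- theta - J^{-1} (eta - target); subtracting d from theta_{i1}
        # shifts log p on all rows >= i (and analogously for columns)
        for (i, j), d in zip(idx, np.linalg.solve(J, r)):
            if j == 0:
                logp[i:, :] -= d
            else:
                logp[:, j:] -= d
    p = np.exp(logp)
    return n * p / p.sum()
```

On benign positive matrices this reaches machine precision within a handful of iterations, in contrast to the many row/column sweeps Sinkhorn-Knopp may need.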

## 3 Numerical Experiments

We evaluate the efficiency of our algorithm compared to two prominent balancing methods, the standard Sinkhorn-Knopp algorithm [Sinkhorn, 1964] and the state-of-the-art algorithm BNEWT [Knight and Ruiz, 2013], which uses Newton's method-like iterations with conjugate gradients. All experiments were conducted on Amazon Linux AMI release 2016.09 with a single core of a 2.3 GHz Intel Xeon CPU E5-2686 v4 and 256 GB of memory. All methods were implemented in C++ with the Eigen library and compiled with gcc 4.8.3. An implementation of our algorithms for matrices and third order tensors is available at: https://github.com/mahito-sugiyama/newton-balancing. We have carefully implemented BNEWT by directly translating the MATLAB code provided in [Knight and Ruiz, 2013] into C++ with the Eigen library for a fair comparison, and used the default parameters. We measured the residual of a matrix $A' = (a'_{ij})$ by the squared norm $\lVert A'\mathbf{1} - \mathbf{1} \rVert_2^2 + \lVert A'^{\mathsf{T}}\mathbf{1} - \mathbf{1} \rVert_2^2$, where each entry $a'_{ij}$ is obtained as $n p_{ij}$ in our algorithm, and ran each of the three algorithms until the residual fell below the tolerance threshold.
Hessenberg Matrix. The first set of experiments used a Hessenberg matrix, which has been a standard benchmark for matrix balancing [Parlett and Landis, 1982, Knight and Ruiz, 2013]. Each entry of an $n \times n$ Hessenberg matrix is given as $a_{ij} = 0$ if $j < i - 1$ and $a_{ij} = 1$ otherwise. We varied the size $n$ and measured the running time (in seconds) and the number of iterations of each method.

Results are plotted in Figure 3. Our balancing algorithm with Newton's method (plotted in blue in the figures) is clearly the fastest: it is three to five orders of magnitude faster than the standard Sinkhorn-Knopp algorithm (plotted in red). Although the BNEWT algorithm (plotted in green) is competitive for small $n$, it suddenly fails to converge on larger matrices, which is consistent with the results in the original paper [Knight and Ruiz, 2013], where no result is reported for this setting on the same matrix. Moreover, the number of steps our method needs until convergence is about three and seven orders of magnitude smaller than that of BNEWT and Sinkhorn-Knopp, respectively, at the largest size.

To see the behavior of the rate of convergence in detail, we plot the convergence graph in Figure 4 for a fixed matrix size, where we observe the slow convergence rate of the Sinkhorn-Knopp algorithm and the unstable convergence of the BNEWT algorithm, which contrasts with our quick convergence.

Trefethen Matrix. Next, we collected a set of Trefethen matrices from a matrix collection website, which are sparse nonnegative matrices with prime numbers on the diagonal. Results are plotted in Figure 5, where we observe the same trend as before: our algorithm is the fastest, about four orders of magnitude faster than the Sinkhorn-Knopp algorithm. Note that the larger matrices in the collection do not have total support, which is a necessary condition for matrix balancing [Knight and Ruiz, 2013], while the BNEWT algorithm fails to converge on some of the remaining matrices.

## 4 Theoretical Analysis

In the following, we provide theoretical support to our algorithm by formulating the problem as a projection within a statistical manifold, in which a matrix corresponds to an element, that is, a probability distribution, in the manifold.

We show that a balanced matrix forms a submanifold and matrix balancing is projection of a given distribution onto the submanifold, where the Jacobian matrix in Equation (4) is derived from the gradient of the manifold.

### 4.1 Formulation

We introduce our log-linear probabilistic model, where the outcome space is a partially ordered set, or a poset [Gierz et al., 2003]. We prepare basic notations and the key mathematical tool for posets, the Möbius inversion formula, followed by formulating the log-linear model.

#### 4.1.1 Möbius Inversion

A poset $(S, \leq)$, the pair of a set $S$ of elements and a partial order $\leq$ on $S$, is a fundamental structured space in computer science. A partial order "$\leq$" is a relation between elements in $S$ that satisfies the following three properties: for all $x, y, z \in S$, (1) $x \leq x$ (reflexivity), (2) $x \leq y$, $y \leq x$ $\Rightarrow$ $x = y$ (antisymmetry), and (3) $x \leq y$, $y \leq z$ $\Rightarrow$ $x \leq z$ (transitivity). In what follows, $S$ is always finite and includes the least element (bottom) $\bot \in S$; that is, $\bot \leq x$ for all $x \in S$. We denote $S \setminus \{\bot\}$ by $S^+$.

Rota [1964] introduced the Möbius inversion formula on posets by generalizing the inclusion-exclusion principle. Let $\zeta \colon S \times S \to \{0, 1\}$ be the *zeta function* defined as

$$\zeta(s, x) = \begin{cases} 1 & \text{if } s \leq x, \\ 0 & \text{otherwise.} \end{cases}$$

The *Möbius function* $\mu \colon S \times S \to \mathbb{Z}$ serves as the inverse of $\zeta$ and is inductively defined for all $x, y$ with $x \leq y$ as

$$\mu(x, y) = \begin{cases} 1 & \text{if } x = y, \\ -\sum_{x \leq s < y} \mu(x, s) & \text{if } x < y, \\ 0 & \text{otherwise.} \end{cases}$$

From the definition, it follows that

$$\sum_{s \in S} \zeta(s, y)\, \mu(x, s) = \sum_{x \leq s \leq y} \mu(x, s) = \delta_{xy}, \qquad \sum_{s \in S} \zeta(x, s)\, \mu(s, y) = \sum_{x \leq s \leq y} \mu(s, y) = \delta_{xy} \tag{5}$$

with the Kronecker delta $\delta$ such that $\delta_{xy} = 1$ if $x = y$ and $\delta_{xy} = 0$ otherwise. Then for any functions $f$, $g$, and $h$ with the domain $S$ such that

$$g(x) = \sum_{s \in S} \zeta(s, x)\, f(s) = \sum_{s \leq x} f(s), \qquad h(x) = \sum_{s \in S} \zeta(x, s)\, f(s) = \sum_{s \geq x} f(s),$$

$f$ is uniquely recovered with the Möbius function:

$$f(x) = \sum_{s \in S} \mu(s, x)\, g(s), \qquad f(x) = \sum_{s \in S} \mu(x, s)\, h(s).$$

This is called the Möbius inversion formula and is at the heart of enumerative combinatorics [Ito, 1993].
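The inductive definition of $\mu$ and the inversion formula can be checked directly on a small poset. The sketch below is our illustration, not from the paper; it uses the divisors of 12 ordered by divisibility:

```python
from functools import lru_cache

S = [1, 2, 3, 4, 6, 12]          # divisors of 12, bottom element is 1
leq = lambda x, y: y % x == 0    # partial order: x <= y iff x divides y

@lru_cache(maxsize=None)
def mu(x, y):
    """Moebius function of the divisor poset, defined inductively."""
    if x == y:
        return 1
    if leq(x, y):
        return -sum(mu(x, s) for s in S if leq(x, s) and leq(s, y) and s != y)
    return 0

f = {1: 5, 2: -1, 3: 2, 4: 0, 6: 3, 12: 1}             # arbitrary f on S
g = {x: sum(f[s] for s in S if leq(s, x)) for x in S}  # g(x) = sum_{s<=x} f(s)

# Moebius inversion recovers f from its downward sums g
recovered = {x: sum(mu(s, x) * g[s] for s in S) for x in S}
assert recovered == f
```

For the divisor order, $\mu(x, y)$ coincides with the number-theoretic Möbius function of the quotient $y/x$, e.g. $\mu(1, 12) = 0$ and $\mu(2, 12) = 1$.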

#### 4.1.2 Log-Linear Model on Posets

We consider a *probability vector* $p$ on $S$ that gives a discrete probability distribution with the outcome space $S$. A probability vector is treated as a mapping $p \colon S \to (0, 1)$ such that $\sum_{x \in S} p(x) = 1$, where every entry $p(x)$ is assumed to be strictly larger than zero.

Using the zeta and the Möbius functions, let us introduce two mappings $\theta \colon S \to \mathbb{R}$ and $\eta \colon S \to \mathbb{R}$ as

$$\theta(x) = \sum_{s \in S} \mu(s, x) \log p(s), \tag{6}$$

$$\eta(x) = \sum_{s \in S} \zeta(x, s)\, p(s) = \sum_{s \geq x} p(s). \tag{7}$$

From the Möbius inversion formula, we have

$$\log p(x) = \sum_{s \in S} \zeta(s, x)\, \theta(s) = \sum_{s \leq x} \theta(s), \tag{8}$$

$$p(x) = \sum_{s \in S} \mu(x, s)\, \eta(s). \tag{9}$$

They are a generalization of the log-linear model [Agresti, 2012] that gives the probability $p(x)$ of an $n$-dimensional binary vector $x = (x_1, \dots, x_n)$ as

$$\log p(x) = \sum_i \theta_i x_i + \sum_{i < j} \theta_{ij} x_i x_j + \dots + \theta_{1 \dots n}\, x_1 x_2 \cdots x_n - \psi(\theta),$$

where $\theta$ is a parameter vector, $\psi(\theta)$ is a normalizer, and $\eta$ represents the expectation of variable combinations such that

$$\begin{aligned} \eta_i &= \mathbf{E}[x_i] = \Pr(x_i = 1), \\ \eta_{ij} &= \mathbf{E}[x_i x_j] = \Pr(x_i = x_j = 1), \quad i < j, \\ &\;\;\vdots \\ \eta_{1 \dots n} &= \mathbf{E}[x_1 x_2 \cdots x_n] = \Pr(x_1 = \dots = x_n = 1). \end{aligned}$$

They coincide with Equations (8) and (7) when we let $S = 2^V$ with $V = \{1, \dots, n\}$, treat each $x \in S$ as the set of indices of the "1" entries of the binary vector, and take the order $\leq$ as the inclusion relationship, that is, $x \leq y$ if and only if $x \subseteq y$. Nakahara et al. [2006] have pointed out that $\theta$ can be computed from $\eta$ using the inclusion-exclusion principle in the log-linear model. We exploit this combinatorial property of the log-linear model using the Möbius inversion formula on posets and extend the log-linear model from the power set $2^V$ to any kind of posets $(S, \leq)$. Sugiyama et al. [2016] studied a relevant log-linear model, but the relationship with the Möbius inversion formula has not been analyzed yet.
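As an illustration (ours, not from the paper), the $\theta$ and $\eta$ coordinates of a small distribution on the Boolean lattice $2^{\{1,2\}}$ can be computed directly from Equations (6) and (7), and the Möbius inversion recovers $\log p$ and $p$ as in Equations (8) and (9):

```python
from itertools import combinations
from math import log, isclose

V = (1, 2)
S = [frozenset(c) for k in range(len(V) + 1) for c in combinations(V, k)]
leq = lambda x, y: x <= y                 # partial order = set inclusion
mobius = lambda x, y: (-1) ** len(y - x) if x <= y else 0  # closed form on 2^V

# a strictly positive distribution indexed by subsets: {}, {1}, {2}, {1,2}
p = {S[0]: 0.4, S[1]: 0.1, S[2]: 0.2, S[3]: 0.3}

theta = {x: sum(mobius(s, x) * log(p[s]) for s in S) for x in S}  # Eq. (6)
eta = {x: sum(p[s] for s in S if leq(x, s)) for x in S}           # Eq. (7)

# Eq. (8): log p(x) is the sum of theta over the down-set of x
for x in S:
    assert isclose(log(p[x]), sum(theta[s] for s in S if leq(s, x)))
# Eq. (9): p(x) = sum_s mu(x, s) eta(s)
for x in S:
    assert isclose(p[x], sum(mobius(x, s) * eta[s] for s in S))
```

Here $\eta$ on the singletons gives exactly the marginals of the log-linear model, e.g. $\eta_{\{1\}} = \Pr(x_1 = 1) = p(\{1\}) + p(\{1, 2\}) = 0.4$.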

### 4.2 Dually Flat Riemannian Manifold

We theoretically analyze our log-linear model introduced in Equations (6), (7) and show that they form dual coordinate systems on a dually flat manifold, which has been mainly studied in the area of information geometry [Amari, 2001, Nakahara and Amari, 2002, Amari, 2014, 2016]. Moreover, we show that the Riemannian metric and connection of our model can be analytically computed in closed forms.

In the following, we denote by $\xi$ either of the functions $\theta$ or $\eta$, and by $\nabla$ the gradient operator with respect to $\xi$, i.e., $(\nabla f)(x) = \partial f / \partial \xi(x)$ for $x \in S^+$, and denote by $\mathcal{S}$ the set of probability distributions specified by probability vectors, which forms a statistical manifold. We use uppercase letters $P, Q, R, \dots$ for points (distributions) in $\mathcal{S}$ and their lowercase letters $p, q, r, \dots$ for the corresponding probability vectors treated as mappings. We write $\theta_P$ and $\eta_P$ if they are connected with $p$ by Equations (6) and (7), respectively, and abbreviate subscripts if there is no ambiguity.

#### 4.2.1 Dually Flat Structure

We show that $\mathcal{S}$ has the dually flat Riemannian structure induced by the two functions $\theta$ and $\eta$ in Equations (6) and (7). We define $\psi(\theta)$ as

$$\psi(\theta) = -\theta(\bot) = -\log p(\bot), \tag{10}$$

which corresponds to the normalizer of $p$. It is a convex function since we have

$$\psi(\theta) = \log \sum_{x \in S} \exp\Bigl( \sum_{\bot < s \leq x} \theta(s) \Bigr)$$

from Equation (8). We apply the Legendre transformation to $\psi(\theta)$ given as

$$\varphi(\eta) = \max_{\theta'} \bigl( \theta' \eta - \psi(\theta') \bigr), \qquad \theta' \eta = \sum_{x \in S^+} \theta'(x)\, \eta(x). \tag{11}$$

Then $\varphi(\eta)$ coincides with the negative entropy.

###### Theorem 1 (Legendre dual).

$$\varphi(\eta) = \sum_{x \in S} p(x) \log p(x).$$

###### Proof.

From Equation (5), we have

$$\theta' \eta = \sum_{x \in S^+} \theta'(x) \sum_{s \geq x} p(s) = \sum_{s \in S} p(s) \sum_{\bot < x \leq s} \theta'(x) = \sum_{s \in S} p(s) \bigl( \log p'(s) + \psi(\theta') \bigr).$$

Thus it holds that

$$\theta' \eta - \psi(\theta') = \sum_{x \in S} p(x) \log p'(x). \tag{12}$$

Hence it is maximized with $p' = p$. ∎

Since $\psi$ and $\varphi$ are connected with each other by the Legendre transformation, they form a dual coordinate system $\nabla \psi(\theta)$ and $\nabla \varphi(\eta)$ of $\mathcal{S}$ [Amari, 2016, Section 1.5], which coincides with $\eta$ and $\theta$ as follows.

###### Theorem 2 (dual coordinate system).

$$\nabla \psi(\theta) = \eta, \qquad \nabla \varphi(\eta) = \theta. \tag{13}$$

###### Proof.

They can be directly derived from our definitions (Equations (6) and (11)) as

$$\frac{\partial \psi(\theta)}{\partial \theta(x)} = \sum_{s \geq x} p(s) = \eta(x), \qquad \frac{\partial \varphi(\eta)}{\partial \eta(x)} = \frac{\partial}{\partial \eta(x)} \bigl( \theta \eta - \psi(\theta) \bigr) = \theta(x). \qquad \qed$$

Moreover, we can confirm the orthogonality of $\theta$ and $\eta$ as

$$\begin{aligned} \mathbf{E}\Bigl[ \frac{\partial}{\partial \theta(x)} \log p(s)\, \frac{\partial}{\partial \eta(y)} \log p(s) \Bigr] &= \sum_{s \in S} \Bigl[ p(s)\, \frac{\partial}{\partial \theta(x)} \sum_{u \in S} \zeta(u, s)\, \theta(u)\, \frac{\partial}{\partial \eta(y)} \log \Bigl( \sum_{u \in S} \mu(s, u)\, \eta(u) \Bigr) \Bigr] \\ &= \sum_{s \in S} \Bigl[ p(s) \bigl( \zeta(x, s) - \eta(x) \bigr) \frac{\mu(s, y)}{p(s)} \Bigr] \\ &= \sum_{s \in S} \zeta(x, s)\, \mu(s, y) = \delta_{xy}. \end{aligned}$$

The last equation holds from Equation (5); hence the Möbius inversion directly leads to the orthogonality.

The Bregman divergence is known to be the canonical divergence [Amari, 2016, Section 6.6] to measure the difference between two distributions $P$ and $Q$ on a dually flat manifold, which is defined as

$$D[P, Q] = \psi(\theta_P) + \varphi(\eta_Q) - \theta_P \eta_Q.$$

In our case, since we have $\varphi(\eta_Q) = \sum_x q(x) \log q(x)$ and $\theta_P \eta_Q - \psi(\theta_P) = \sum_x q(x) \log p(x)$ from Theorem 1 and Equation (12), it is given as

$$D[P, Q] = \sum_{x \in S} q(x) \log \frac{q(x)}{p(x)},$$

which coincides with the Kullback–Leibler divergence (KL divergence) from $Q$ to $P$: $D[P, Q] = D_{\mathrm{KL}}[Q, P]$.

#### 4.2.2 Riemannian Structure

Next we analyze the Riemannian structure on $\mathcal{S}$ and show that the Möbius inversion formula enables us to compute the Riemannian metric of $\mathcal{S}$.

###### Theorem 3 (Riemannian metric).

The manifold $\mathcal{S}$ is a Riemannian manifold with the Riemannian metric $g(\xi)$ such that for all $x, y \in S^+$

$$g_{xy}(\xi) = \begin{cases} \displaystyle \sum_{s \in S} \zeta(x, s)\, \zeta(y, s)\, p(s) - \eta(x)\, \eta(y) & \text{if } \xi = \theta, \\[2ex] \displaystyle \sum_{s \in S} \mu(s, x)\, \mu(s, y)\, p(s)^{-1} & \text{if } \xi = \eta. \end{cases}$$
###### Proof.

Since the Riemannian metric is defined as $g(\theta) = \nabla \nabla \psi(\theta)$ and $g(\eta) = \nabla \nabla \varphi(\eta)$, when $\xi = \theta$ we have

$$g_{xy}(\theta) = \frac{\partial^2 \psi(\theta)}{\partial \theta(x)\, \partial \theta(y)} = \frac{\partial}{\partial \theta(x)} \eta(y) = \frac{\partial}{\partial \theta(x)} \sum_{s \in S} \zeta(y, s) \exp\Bigl( \sum_{\bot < u \leq s} \theta(u) - \psi(\theta) \Bigr) = \sum_{s \in S} \zeta(x, s)\, \zeta(y, s)\, p(s) - \eta(x)\, \eta(y).$$

When $\xi = \eta$, it follows that

$$\begin{aligned} g_{xy}(\eta) &= \frac{\partial^2 \varphi(\eta)}{\partial \eta(x)\, \partial \eta(y)} = \frac{\partial}{\partial \eta(x)} \theta(y) = \frac{\partial}{\partial \eta(x)} \sum_{s \leq y} \mu(s, y) \log p(s) = \frac{\partial}{\partial \eta(x)} \sum_{s \leq y} \mu(s, y) \log \Bigl( \sum_{u \geq s} \mu(s, u)\, \eta(u) \Bigr) \\ &= \sum_{s \in S} \frac{\mu(s, x)\, \mu(s, y)}{\sum_{u \geq s} \mu(s, u)\, \eta(u)} = \sum_{s \in S} \mu(s, x)\, \mu(s, y)\, p(s)^{-1}. \qquad \qed \end{aligned}$$
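The $\theta$-form of the metric, $g_{xy}(\theta) = \partial \eta(x)/\partial \theta(y)$, can be verified numerically on a toy poset. The sketch below is our illustration (the three-element chain, parameter values, and step size are arbitrary); it compares the closed form of Theorem 3 against a finite-difference derivative of $\eta$ with respect to $\theta$, with $\theta(\bot)$ absorbing the normalization:

```python
from math import exp, isclose

# chain poset: bot < a < b, encoded by ranks 0 < 1 < 2
S, bot = [0, 1, 2], 0
leq = lambda s, x: s <= x          # zeta(s, x) = 1 iff s <= x

def distribution(theta):
    """p(x) from Eq. (8): log p(x) = sum_{s <= x} theta(s),
    with theta(bot) acting as the normalizer."""
    w = [exp(sum(theta[s] for s in S if leq(s, x) and s != bot)) for x in S]
    z = sum(w)
    return [v / z for v in w]

def eta(p):
    return [sum(p[s] for s in S if leq(x, s)) for x in S]

theta = {1: 0.3, 2: -0.7}          # free parameters on S+ = {1, 2}
p = distribution(theta)

h = 1e-6
for x in (1, 2):
    for y in (1, 2):
        # closed form: g_xy(theta) = sum_s zeta(x,s) zeta(y,s) p(s) - eta(x) eta(y)
        g = sum(p[s] for s in S if leq(x, s) and leq(y, s)) \
            - eta(p)[x] * eta(p)[y]
        # finite difference of eta(x) in the direction theta(y)
        tp = dict(theta); tp[y] += h
        num = (eta(distribution(tp))[x] - eta(p)[x]) / h
        assert isclose(g, num, abs_tol=1e-4)
```

The agreement reflects exactly the derivation above: perturbing $\theta(y)$ reweights every $p(s)$ with $s \geq y$ and shifts the normalizer, producing the two terms of the closed form.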

Note that $g(\xi)$ coincides with the Fisher information matrix:

$$\mathbf{E}\Bigl[ \frac{\partial}{\partial \theta(x)} \log p(s)\, \frac{\partial}{\partial \theta(y)} \log p(s) \Bigr] = g_{xy}(\theta), \qquad \mathbf{E}\Bigl[ \frac{\partial}{\partial \eta(x)} \log p(s)\, \frac{\partial}{\partial \eta(y)} \log p(s) \Bigr] = g_{xy}(\eta).$$

Then the Riemannian (Levi-Civita) connection $\Gamma(\xi)$ with respect to $\xi$, which is defined as

$$\Gamma_{xyz}(\xi) = \frac{1}{2} \Bigl( \frac{\partial g_{yz}(\xi)}{\partial \xi(x)} + \frac{\partial g_{xz}(\xi)}{\partial \xi(y)} - \frac{\partial g_{xy}(\xi)}{\partial \xi(z)} \Bigr)$$

for all $x, y, z \in S^+$, can be analytically obtained.

###### Theorem 4 (Riemannian connection).

The Riemannian connection $\Gamma(\xi)$ on the manifold $\mathcal{S}$ is given in the following for all $x, y, z \in S^+$,

$$\Gamma_{xyz}(\xi) = \begin{cases} \displaystyle \frac{1}{2} \sum_{s \in S} \bigl( \zeta(x, s) - \eta(x) \bigr) \bigl( \zeta(y, s) - \eta(y) \bigr) \bigl( \zeta(z, s) - \eta(z) \bigr)\, p(s) & \text{if } \xi = \theta, \\[2ex] \displaystyle -\frac{1}{2} \sum_{s \in S} \mu(s, x)\, \mu(s, y)\, \mu(s, z)\, p(s)^{-2} & \text{if } \xi = \eta. \end{cases}$$
###### Proof.

For all $x, y, z \in S^+$, we have

$$\frac{\partial g_{yz}(\theta)}{\partial \theta(x)} = \frac{\partial}{\partial \theta(x)} \sum_{s \in S} \zeta(y, s)\, \zeta(z, s)\, p(s) - \frac{\partial}{\partial \theta(x)} \eta(y)\, \eta(z),$$

where

$$\frac{\partial}{\partial \theta(x)} \sum_{s \in S} \zeta(y, s)\, \zeta(z, s)\, p(s) = \sum_{s \in S} \zeta(x, s)\, \zeta(y, s)\, \zeta(z, s)\, p(s) - \eta(x) \sum_{s \in S} \zeta(y, s)\, \zeta(z, s)\, p(s)$$

and

$$\frac{\partial}{\partial \theta(x)} \eta(y)\, \eta(z) = \frac{\partial \eta(y)}{\partial \theta(x)} \eta(z) + \frac{\partial \eta(z)}{\partial \theta(x)} \eta(y) = \eta(z) \sum_{s \in S} \zeta(x, s)\, \zeta(y, s)\, p(s) + \eta(y) \sum_{s \in S} \zeta(x, s)\, \zeta(z, s)\, p(s) - 2\, \eta(x)\, \eta(y)\, \eta(z).$$

It follows that

$$\frac{\partial g_{yz}(\theta)}{\partial \theta(x)} = \sum_{s \in S} \bigl( \zeta(x, s) - \eta(x) \bigr) \bigl( \zeta(y, s) - \eta(y) \bigr) \bigl( \zeta(z, s) - \eta(z) \bigr)\, p(s).$$

On the other hand,

$$\frac{\partial g_{yz}(\eta)}{\partial \eta(x)} = \frac{\partial}{\partial \eta(x)} \sum_{s \in S} \mu(s, y)\, \mu(s, z)\, p(s)^{-1} = \frac{\partial}{\partial \eta(x)} \sum_{s \in S} \mu(s, y)\, \mu(s, z) \Bigl( \sum_{u \geq s} \mu(s, u)\, \eta(u) \Bigr)^{-1} = -\sum_{s \in S} \mu(s, x)\, \mu(s, y)\, \mu(s, z)\, p(s)^{-2}.$$

Therefore, from the definition of $\Gamma_{xyz}(\xi)$, it follows that

$$\begin{aligned} \Gamma_{xyz}(\theta) &= \frac{1}{2} \Bigl( \frac{\partial g_{yz}(\theta)}{\partial \theta(x)} + \frac{\partial g_{xz}(\theta)}{\partial \theta(y)} - \frac{\partial g_{xy}(\theta)}{\partial \theta(z)} \Bigr) = \frac{1}{2} \sum_{s \in S} \bigl( \zeta(x, s) - \eta(x) \bigr) \bigl( \zeta(y, s) - \eta(y) \bigr) \bigl( \zeta(z, s) - \eta(z) \bigr)\, p(s), \\ \Gamma_{xyz}(\eta) &= \frac{1}{2} \Bigl( \frac{\partial g_{yz}(\eta)}{\partial \eta(x)} + \frac{\partial g_{xz}(\eta)}{\partial \eta(y)} - \frac{\partial g_{xy}(\eta)}{\partial \eta(z)} \Bigr) = -\frac{1}{2} \sum_{s \in S} \mu(s, x)\, \mu(s, y)\, \mu(s, z)\, p(s)^{-2}. \qquad \qed \end{aligned}$$

### 4.3 The Projection Algorithm

Projection of a distribution onto a submanifold is essential; several machine learning algorithms are known to be formulated as projection of a distribution empirically estimated from data onto a submanifold that is specified by the target model [Amari, 2016]. Here we define projection of distributions on posets and show that Newton's method can be applied to perform projection as the Jacobian matrix can be analytically computed.

#### 4.3.1 Definition

Let $\mathcal{S}(\beta)$ be a submanifold of $\mathcal{S}$ such that

$$\mathcal{S}(\beta) = \bigl\{ P \in \mathcal{S} \bigm| \theta_P(x) = \beta(x) \text{ for all } x \in \mathrm{dom}(\beta) \bigr\} \tag{14}$$

specified by a function $\beta$ with $\mathrm{dom}(\beta) \subseteq S^+$. Projection of $P$ onto $\mathcal{S}(\beta)$, called m-projection, which is defined as the distribution $P_\beta$ such that

$$\begin{cases} \theta_{P_\beta}(x) = \beta(x) & \text{if } x \in \mathrm{dom}(\beta), \\ \eta_{P_\beta}(x) = \eta_P(x) & \text{if } x \in S^+ \setminus \mathrm{dom}(\beta), \end{cases}$$

is the minimizer of the KL divergence from $P$ to $\mathcal{S}(\beta)$:

$$P_\beta = \operatorname*{argmin}_{Q \in \mathcal{S}(\beta)} D_{\mathrm{KL}}[P, Q].$$

The dually flat structure with the coordinate systems $\theta$ and $\eta$ guarantees that the projected distribution $P_\beta$ always exists and is unique [Amari, 2009, Theorem 3]. Moreover, the Pythagorean theorem holds in the dually flat manifold, that is, for any $Q \in \mathcal{S}(\beta)$ we have

$$D_{\mathrm{KL}}[P, Q] = D_{\mathrm{KL}}[P, P_\beta] + D_{\mathrm{KL}}[P_\beta, Q].$$

We can switch $\theta$ and $\eta$ in the submanifold by changing the condition in Equation (14) to $\eta_P(x) = \beta(x)$, where the projected distribution $P_\beta$ of $P$ is given as

$$\begin{cases} \theta_{P_\beta}(x) = \theta_P(x) & \text{if } x \in S^+ \setminus \mathrm{dom}(\beta), \\ \eta_{P_\beta}(x) = \beta(x) & \text{if } x \in \mathrm{dom}(\beta). \end{cases}$$

This projection is called e-projection.

###### Example 1 (Boltzmann machine).

Let a Boltzmann machine be represented as an undirected graph $G = (V, E)$ with a vertex set $V$ and an edge set $E \subseteq \{\{i, j\} \mid i, j \in V\}$. The set of probability distributions that can be modeled by a Boltzmann machine $G$ coincides with the submanifold

$$\mathcal{S}_B = \bigl\{ P \in \mathcal{S} \bigm| \theta_P(x) = 0 \text{ if } |x| > 2 \text{ or } x \notin E \bigr\},$$

with $S = 2^V$. Let $\hat{P}$ be an empirical distribution estimated from a given dataset. The learned model is the m-projection of the empirical distribution $\hat{P}$ onto $\mathcal{S}_B$, where the resulting distribution $P_B$ satisfies $\theta_{P_B}(x) = 0$ if $|x| > 2$ or $x \notin E$, and $\eta_{P_B}(x) = \eta_{\hat{P}}(x)$ otherwise.

#### 4.3.2 Computation

Here we show how to compute projection of a given probability distribution. We show that Newton's method can be used to efficiently compute the projected distribution $P_\beta$ by iteratively updating $P^{(0)} = P$ as $P^{(1)}, P^{(2)}, \dots$ until converging to $P_\beta$.

Let us start with the m-projection with initializing $P^{(0)} = P$. In each iteration $t$, we update $\eta_{P^{(t)}}(x)$ for all $x \in \mathrm{dom}(\beta)$ while fixing $\eta_{P^{(t)}}(x)$ for all $x \in S^+ \setminus \mathrm{dom}(\beta)$, which is possible from the orthogonality of $\theta$ and $\eta$. Using Newton's method, $\eta^{(t+1)}_{P_\beta}$ should satisfy

$$\bigl( \theta^{(t)}_{P_\beta}(x) - \beta(x) \bigr) + \sum_{y \in \mathrm{dom}(\beta)} J_{xy} \bigl( \eta^{(t+1)}_{P_\beta}(y) - \eta^{(t)}_{P_\beta}(y) \bigr) = 0$$

for every $x \in \mathrm{dom}(\beta)$, where $J_{xy}$ is an entry of the Jacobian matrix $J$ and given as

$$J_{xy} = \frac{\partial \theta^{(t)}_{P_\beta}(x)}{\partial \eta^{(t)}_{P_\beta}(y)} = \sum_{s \in S} \mu(s, x)\, \mu(s, y)\, p^{(t)}_\beta(s)^{-1}$$

from Theorem 3. Therefore, we have the update formula for all $x \in \mathrm{dom}(\beta)$ as

$$\eta^{(t+1)}_{P_\beta}(x) = \eta^{(t)}_{P_\beta}(x) - \sum_{y \in \mathrm{dom}(\beta)} J^{-1}_{xy} \bigl( \theta^{(t)}_{P_\beta}(y) - \beta(y) \bigr).$$

In e-projection, we update $\theta_{P^{(t)}}(x)$ for $x \in \mathrm{dom}(\beta)$ while fixing $\theta_{P^{(t)}}(x)$ for all $x \in S^+ \setminus \mathrm{dom}(\beta)$. To ensure $\sum_{x \in S} p(x) = 1$, we add $\bot$ to $\mathrm{dom}(\beta)$ and set $\beta(\bot) = 1$. We update $\theta^{(t)}$ at each step as

$$\theta^{(t+1)}_{P_\beta}(x) = \theta^{(t)}_{P_\beta}(x) - \sum_{y \in \mathrm{dom}(\beta)} J'^{-1}_{xy} \bigl( \eta^{(t)}_{P_\beta}(y) - \beta(y) \bigr), \qquad J'_{xy} = \frac{\partial \eta^{(t)}_{P_\beta}(x)}{\partial \theta^{(t)}_{P_\beta}(y)} = \sum_{s \in S} \zeta(x, s)\, \zeta(y, s)\, p^{(t)}_\beta(s) - |S|\, \eta^{(t)}_{P_\beta}(x)\, \eta^{(t)}_{P_\beta}(y).$$

In this case, we also need to update $\theta^{(t+1)}_{P_\beta}(\bot)$ as it is not guaranteed to be fixed. Let us define

$$p'^{(t+1)}_\beta(x) = p^{(t)}_\beta(x) \prod_{s \in \mathrm{dom}(\beta)} \Biggl( \frac{\exp\bigl( \theta^{(t+1)}_{P_\beta}(s) \bigr)}{\exp\bigl( \theta^{(t)}_{P_\beta}(s) \bigr)} \Biggr)^{\zeta(s, x)}.$$

Since we have

$$p^{(t+1)}_\beta(x) = \frac{\exp\bigl( \theta^{(t+1)}_{P_\beta}(\bot) \bigr)}{\exp\bigl( \theta^{(t)}_{P_\beta}(\bot) \bigr)}\, p'^{(t+1)}_\beta(x),$$

it follows that

$$\theta^{(t+1)}_{P_\beta}(\bot) - \theta^{(t)}_{P_\beta}(\bot) = -\log \Biggl( \exp\bigl( \theta^{(t)}_{P_\beta}(\bot) \bigr) + \sum_{x \in S^+} p'^{(t+1)}_\beta(x) \Biggr).$$

The time complexity of each iteration is $O(|\mathrm{dom}(\beta)|^3)$, which is required to compute the inverse of the Jacobian matrix.

Global convergence of the projection algorithm is always guaranteed by the convexity of the submanifold $\mathcal{S}(\beta)$ defined in Equation (14). Since $\mathcal{S}(\beta)$ is always convex with respect to the $\theta$- and $\eta$-coordinates, it is straightforward to see that our e-projection is an instance of the Bregman algorithm onto a convex region, which is well known to always converge to the global solution [Censor and Lent, 1981].
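A complete m-projection is easy to exercise on the Boolean lattice $2^{\{1,2\}}$. The sketch below is our illustration (the distribution and constraint are arbitrary): it projects a correlated distribution onto the submanifold $\theta(\{1,2\}) = 0$, i.e., the independence model, using the Newton update on $\eta(\{1,2\})$ with the Jacobian $J_{xy} = \sum_s \mu(s, x)\, \mu(s, y)\, p(s)^{-1}$. The result must be the product of the marginals, which gives a closed-form check:

```python
from math import log, isclose

# outcome space: subsets of {1, 2} encoded as bit masks 0b00 .. 0b11
S = [0b00, 0b01, 0b10, 0b11]
mu = lambda x, y: (-1) ** bin(y ^ x).count("1") if x & y == x else 0

def project_independent(p, tol=1e-12, max_iter=30):
    """m-projection onto theta(0b11) = 0: eta on {1} and {2} stays fixed,
    eta on {1,2} moves by Newton's method until theta(0b11) vanishes."""
    eta = {x: sum(p[s] for s in S if s & x == x) for x in S}
    for _ in range(max_iter):
        # reconstruct p from eta via Eq. (9): p(x) = sum_s mu(x, s) eta(s)
        q = {x: sum(mu(x, s) * eta[s] for s in S) for x in S}
        # theta on the top element via Eq. (6) (inclusion-exclusion of log q)
        theta_top = sum(mu(s, 0b11) * log(q[s]) for s in S)
        if abs(theta_top) < tol:
            break
        # 1x1 Jacobian J = d theta(top)/d eta(top) = sum_s mu(s, top)^2 / q(s)
        J = sum(mu(s, 0b11) ** 2 / q[s] for s in S)
        eta[0b11] -= theta_top / J
    return q

p = {0b00: 0.4, 0b01: 0.1, 0b10: 0.1, 0b11: 0.4}   # positively correlated
q = project_independent(p)
m1 = q[0b01] + q[0b11]   # marginal Pr(x1 = 1), preserved by the projection
m2 = q[0b10] + q[0b11]
assert isclose(q[0b11], m1 * m2, abs_tol=1e-9)       # independence reached
assert isclose(m1, p[0b01] + p[0b11], abs_tol=1e-9)  # marginals unchanged
```

Because the constraint is one-dimensional, the Jacobian is a scalar and each Newton step is a single division; the quadratic convergence is visible in how quickly $\theta(\{1,2\})$ collapses to zero.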

## 5 Balancing Matrices and Tensors

Now we are ready to solve the problem of matrix and tensor balancing as projection onto a dually flat manifold.

### 5.1 Matrix Balancing

Recall that the task of matrix balancing is to find diagonal matrices $R$ and $S$ that satisfy $(RAS)\mathbf{1} = \mathbf{1}$ and $(RAS)^{\mathsf{T}}\mathbf{1} = \mathbf{1}$ for a given nonnegative square matrix $A = (a_{ij}) \in \mathbb{R}^{n \times n}_{\geq 0}$.