Universal Joint Image Clustering and Registration using Partition Information

We consider the problem of universal joint clustering and registration of images and define algorithms using multivariate information functionals. We first study registering two images using maximum mutual information and prove its asymptotic optimality. We then show the shortcomings of pairwise registration in multi-image registration, and design an asymptotically optimal algorithm based on multiinformation. Further, we define a novel multivariate information functional to perform joint clustering and registration of images, and prove consistency of the algorithm. Finally, we consider registration and clustering of numerous limited-resolution images, defining algorithms that are order-optimal in the scaling of the number of pixels in each image with the number of images.


I Introduction

Suppose you have an unlabeled repository of MRI, CT, and PET scans of brain regions corresponding to different patients from different stages of the diagnostic process. You wish to sort them into clusters corresponding to individual patients and align the images within each cluster. In this work, we address this exact problem of joint clustering and registration of images using novel multivariate information functionals.

Image registration is the task of geometrically aligning two or more images of the same scene taken at different points in time, from different viewpoints, or by different imaging devices. It is a crucial step in most image processing tasks where a comparative study of the different images is required such as medical diagnosis, target detection, image fusion, change detection, and multimodal image restoration. In such applications it is also essential to classify images of different scenes prior to registering images of the same kind. Thus clustering images according to the scene is also critical to computer vision problems such as object tracking, face tagging, cryo-electron microscopy, and remote sensing.

Different digital images of the same scene can appear significantly different from each other, e.g., consider images of a scene that are negatives of each other. Such factors tend to make clustering and registering images harder. Further, such meta-data about the digital images is often not available a priori. This emphasizes the need for universality, i.e., the design of reliable clustering and registration algorithms that work without the specific knowledge of the priors or channel models that govern the image generation.

Image clustering and registration have often been dealt with separately. However, it is easy to see that clustering registered images and registering images within clusters are both relatively easier. Here we emphasize that the two problems are not separate and define universal, reliable, joint clustering and registration algorithms.

I-A Prior Work

There is rich literature on clustering and registration; we describe a non-exhaustive listing of relevant prior work.

Supervised learning for classification has recently gained prominence through deep convolutional neural networks and other machine learning algorithms. However, these methods require vast amounts of costly labeled training data. Thus, unsupervised image classification is of interest.

Unsupervised clustering of objects has been studied under numerous optimality and similarity criteria [2]. The k-means algorithm and its generalization to Bregman divergences [3] are popular distance-based methods. Popular techniques for unsupervised image clustering include affinity propagation [4, 5, 6] and orthogonal subspace projection [7]. We focus on information-based clustering algorithms [8, 9, 10], owing to the ubiquitous nature of information functionals in universal information processing. Universal clustering has also been studied in the contexts of communication and crowdsourcing [11, 12].

Separate from clustering, multi-image registration has been studied extensively [13]. Prominent region-based registration methods include maximum likelihood (ML) [14], minimum KL divergence [15], correlation detection [16], and maximum mutual information (MMI) [17, 18]. Feature-based techniques have also been explored [19].

Lower bounds on the mean squared error of image registration in the presence of additive noise, via Ziv-Zakai and Cramer-Rao bounds, have been explored recently [20, 21]. The MMI decoder was originally developed in universal communication [22]. Deterministic reasons for its effectiveness in image registration have been identified [23], and its correctness has been established through information-theoretic arguments [24].

A problem closely related to image registration is multireference alignment. There the aim is to denoise a signal from noisy, circularly-translated versions of itself, under Gaussian or binary noise models [25, 26]. Versions of this problem have been considered for image denoising under Gaussian noise [27]. Unlike denoising, we consider the task of registration alone but for a wider class of noise models under a universal setting.

I-B Our Contributions

While MMI has been found to perform well in numerous empirical studies, concrete theoretical guarantees are still lacking. In this work, we extend the framework of universal delay estimation [28] to derive universal asymptotic optimality guarantees for MMI in registering two images under the Hamming loss, under mild assumptions on the image models.

Even though the MMI method is universally asymptotically optimal for registering two images, we show that the method is strictly suboptimal in multi-image registration. We define the max multiinformation (MM) image registration algorithm that uses the multiinformation functional in place of pairwise MMI. We prove that the method is universal and asymptotically optimal using type counting arguments.

Then, we consider the task of joint clustering and registration. We define novel multivariate information functionals to characterize dependence in a collection of images. Under a variety of clustering criteria, we define algorithms using these functionals to perform joint clustering and registration and prove consistency of the methods.

Applications such as cryo-electron microscopy handle a large number of images of several molecular conformations. The task of clustering and registration is critical. With such motivation, we revisit joint clustering and registration under the constraint of limited resolution of images. We define blockwise clustering and registration algorithms, further showing they are order-optimal in the scaling of resolution with number of pixels in the system.

II Model

We now formulate the joint image clustering and registration problem and define the model of images we work with.

II-A Image and Noise

Consider a simple model of images, where each image is a collection of n pixels drawn independently and identically from an unknown prior distribution P defined on the finite space of pixel values 𝒳. Since the pixels are i.i.d., we represent the original scene of the image by an n-dimensional random vector X = (X_1, …, X_n). More specifically, the scene is drawn according to P[X = x] = ∏_{i∈[n]} P(x_i).

Consider a finite collection of ℓ distinct scenes drawn i.i.d. according to the prior. Each image may be viewed as a noisy depiction of one of these underlying scenes.

Consider a collection of m scenes drawn from this set, with each scene chosen independently and identically according to a pmf on [ℓ], so that image i corresponds to a depiction of the scene R(i), for i ∈ [m].

We model images corresponding to this collection of scenes as noisy versions of the underlying scenes, drawn as follows:

 P[Y_C = y_C | X_{R(C)} = x] = ∏_{j∈[n]} W(y_C(j) | x(j)),  (1)

where C ⊆ [m] is an inclusion-wise maximal subset such that R(i) = R(i′) for all i, i′ ∈ C, Y_C denotes the tuple of images indexed by C, and y_C(j) collects their jth pixels. That is, images corresponding to the same scene are jointly corrupted by a discrete memoryless channel W, while the images corresponding to different scenes are independent conditioned on the scenes. Here we assume W is unknown. The system is depicted in Fig. 1.
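As a concrete instance of this generative model, the following sketch samples scenes i.i.d., assigns images to scenes, corrupts pixels through a memoryless channel, and applies a random permutation to each image. The binary alphabet, uniform prior, per-image independent noise (a special case of the joint channel in (1)), and unrestricted permutations (in place of rigid transformations) are all illustrative assumptions of this sketch, not the paper's exact setting.

```python
import random

def bsc(x, flip, rng):
    """Binary symmetric channel: flip the pixel with probability `flip`."""
    return 1 - x if rng.random() < flip else x

def generate_images(num_scenes, num_images, n, channel, rng):
    """Sample binary scenes i.i.d. uniform (a stand-in for the unknown prior P),
    assign each image a scene uniformly, corrupt pixels memorylessly, and
    apply an independent random permutation (stand-in for a rigid transform)."""
    scenes = [[rng.randint(0, 1) for _ in range(n)] for _ in range(num_scenes)]
    labels = [rng.randrange(num_scenes) for _ in range(num_images)]
    images, perms = [], []
    for lbl in labels:
        noisy = [channel(p, rng) for p in scenes[lbl]]
        perm = list(range(n))
        rng.shuffle(perm)
        images.append([noisy[perm[i]] for i in range(n)])
        perms.append(perm)
    return scenes, labels, images, perms
```

A joint clustering and registration algorithm observes only `images` and must recover `labels` (up to relabeling) and the inverses of `perms`.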

Consider any two images generated as above. Let W̃(y|x) be the conditional distribution that a pixel in the second image is y, given that the corresponding pixel in the first image is x. Let

 Δ = D(W̃(Y|X_1) ∥ W̃(Y|X_2) | X_1, X_2),

where X_1, X_2 are i.i.d. copies of pixel values generated according to the marginal distribution of the first image. Note that if Δ is unbounded, then with sufficiently many pixel samples one can easily identify corresponding pixels in the copy. Hence, to avoid the trivial case, we presume there exist finite positive constants θ_m, θ_M such that, for any two images,

 0 < θ_m ≤ Δ ≤ θ_M < ∞.  (2)
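The quantity Δ can be computed directly from the channel and the pixel marginal. A minimal sketch follows; the binary symmetric channel and uniform marginal used in the example are illustrative assumptions only.

```python
import math

def conditional_kl(W, p):
    """Delta = D(W(.|X1) || W(.|X2) | X1, X2), with X1, X2 i.i.d. ~ p.

    W[x][y] is the probability that a pixel in the second image is y given
    the corresponding pixel in the first image is x; p is the pixel marginal."""
    delta = 0.0
    for x1, px1 in enumerate(p):
        for x2, px2 in enumerate(p):
            # Inner KL divergence between the two conditional rows.
            inner = sum(W[x1][y] * math.log(W[x1][y] / W[x2][y])
                        for y in range(len(W[x1])) if W[x1][y] > 0)
            delta += px1 * px2 * inner
    return delta
```

For a binary symmetric channel with crossover 0.1 and a uniform marginal, Δ is strictly positive and finite, so condition (2) holds.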

II-B Image Transformations

Corrupted images are also subject to independent rigid-body transformations such as rotation and translation. Since images are vectors of length n, transformations are represented by permutations of [n]. Let Π be the set of all allowable transformations; we assume Π is known.

Let π_i ∈ Π be the transformation of image i. Then, the final image is Ỹ_i = π_i(Y_i), for all i ∈ [m]. The image Y transformed by π is denoted interchangeably as π(Y) or Y_π.

We assume Π forms a commutative algebra over the composition operator ∘. More specifically,

1. for any π_1, π_2 ∈ Π, π_1 ∘ π_2 = π_2 ∘ π_1 ∈ Π;

2. there exists a unique identity e ∈ Π s.t. e ∘ π = π ∘ e = π, for all π ∈ Π;

3. for any π ∈ Π, there exists a unique inverse π^{-1} ∈ Π, s.t. π ∘ π^{-1} = e.

The number of distinct rigid transformations of images with n pixels on the two-dimensional lattice is polynomial in n, i.e., |Π| = O(n^c) for some constant c > 0 [29].

Definition 1

A permutation cycle {i_1, …, i_k} is a subset of the permutation π such that π(i_j) = i_{j+1} for all j < k, and π(i_k) = i_1.

It is clear from the pigeonhole principle that any permutation is composed of at least one permutation cycle. Let the number of permutation cycles of a permutation π be κ_π.

Definition 2

The identity block of a permutation π is the inclusion-wise maximal subset I_π ⊆ [n] such that π(i) = i, for all i ∈ I_π.

Definition 3

A permutation π is simple if κ_π = 1, i.e., if it consists of a single permutation cycle.

Definition 4

Any two permutations π_1, π_2 are said to be non-overlapping if π_1(i) ≠ π_2(i) for all i ∈ [n].
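These combinatorial notions are straightforward to compute for an explicit permutation. The sketch below (representing a permutation as a tuple mapping index i to π(i), an assumption of the illustration) extracts the cycles, identity block, simplicity check, and non-overlap test of Definitions 1-4:

```python
def cycles(perm):
    """Decompose a permutation of {0, ..., n-1} into its permutation cycles."""
    seen, out = set(), []
    for start in range(len(perm)):
        if start not in seen:
            cyc, i = [], start
            while i not in seen:
                seen.add(i)
                cyc.append(i)
                i = perm[i]
            out.append(cyc)
    return out

def identity_block(perm):
    """I_pi: the fixed points of the permutation (Definition 2)."""
    return [i for i, p in enumerate(perm) if p == i]

def is_simple(perm):
    """kappa_pi = 1: the permutation is a single cycle (Definition 3)."""
    return len(cycles(perm)) == 1

def non_overlapping(p1, p2):
    """Definition 4: p1(i) != p2(i) for every index i."""
    return all(a != b for a, b in zip(p1, p2))
```

For instance, (1, 2, 0, 4, 3) has κ_π = 2 and an empty identity block, while (0, 2, 1) fixes index 0.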

Lemma 1

Let π be chosen uniformly at random from the set of all permutations of [n]. Then, for any constants c, C > 0,

 P[|I_π| > cn] ≲ exp(−cn),  P[κ_π > Cn/log n] = o(1).  (3)
Proof:

First, we observe that the number ν_c of permutations that have an identity block of size at least cn is bounded as

 ν_c ≤ (n choose cn) ((1−c)n)! = n!/(cn)!.

Thus, from Stirling's approximation,

 P[|I_π| ≥ cn] ≤ ν_c/n! ≤ (1/√(2π)) exp(−(cn + 1/2) log(cn) + cn).

The lengths and number of cycles in a random permutation may be analyzed as detailed in [30]. In particular, we note that for a random permutation π, E[κ_π] = O(log n). Using Markov's inequality, the result follows.

Following Lem. 1, we assume that for any π ∈ Π, κ_π = o(n/log n), i.e., the number of permutation cycles does not grow too fast. Further, let |I_π| = o(n) for any π ∈ Π other than the identity.

II-C Performance Metrics

We now introduce formal metrics to quantify performance of the joint clustering and registration algorithms.

Definition 5

A clustering of the m images is a partition P of [m]. The sets of a partition are referred to as clusters. The clustering is said to be correct if

 i,j∈C⇔R(i)=R(j), for all i,j∈[m], C∈P.

Let 𝒫 be the set of all partitions of [m]. For a given collection of scenes, we represent the correct clustering by P∗.

Definition 6

A partition P_1 is finer than P_2, written P_1 ⪯ P_2, if every cluster of P_1 is contained in some cluster of P_2. Similarly, P_1 is denser than P_2, written P_1 ⪰ P_2, if P_2 is finer than P_1.

Definition 7

The correct registration of an image transformed by π is π^{-1}.

Definition 8

A universal clustering and registration algorithm Φ^{(n)} is a sequence of functions designed in the absence of knowledge of the prior, the channel W, and the number of scenes ℓ. Here the index n corresponds to the number of pixels in each image.

We focus on the 0-1 loss function to quantify performance.

Definition 9

The error probability of an algorithm Φ^{(n)} that outputs (P̂, π̂_1, …, π̂_m) is

 P_e(Φ^{(n)}) = P[{∪_{i∈[m]} {π̂_i ≠ π_i^{-1}}} ∪ {P̂ ≠ P∗}].  (4)
Definition 10

Alg. Φ^{(n)} is asymptotically consistent if lim_{n→∞} P_e(Φ^{(n)}) = 0, and is exponentially consistent if lim_{n→∞} −(1/n) log P_e(Φ^{(n)}) > 0.

Definition 11

The error exponent of an algorithm is

 E(Φ^{(n)}) = lim_{n→∞} −(1/n) log P_e(Φ^{(n)}).  (5)

We drop the argument and write P_e and E when the algorithm is clear from context.

III Registration of Two Images

We first consider the problem of registering two images, i.e., m = 2 and ℓ = 1. Thus the problem reduces to registering an image obtained by transforming the output Y of an equivalent discrete memoryless channel W, given the input (reference) image X. The channel model is depicted in Fig. 2.

This problem has been well-studied in practice, and a popular heuristic is the MMI method, defined as

 π̂_MMI = argmax_{π∈Π} Î(X; Y_π),  (6)

where Î(X; Y_π) is the mutual information of the empirical distribution q̂(a, b) = (1/n) Σ_{i∈[n]} 1{(X_i, Y_{π(i)}) = (a, b)}, and 1{·} is the indicator function. Note that the MMI method is universal.

Since transformations are chosen uniformly at random, the maximum likelihood (ML) estimate is Bayes optimal:

 π̂_ML = argmax_{π∈Π} ∏_{i=1}^{n} W(Y_{π(i)} | X_i).  (7)

We first show that the MMI method, and consequently the ML method, are exponentially consistent. We then show that the error exponent of MMI matches that of ML.

III-A Exponential Consistency

The empirical mutual information of i.i.d. samples is asymptotically exponentially consistent.

Theorem 1

Let (X_1, Y_1), …, (X_n, Y_n) be random samples drawn i.i.d. according to a joint distribution defined on the finite space 𝒳 × 𝒴. For fixed alphabet sizes |𝒳|, |𝒴|, the ML estimates of entropy and mutual information are asymptotically exponentially consistent and satisfy

 P[|Ĥ(X) − H(X)| > ϵ] ≤ (n+1)^{|𝒳|} e^{−cnϵ⁴},  (8)

 P[|Î(X;Y) − I(X;Y)| > ϵ] ≤ 3(n+1)^{|𝒳||𝒴|} e^{−c̃nϵ⁴},  (9)

where c, c̃ > 0 are constants independent of n.

Proof:

The proof is given in [31, Lem. 7].

Theorem 2

MMI and ML are exponentially consistent.

Proof:

Let the correct registration be π∗ and let Ỹ = Y_{π∗}. Then,

 P_e(Φ_MMI) = P[π̂_MMI ≠ π∗]
 ≤ Σ_{π∈Π} P[Î(X;Y_π) > Î(X;Ỹ)]  (10)
 ≤ Σ_{π∈Π} P[|Î(X;Y_π) − (Î(X;Ỹ) − I(X;Ỹ))| > I(X;Ỹ)]
 ≤ Σ_{π∈Π} P[Î(X;Y_π) + |Î(X;Ỹ) − I(X;Ỹ)| > I(X;Ỹ)]  (11)
 ≤ 2|Π| exp{−Cn(I(X;Ỹ))⁴},  (12)

where (10), (11), and (12) follow from the union bound, the triangle inequality, and (9), respectively. Here C > 0 is a constant.

Thus MMI is exponentially consistent, since I(X;Ỹ) > 0. Finally, P_e(Φ_ML) ≤ P_e(Φ_MMI) as ML is Bayes optimal, and thus the ML estimate is also exponentially consistent.

Thm. 2 implies there exists ϵ > 0 such that

 E(Φ_ML) ≥ E(Φ_MMI) ≥ ϵ.

III-B Whittle’s Law and Markov Types

We now summarize a few results on the number of types and Markov types which are eventually used to analyze the error exponent of image registration.

Consider a sequence x = (x_1, …, x_n) ∈ 𝒳^n. The empirical distribution q_X of x is the type of the sequence. Let X be a dummy random variable distributed according to q_X, and let T^n_X be the set of all sequences of length n of type q_X. The number of possible types of sequences of length n is polynomial in n, i.e., at most (n+1)^{|𝒳|} [32].

The number of sequences of length n of type q_X is

 |T^n_X| = n! / ∏_{a∈𝒳} (n q_X(a))!.

From bounds on multinomial coefficients, the number of sequences of length n and type q_X is bounded as [32]

 (n+1)^{−|𝒳|} 2^{nH(X)} ≤ |T^n_X| ≤ 2^{nH(X)}.  (13)
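The exact count and the bounds in (13) are easy to check numerically. The following sketch computes |T^n_X| for a type given by its symbol counts; the binary example values used below are illustrative:

```python
import math
from math import factorial

def type_class_size(counts):
    """|T^n_X| = n! / prod_a (n q_X(a))! for the type with these symbol counts."""
    size = factorial(sum(counts))
    for c in counts:
        size //= factorial(c)
    return size

def entropy_bits(counts):
    """Entropy H(X) in bits of the type with the given symbol counts."""
    n = sum(counts)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)
```

For n = 10 binary sequences with six zeros and four ones, |T^n_X| = 210, which indeed sits between (n+1)^{−|𝒳|} 2^{nH(X)} and 2^{nH(X)}.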

Consider a Markov chain defined on the state space [k]. Given a sequence of n samples from the chain, we can compute the matrix F of transition counts, where F_{ij} is the number of transitions from state i to state j. By Whittle’s formula [33], the number of sequences with transition counts F that begin at state u and end at state v is

 N^{(n)}_{uv}(F) = [∏_{i∈[k]} (Σ_{j∈[k]} F_{ij})! / ∏_{j∈[k]} F_{ij}!] G∗_{vu},  (14)

where G∗_{vu} is the (v,u)th cofactor of the matrix G = (g_{ij}) with

 g_{ij} = 1{i=j} − F_{ij} / Σ_{j∈[k]} F_{ij}.

The first-order Markov type of a sequence x is defined as the empirical distribution q_{X_0,X_1} given by

 q_{X_0,X_1}(a_0, a_1) = (1/n) Σ_{i=1}^{n} 1{(x_i, x_{i+1}) = (a_0, a_1)}.

Here we assume that the sequence is cyclic with period n, i.e., x_{i+n} = x_i for any i. Let (X_0, X_1) be dummy random variables with joint distribution q_{X_0,X_1}, and let q_0 be the marginal of X_0. Then, from (14), the set T^n_{X_0,X_1} of sequences of type q_{X_0,X_1} satisfies

 |T^n_{X_0,X_1}| = (Σ_{a∈𝒳} G∗_{a,a}) ∏_{a_0∈𝒳} (n q_0(a_0))! / ∏_{a_0,a_1∈𝒳} (n q_{X_0,X_1}(a_0, a_1))!.

From the definition of G, we can bound the trace of the cofactor matrix of G as

 |𝒳|(n+1)^{−|𝒳|} ≤ Σ_{a∈𝒳} G∗_{a,a} ≤ |𝒳|.

Again using the bounds on multinomial coefficients, we have

 |𝒳|(n+1)^{−(|𝒳|²+|𝒳|)} 2^{n(H(X_0,X_1)−H(X_0))} ≤ |T^n_{X_0,X_1}| ≤ |𝒳| 2^{n(H(X_0,X_1)−H(X_0))}.  (15)

The joint first-order Markov type of a pair of sequences (x, y) is the empirical distribution

 q_{X_0,X_1,Y}(a_0, a_1, b) = (1/n) Σ_{i=1}^{n} 1{(x_i, x_{i+1}, y_i) = (a_0, a_1, b)}.

Then, given x, the set T^n_{Y|X_0,X_1}(x) of sequences of the corresponding conditional first-order Markov type satisfies [28]

 (n+1)^{−|𝒳|²|𝒴|} 2^{n(H(X_0,X_1,Y)−H(X_0,X_1))} ≤ |T^n_{Y|X_0,X_1}(x)| ≤ 2^{n(H(X_0,X_1,Y)−H(X_0,X_1))}.  (16)
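The (cyclic) first-order Markov types used throughout this section can be computed directly from a sequence; a minimal sketch:

```python
from collections import Counter

def markov_type(xs):
    """Cyclic first-order Markov type q_{X0,X1}: empirical distribution of
    consecutive pairs (x_i, x_{i+1}), with index n+1 identified with 1."""
    n = len(xs)
    return {p: c / n for p, c in
            Counter((xs[i], xs[(i + 1) % n]) for i in range(n)).items()}

def joint_markov_type(xs, ys):
    """Joint first-order Markov type q_{X0,X1,Y} of a pair of sequences."""
    n = len(xs)
    return {t: c / n for t, c in
            Counter((xs[i], xs[(i + 1) % n], ys[i]) for i in range(n)).items()}
```

By cyclicity, both marginals of q_{X_0,X_1} coincide with the zeroth-order type of the sequence, which is why the exponent in (15) is H(X_0,X_1) − H(X_0).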
Lemma 2

Let π_1, π_2 be any two non-overlapping permutations, and let X_{π_1}, X_{π_2} be the corresponding permutations of the sequence X. Let π_2 ∘ π_1^{-1} be a simple permutation. Then, for every x ∈ 𝒳^n, there exists x̃ ∈ 𝒳^n such that

 |T^n_{X_{π_1},X_{π_2}}| = |T^n_{X_0,X_1}|,  |T^n_{Y|X_{π_1},X_{π_2}}(x)| = |T^n_{Y|X_0,X_1}(x̃)|.
Proof:

Since the permutations are non-overlapping, there is a bijection from T^n_{X_{π_1},X_{π_2}} to T^n_{X_0,X_1}: following the single cycle of the simple permutation π_2 ∘ π_1^{-1} rearranges any sequence x into a sequence x̃ whose cyclic first-order Markov type matches the pairwise type of (X_{π_1}, X_{π_2}). Further, this map is invertible, and so the sets are of equal size.

The result for conditional types follows mutatis mutandis.

Lem. 2 implies that |T^n_{X_{π_1},X_{π_2}}| and |T^n_{Y|X_{π_1},X_{π_2}}(x)| satisfy (15) and (16), respectively. We now show that the result of Lem. 2 can be extended to any two permutations π_1, π_2 ∈ Π.

Lemma 3

Let π_1, π_2 ∈ Π be any two permutations. Then,

 |T^n_{X_{π_1},X_{π_2}}| = 2^{n(H(q_{X_{π_1},X_{π_2}}) − H(q_X) + o(1))}.
Proof:

Let π = π_2 ∘ π_1^{-1} and κ = κ_π. For i ∈ [κ], let the length of the ith permutation cycle of π be α_i n. Further, let I_π be the identity block of π and let γ = |I_π|/n, so that Σ_{i∈[κ]} α_i + γ = 1. Then we have the decomposition

 q_{X_{π_1},X_{π_2}}(a_0, a_1) = Σ_{i=1}^{κ} α_i q_i(a_0, a_1) + γ q_I(a_0, a_1),  (17)

for all a_0, a_1 ∈ 𝒳. Here, q_i is the first-order Markov type defined on the ith permutation cycle of π and q_I is the zeroth-order Markov type corresponding to the identity block of π.

From (17), we see that, given a valid decomposition into types {q_i, q_I}, the number of sequences can be computed as a product of the numbers of subsequences of each type, i.e.,

 |T^n_{X_{π_1},X_{π_2}}| = Σ |T^{γn}_{q_I}| ∏_{i=1}^{κ} |T^{α_i n}_{q_i}|,

where the sum is over all valid decompositions in (17).

Additionally, from Lem. 2 we know the number of valid subsequences of each type. Let q′_i be the marginal corresponding to the first-order Markov type q_i.

Thus, we upper bound the size of the set as

 |T^n_{X_{π_1},X_{π_2}}| ≤ Σ 2^{γnH(q_I)} ∏_{i=1}^{κ} |𝒳| 2^{α_i n(H(q_i)−H(q′_i))}  (18)
 ≤ |𝒳|^κ (γn+1)^{|𝒳|} ∏_{i=1}^{κ} (α_i n+1)^{|𝒳|²} 2^{nM(q_{X_{π_1},X_{π_2}})},  (19)

where

 M(q_{X_{π_1},X_{π_2}}) = max γH(q_I) + Σ_{i=1}^{κ} α_i(H(q_i) − H(q′_i)),

the maximum taken over all valid decompositions in (17). Here, (18) follows from (13) and (15), and (19) follows since the total number of possible types is polynomial in the length of the sequence. Since κ_π = o(n/log n),

 |T^n_{X_{π_1},X_{π_2}}| ≤ 2^{n(M(q_{X_{π_1},X_{π_2}}) + o(1))}.

Now, let q′′_i = q′_i × u for all i ∈ [κ], where u is the uniform distribution on 𝒳, and let q = Σ_{i∈[κ]} α_i q_i with marginal q′. Since Σ_{i∈[κ]} α_i ≤ 1 and KL divergence is jointly convex, using Jensen’s inequality, we have

 Σ_{i=1}^{κ} α_i(H(q_i) − H(q′_i)) = Σ_{i=1}^{κ} α_i[log(|𝒳|) − D(q_i ∥ q′′_i)]
 ≤ log(|𝒳|) − D(q ∥ q′′)
 = H(q) − H(q′).

Thus,

 |T^n_{X_{π_1},X_{π_2}}| ≤ 2^{n(H(q) − H(q′) + o(1))}.

To obtain the lower bound, we note that the total number of sequences is at least the number of sequences obtained from any single valid decomposition. Thus, from (13) and (15),

 |T^n_{X_{π_1},X_{π_2}}| ≥ 2^{n(M(q_{X_{π_1},X_{π_2}}) + o(1))}.

Now, for large n, consider the cycles whose lengths α_i n grow with n; any cycle of smaller length contributes o(1) to the exponent due to Lem. 1. One valid decomposition of (17) is to have q_i = q for all i ∈ [κ]. However, the lengths of the subsequences differ, and q may not be a valid type of the corresponding length. Nevertheless, for each i, there exists a type q̃_i of length α_i n such that ∥q̃_i − q∥_TV = O(1/(α_i n)), where ∥·∥_TV is the total variational distance. Further, entropy is continuous in the distribution [34] and satisfies

 |H(q̃_i) − H(q)| ≤ (|𝒳|/(α_i n)) log(α_i n) = o(1).

This in turn indicates that

 |T^n_{X_{π_1},X_{π_2}}| ≥ 2^{n(H(q) − H(q′) + o(1))}.

A similar decomposition follows for conditional types as well.

III-C Error Analysis

We are interested in the error exponent of MMI-based image registration, in comparison to ML. We first note that the error exponent of the problem is characterized by the pair of transformations that are the hardest to compare.

Define Φ_{π,π′} as the binary hypothesis testing problem corresponding to image registration when the allowed transformations are only {π, π′}. Let P_{π,π′}(Φ) and E_{π,π′}(Φ) be the corresponding error probability and error exponent.

Lemma 4

Let Φ be an asymptotically exponentially consistent estimator. Then,

 E(Φ) = min_{π,π′∈Π} E_{π,π′}(Φ).  (20)
Proof:

Let π̂ be the estimate output by Φ and π∗ be the correct registration. We first upper bound the error probability as

 P_e(Φ) = P[π̂ ≠ π∗] ≤ Σ_{π≠π′} (1/|Π|) P[π̂ = π | π∗ = π′]  (21)
 ≤ (2/|Π|) Σ_{π≠π′} P_{π,π′}(Φ),

where (21) follows from the union bound. Conversely,

 P_e(Φ) = (1/|Π|) Σ_{π∈Π} Σ_{π′∈Π∖{π}} P[π̂ = π′ | π∗ = π]
 ≥ (1/|Π|) max_{π,π′∈Π} P_{π,π′}(Φ).

Finally, since |Π| is polynomial in n, the result follows.

Thus, it suffices to consider the binary hypothesis tests to study the error exponent of image registration.

Theorem 3

Let π_1, π_2 ∈ Π. Then,

 lim_{n→∞} P_{π_1,π_2}(Φ_MMI) / P_{π_1,π_2}(Φ_ML) = 1.  (22)
Proof:

Probabilities of i.i.d. sequences are determined by their joint type. Thus, we have

 P_{π_1,π_2}(Φ_MMI) = P[π̂_MMI ≠ π∗]
 = Σ_{x∈𝒳^n} Σ_{y∈𝒴^n} P[x, y] 1{π̂_MMI ≠ π∗}
 = Σ_q (∏_{a∈𝒳} ∏_{b∈𝒴} (P[a,b])^{nq(a,b)}) ν_MMI(q),  (23)

where the summation in (23) is over the set of all joint types q of sequence pairs of length n, and ν_MMI(q) is the number of sequence pairs of length n with joint type q for which the MMI algorithm makes a decision error.

If a sequence is in error, then all sequences in its conditional type class are in error, as (x, y) is drawn from an i.i.d. source. Now, we can decompose the number of sequences under error as follows:

 ν(q) = Σ_{x∈𝒳^n} Σ_{T^n_{Y|X_{π_1},X_{π_2}} ⊆ T^n_{Y|X} : error} |T^n_{Y|X_{π_1},X_{π_2}}(x)|
 = Σ_{T^n_{X_{π_1},X_{π_2}} ⊆ T^n_X} |T^n_{X_{π_1},X_{π_2}}| Σ |T^n_{Y|X_{π_1},X_{π_2}}|,

where the inner sum is taken over conditional types such that there is a decision error. The final line follows from the fact that, given the joint first-order Markov type, the size of the conditional type class is independent of the exact sequence x.

Finally, we have

 P_{π_1,π_2}(Φ_MMI) = Σ_q ∏_{a∈𝒳} ∏_{b∈𝒴} (P[a,b])^{nq(a,b)} ν_ML(q) [ν_MMI(q)/ν_ML(q)]
 ≤ P_{π_1,π_2}(Φ_ML) max_q {ν_MMI(q)/ν_ML(q)}.  (24)

The result then follows from the forthcoming Lem. 5.

Lemma 5
 lim_{n→∞} max_q {ν_MMI(q)/ν_ML(q)} = 1.  (25)
Proof:

Observe that for images with i.i.d. pixels, MMI is the same as minimizing the joint entropy, i.e.,

 max_{π∈Π} Î(X; Y_π) = max_{π∈Π} [Ĥ(X) + Ĥ(Y_π) − Ĥ(X, Y_π)]
 = Ĥ(X) + Ĥ(Y) − min_{π∈Π} Ĥ(X, Y_π),

since the marginal type, and hence Ĥ(Y_π) = Ĥ(Y), is invariant to the permutation.

Further we know that there is a bijective mapping between sequences corresponding to the permutations and the sequences of the corresponding first-order Markov type from Lem. 2 and 3. Thus, the result follows from [28, Lem. 1].

Theorem 4
 E(ΦMMI)=E(ΦML). (26)
Proof:

This follows from Thm. 3 and Lem. 4.

Thus, we can see that using MMI for image registration is not only universal, but also asymptotically optimal. We next study the problem of image registration for multiple images.

IV Multi-image Registration

Having universally registered two images, we now consider aligning multiple copies of the same image. For simplicity, let us consider aligning three images, i.e., ℓ = 1, m = 3; results extend directly to any finite number of images. Let X be the source image and Y, Z be the noisy, transformed versions to be aligned, as shown in Fig. 3.

Here, the ML estimates are

 (π̂_{Y,ML}, π̂_{Z,ML}) = argmax_{π_1,π_2∈Π} ∏_{i=1}^{n} W(Y_{π_1(i)}, Z_{π_2(i)} | X_i).  (27)

IV-A MMI is not optimal

We know MMI is asymptotically optimal at aligning two images. Is pairwise MMI, i.e.,

 π̂_Y = argmax_{π∈Π} Î(X; Y_π),  π̂_Z = argmax_{π∈Π} Î(X; Z_π),  (28)

optimal for multi-image registration? We show that pairwise MMI is suboptimal even though the individual transformations are chosen independently and uniformly from Π.

Theorem 5

There exist a channel and a prior such that pairwise MMI is suboptimal for multi-image registration.

Proof:

Let ℓ = 1, m = 3. Consider physically degraded images, where Y is the output of a channel W_1 with input X, and Z is the output of a channel W_2 with input Y. This is depicted in Fig. 4.

Naturally, the ML estimate of the transformations is obtained by registering image Y to X and subsequently registering Z to Y, instead of registering each of the images pairwise to X. That is, from (27),

 (π̂_{Y,ML}, π̂_{Z,ML}) = argmax_{(π_1,π_2)∈Π²} ∏_{i=1}^{n} W_1(Y_{π_1(i)} | X_i) W_2(Z_{π_2(i)} | Y_{π_1(i)}).

It is evident that the transformation of Z is estimated on the basis of the estimate for Y.

Let E_{X,Y} and E_{Y,Z} be the error exponents of registering Y to X and Z to Y under the physically-degraded channel. Since the ML estimate involves registration of Y to X and of Z to Y, the error exponent of ML is min{E_{X,Y}, E_{Y,Z}}.

Let E_{X,Z} be the error exponent of MMI registration of Z directly to X. Then, the error exponent of pairwise MMI is min{E_{X,Y}, E_{X,Z}}. We know MMI is asymptotically optimal for two-image registration and so