I Introduction
Suppose you have an unlabeled repository of MRI, CT, and PET scans of brain regions corresponding to different patients from different stages of the diagnostic process. You wish to sort them into clusters corresponding to individual patients and align the images within each cluster. In this work, we address this exact problem of joint clustering and registration of images using novel multivariate information functionals.
Image registration is the task of geometrically aligning two or more images of the same scene taken at different points in time, from different viewpoints, or by different imaging devices. It is a crucial step in most image processing tasks where a comparative study of the different images is required, such as medical diagnosis, target detection, image fusion, change detection, and multimodal image restoration. In such applications it is also essential to classify images of different scenes prior to registering images of the same kind. Thus, clustering images according to the scene is also critical to computer vision problems such as object tracking, face tagging, cryo-electron microscopy, and remote sensing.
Different digital images of the same scene can appear significantly different from each other; consider, e.g., images of a scene that are negatives of each other. Such factors make clustering and registering images harder. Further, metadata describing how the images were generated is often not available a priori. This emphasizes the need for universality, i.e., the design of reliable clustering and registration algorithms that work without specific knowledge of the priors or channel models that govern image generation.
Image clustering and registration have often been dealt with separately. However, it is easy to see that clustering registered images, and registering images within clusters, are both comparatively easier tasks. Here we emphasize that the two problems are not separate, and we define universal, reliable algorithms for joint clustering and registration.
I-A Prior Work
There is rich literature on clustering and registration; we give a non-exhaustive listing of relevant prior work.
Supervised learning for classification has recently gained prominence through deep convolutional neural networks and other machine learning algorithms. However, these methods require vast amounts of costly labeled training data. Thus, unsupervised image classification is of interest.
Unsupervised clustering of objects has been studied under numerous optimality and similarity criteria [2]. The k-means algorithm and its generalization to Bregman divergences [3] are popular distance-based methods. Popular techniques for unsupervised image clustering include affinity propagation [4, 5], independent component analysis [6], and orthogonal subspace projection [7]. We focus on information-based clustering algorithms [8, 9, 10], owing to the ubiquitous nature of information functionals in universal information processing. Universal clustering has also been studied in communication and crowdsourcing settings [11, 12].

Separate from clustering, multi-image registration has been studied extensively [13]. Prominent region-based registration methods include maximum likelihood (ML) [14], minimum KL divergence [15], correlation detection [16], and maximum mutual information (MMI) [17, 18]. Feature-based techniques have also been explored [19].
Lower bounds on the mean squared error of image registration in the presence of additive noise, via Ziv-Zakai and Cramér-Rao bounds, have been explored recently [20, 21]. The MMI decoder was originally developed for universal communication [22]. Deterministic reasons for its effectiveness in image registration have been identified [23], and its correctness has been established through information-theoretic arguments [24].
A problem closely related to image registration is multireference alignment, where the aim is to denoise a signal from noisy, circularly-translated versions of itself, under Gaussian or binary noise models [25, 26]. Versions of this problem have been considered for image denoising under Gaussian noise [27]. Unlike denoising, we consider the task of registration alone, but for a wider class of noise models in a universal setting.
I-B Our Contributions
While MMI has been found to perform well in numerous empirical studies, concrete theoretical guarantees are still lacking. In this work, we extend the framework of universal delay estimation [28] to derive universal asymptotic optimality guarantees for MMI in registering two images under the Hamming loss, under mild assumptions on the image models.

Even though the MMI method is universally asymptotically optimal for registering two images, we show that it is strictly suboptimal for multi-image registration. We define the max multi-information (MM) image registration algorithm, which uses the multi-information functional in place of pairwise MMI. We prove that the method is universal and asymptotically optimal using type counting arguments.
Then, we consider the task of joint clustering and registration. We define novel multivariate information functionals to characterize dependence in a collection of images. Under a variety of clustering criteria, we define algorithms using these functionals to perform joint clustering and registration and prove consistency of the methods.
Applications such as cryo-electron microscopy handle a large number of images of several molecular conformations, and the task of clustering and registration is critical there. With this motivation, we revisit joint clustering and registration under the constraint of limited image resolution. We define blockwise clustering and registration algorithms, and we show they are order-optimal in the scaling of resolution with the number of pixels in the system.
II Model
We now formulate the joint image clustering and registration problem and define the model of images we work with.
II-A Image and Noise
Consider a simple model of images, where each image is a collection of pixels drawn independently and identically from an unknown prior distribution defined on a finite space of pixel values. Since the pixels are i.i.d., we represent the original scene of the image by an n-dimensional random vector drawn according to the prior.

Consider a finite collection of distinct scenes drawn i.i.d. according to the prior. This set can be interpreted as a collection of different scenes, and each image may be viewed as a noisy depiction of an underlying scene. Each scene in the collection is chosen independently and identically according to the pmf, so each image corresponds to a depiction of one underlying scene.
We model images corresponding to this collection of scenes as noisy versions of the underlying scenes drawn as follows
(1) 
where is the inclusion-wise maximal subset such that for all . That is, images corresponding to the same scene are jointly corrupted by a discrete memoryless channel, while images corresponding to different scenes are independent conditioned on the scene. Here we assume . The system is depicted in Fig. 1.
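As a concrete illustration of this generative model, the following sketch draws a scene i.i.d. from a prior and produces two noisy depictions of it. The binary pixel alphabet, the symmetric channel, and the conditionally independent corruption of the two copies are all simplifying assumptions for illustration; the paper's model allows any finite alphabet and an unknown channel.

```python
import random

def draw_scene(n, prior, rng):
    """Draw an n-pixel scene i.i.d. from the prior over pixel values."""
    values, probs = zip(*sorted(prior.items()))
    return rng.choices(values, weights=probs, k=n)

def corrupt(scene, channel, rng):
    """Pass each pixel independently through a discrete memoryless channel."""
    out = []
    for x in scene:
        ys, ps = zip(*sorted(channel[x].items()))
        out.append(rng.choices(ys, weights=ps, k=1)[0])
    return out

rng = random.Random(0)
prior = {0: 0.5, 1: 0.5}                               # assumed binary alphabet
channel = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.1, 1: 0.9}}   # assumed BSC(0.1)
scene = draw_scene(1000, prior, rng)
image_a = corrupt(scene, channel, rng)                 # two noisy depictions of
image_b = corrupt(scene, channel, rng)                 # the same scene
```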
Consider any two images generated as above. Let be the conditional distribution that a pixel in the second image is , given the corresponding pixel in the first image is . Let
where are i.i.d. copies of pixel values generated according to the marginal distribution of the first image. Note that if , then with sufficient samples of the image, one can easily identify corresponding pixels in the copy. Hence to avoid the trivial case, we presume there exist finite positive constants such that, for any two images,
(2) 
II-B Image Transformations
Corrupted images are also subject to independent rigidbody transformations such as rotation and translation. Since images are vectors of length , transformations are represented by permutations of . Let be the set of all allowable transformations. We assume is known.
Let be the transformation of image . Then, the final image is , for all . Image transformed by is depicted interchangeably as .
We assume forms a commutative algebra over the composition operator . More specifically,

- for , ;
- there exists a unique s.t. , for all ;
- for any , there exists a unique inverse , s.t. .
The number of distinct rigid transformations of images with pixels on the lattice is polynomial in , i.e., for some [29].
Definition 1
A permutation cycle, , is a subset of permutation , such that , for all and .
It is clear from the pigeonhole principle that any permutation is composed of at least one permutation cycle. Let the number of permutation cycles of a permutation be .
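The cycle decomposition of a permutation, and hence the cycle count used above, can be computed directly. A minimal sketch, assuming a permutation is encoded as a list mapping index `i` to `perm[i]` (an illustrative encoding, not one fixed by the paper):

```python
def cycles(perm):
    """Decompose a permutation (list mapping i -> perm[i]) into its cycles."""
    seen, out = set(), []
    for start in range(len(perm)):
        if start in seen:
            continue
        cyc, i = [], start
        while i not in seen:
            seen.add(i)
            cyc.append(i)
            i = perm[i]
        out.append(cyc)
    return out

# 0 -> 2, 1 -> 0, 2 -> 1, 3 -> 3: one 3-cycle plus one fixed point,
# so this permutation has 2 permutation cycles
assert [len(c) for c in cycles([2, 0, 1, 3])] == [3, 1]
```

Note that the fixed points (cycles of length 1) form exactly the identity block of Definition 2.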
Definition 2
The identity block of a permutation is the inclusion-wise maximal subset such that , for all .
Definition 3
A permutation is simple if , .
Definition 4
Any two permutations are said to be nonoverlapping if for all .
Lemma 1
Let be chosen uniformly at random from the set of all permutations of . Then for any constants ,
(3) 
Proof:
First, we observe that the number of permutations that have an identity block of size at least is given by
Thus, from Stirling’s approximation,
Lengths and number of cycles in a random permutation may be analyzed as detailed in [30]. In particular, we note that for a random permutation , . Using Markov’s inequality, the result follows.
Following Lem. 1, we assume that for any , , i.e., the number of permutation cycles does not grow very fast. Further, let for any .
II-C Performance Metrics
We now introduce formal metrics to quantify performance of the joint clustering and registration algorithms.
Definition 5
A clustering of images is a partition of . The sets of a partition are referred to as clusters. The clustering is said to be correct if
Let be the set of all partitions of . For a given collection of scenes, we represent the correct clustering by .
Definition 6
A partition is finer than , , if . Similarly, a partition is denser than , , if .
Definition 7
The correct registration of an image transformed by is .
Definition 8
A universal clustering and registration algorithm is a sequence of functions that are designed in the absence of knowledge of , , and . Here the index corresponds to the number of pixels in each image.
We focus on the loss function to quantify performance.
Definition 9
Definition 10
Alg. is asymptotically consistent if , and is exponentially consistent if .
Definition 11
The error exponent of an algorithm is
(5) 
We use to denote when clear from context.
III Registration of two images
We first consider the problem of registering two images, i.e., , . Thus the problem reduces to registering an image obtained as a result of transforming the output of an equivalent discrete memoryless channel , given input image (reference) . The channel model is depicted in Fig. 2.
This problem has been well-studied in practice, and a common heuristic is the MMI method, defined as
(6) 
where the mutual information is that of the empirical distribution of pixel pairs, defined using the indicator function. Note that the MMI method is universal.
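A minimal sketch of the MMI rule in (6), under simplifying assumptions: binary images, cyclic shifts as the transformation set, and the plug-in mutual information of the empirical pixel-pair distribution as the functional.

```python
import math
import random
from collections import Counter

def empirical_mi(xs, ys):
    """Plug-in mutual information (bits) of the empirical pair distribution."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * math.log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

def mmi_register(ref, img, candidates):
    """MMI rule: the candidate shift maximizing empirical MI with the reference."""
    return max(candidates,
               key=lambda s: empirical_mi(ref, img[s:] + img[:s]))

rng = random.Random(1)
ref = [rng.randint(0, 1) for _ in range(4000)]
noisy = [x ^ (rng.random() < 0.1) for x in ref]   # BSC(0.1) corruption
observed = noisy[-5:] + noisy[:-5]                # unknown cyclic shift of 5
print(mmi_register(ref, observed, range(8)))      # should recover the shift, 5
```

At the correct shift the empirical MI concentrates near the channel's mutual information, while misaligned shifts pair nearly independent pixels, so the argmax is sharply peaked.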
Since transformations are chosen uniformly at random, the maximum likelihood (ML) estimate is Bayes optimal:
(7) 
We first show that the MMI method, and consequently the ML method, are exponentially consistent. We then show that the error exponent of MMI matches that of ML.
III-A Exponential Consistency
The empirical mutual information of i.i.d. samples is asymptotically exponentially consistent.
Theorem 1
Let
be random samples drawn i.i.d. according to the joint distribution
defined on the finite space . For fixed alphabet sizes, the ML estimates of entropy and mutual information are asymptotically exponentially consistent and satisfy
(8) 
(9) 
where , .
Proof:
The proof is given in [31, Lem. 7].
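The plug-in (ML) entropy estimate of Thm. 1 is easy to exercise numerically; the following sketch uses an arbitrary three-letter distribution chosen purely for illustration.

```python
import math
import random
from collections import Counter

def plugin_entropy(samples):
    """ML (plug-in) entropy estimate, in bits, from i.i.d. samples."""
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in Counter(samples).values())

rng = random.Random(2)
p = [0.5, 0.25, 0.25]
true_h = -sum(q * math.log2(q) for q in p)        # 1.5 bits
xs = rng.choices(range(3), weights=p, k=50_000)
est = plugin_entropy(xs)
assert abs(est - true_h) < 0.02                   # close for large n
```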
Theorem 2
MMI and ML are exponentially consistent.
Proof:
Let and let the correct registration be . Then,
(10)  
(11)  
(12) 
where (10), (11), and (12) follow from the union bound, the triangle inequality, and (9), respectively. Here is a constant.
Thus MMI is exponentially consistent as . Finally, and thus, the ML estimate is also exponentially consistent.
Thm. 2 implies there exists such that
III-B Whittle’s Law and Markov Types
We now summarize a few results on the number of types and Markov types which are eventually used to analyze the error exponent of image registration.
Consider a sequence . The empirical distribution of is the type of the sequence. Let be a dummy random variable, and let be the set of all sequences of length , of type . The number of possible types of sequences of length is polynomial in , i.e., [32]. The number of sequences of length , of type , is
From bounds on multinomial coefficients, the number of sequences of length and type is bounded as [32]
(13) 
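The bounds in (13) can be checked numerically against the exact multinomial count; the particular type and block length below are arbitrary illustrative choices.

```python
import math

def type_class_size(counts):
    """Exact number of length-n sequences with the given empirical counts."""
    n = sum(counts)
    size = math.factorial(n)
    for c in counts:
        size //= math.factorial(c)
    return size

def entropy_bits(counts):
    n = sum(counts)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)

counts = [6, 4, 2]               # the type (1/2, 1/3, 1/6) at block length n = 12
n, k = sum(counts), len(counts)
size = type_class_size(counts)                  # 12!/(6!4!2!) = 13860
upper = 2 ** (n * entropy_bits(counts))         # 2^{nH(P)}
lower = upper / (n + 1) ** k                    # (n+1)^{-|alphabet|} 2^{nH(P)}
assert lower <= size <= upper
```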
Consider a Markov chain defined on the space . Given a sequence of samples from , we can compute the matrix of transition counts, where corresponds to the number of transitions from state to state . By Whittle’s formula [33], the number of sequences with , with and , is
(14) 
where corresponds to the th cofactor of the matrix with
The first-order Markov type of a sequence is defined as the empirical distribution , given by
Here we assume that the sequence is cyclic with period , i.e., for any . Let . Then, from (14), the set of sequences of type , , satisfies
From the definition of , we can bound the trace of the cofactor matrix of as
Again using the bounds on multinomial coefficients, we have
(15) 
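The balance condition underlying Whittle's formula and the cyclic first-order Markov type above, namely that each state of a cyclic sequence is entered exactly as often as it is left, can be checked directly on the transition-count matrix. A small sketch with an arbitrary ternary sequence:

```python
from collections import Counter

def cyclic_transition_counts(seq, alphabet):
    """F[a][b] = number of transitions a -> b in the cyclic sequence seq."""
    f = {a: {b: 0 for b in alphabet} for a in alphabet}
    for i, a in enumerate(seq):
        f[a][seq[(i + 1) % len(seq)]] += 1
    return f

seq = [0, 1, 1, 0, 2, 0]
f = cyclic_transition_counts(seq, [0, 1, 2])
occupancy = Counter(seq)
for a in [0, 1, 2]:
    # each state is left exactly as many times as it occurs ...
    assert sum(f[a].values()) == occupancy[a]
    # ... and entered exactly as many times as it occurs
    assert sum(f[b][a] for b in [0, 1, 2]) == occupancy[a]
```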
The joint first-order Markov type of a pair of sequences , is the empirical distribution
Then, given , the set of conditional first-order Markov type sequences satisfies [28]
(16) 
Lemma 2
Let be any two nonoverlapping permutations and let be the corresponding permutations of . Let be a simple permutation. Then, for every , there exists , such that
Proof:
Since permutations are nonoverlapping, there is a bijection from to , where . Specifically, consider the permutation defined iteratively as , with . Then, for any , the sequence . Further, this map is invertible and so the sets are of equal size.
The result for conditional types follows mutatis mutandis.
Lem. 2 implies and satisfy (15) and (16) respectively. We now show that the result of Lem. 2 can be extended to any two permutations .
Lemma 3
Let . For any ,
Proof:
Let and . For let the length of permutation cycle of be for . Further, . Let be the identity block of and let . Then we have the decomposition
(17) 
for all . Here, is the first-order Markov type defined on the th permutation cycle of and is the zeroth-order Markov type corresponding to the identity block of .
From (17), we see that given a valid decomposition of types , the number of sequences can be computed as a product of the number of subsequences of each type, i.e.,
where the sum is over all valid decompositions in (17).
Additionally, from Lem. 2 we know the number of valid subsequences of each type. Let be the marginal corresponding to the firstorder Markov type .
Thus, we upper bound the size of the set as
(18)  
(19) 
where
the maximum taken over all valid decompositions in (17). Here, (18) follows from (13) and (15), and (19) follows since the total number of possible types is polynomial in the length of the sequence. Since ,
Now, let , for all , and . And let . Since , using Jensen’s inequality, we have
Thus,
To obtain the lower bound, we note that the total number of sequences is at least the number of sequences obtained from any single valid decomposition. Thus from (13) and (15),
Now, for large , consider for some . Any other cycle of smaller length contributes to the exponent due to Lem. 1. One valid decomposition of (17) is to have , for all . However, the lengths of the subsequences are different and may not be a valid type of the corresponding length. Nevertheless, for each , there exists a type such that , where is the total variation distance. Further, entropy is continuous in the distribution [34] and satisfies
This in turn indicates that
Similar decomposition follows for conditional types as well.
III-C Error Analysis
We are interested in the error exponent of MMIbased image registration, in comparison to ML. We first note that the error exponent of the problem is characterized by the pair of transformations that are the hardest to compare.
Define as the binary hypothesis testing problem corresponding to image registration when the allowed transformations are only . Let be the corresponding error probability and error exponent.
Lemma 4
Let be an asymptotically exponentially consistent estimator. Then,
(20) 
Proof:
Let be the estimate output by and be the correct registration. We first upper bound the error probability as
(21)  
where (21) follows from the union bound.
Additionally, we have
Finally, since , the result follows.
Thus, it suffices to consider the binary hypothesis tests to study the error exponent of image registration.
Theorem 3
Let . Then,
(22) 
Proof:
Probabilities of i.i.d. sequences are defined by their joint type. Thus, we have
(23) 
where the summation in (23) is over the set of all joint types of sequences of length and is the number of sequences of length with joint type such that the MMI algorithm makes a decision error.
If a sequence is in error, then all sequences in are in error as is drawn from an i.i.d. source. Now, we can decompose the number of sequences under error as follows
where the sum is taken over such that there is a decision error. The final line follows from the fact that given the joint firstorder Markov type, the size of the conditional type is independent of the exact sequence .
Finally, we have
(24) 
The result then follows from the forthcoming Lem. 5.
Lemma 5
(25) 
Proof:
Observe that for images with i.i.d. pixels, MMI is the same as minimizing the joint entropy, i.e.,
Theorem 4
(26) 
Thus, we can see that using MMI for image registration is not only universal, but also asymptotically optimal. We next study the problem of image registration for multiple images.
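The equivalence noted in the proof of Lem. 5, that maximizing empirical MI over transformations is the same as minimizing empirical joint entropy (a permutation leaves the empirical marginal of a sequence unchanged), can be sanity-checked numerically. A sketch, again assuming binary images and cyclic shifts:

```python
import math
import random
from collections import Counter

def entropy(cols):
    """Plug-in entropy (bits) of the empirical distribution of tuples."""
    n = len(cols[0])
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(zip(*cols)).values())

rng = random.Random(3)
x = [rng.randint(0, 1) for _ in range(2000)]
y = [v ^ (rng.random() < 0.2) for v in x]
shifts = list(range(8))
mi = {s: entropy([x]) + entropy([y[s:] + y[:s]]) - entropy([x, y[s:] + y[:s]])
      for s in shifts}
je = {s: entropy([x, y[s:] + y[:s]]) for s in shifts}
# a cyclic shift only permutes pixels, so entropy([shifted y]) == entropy([y]);
# hence the two criteria select exactly the same shift
assert max(mi, key=mi.get) == min(je, key=je.get)
```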
IV Multi-image Registration
Having universally registered two images, we now consider aligning multiple noisy copies of the same image. For simplicity, we consider aligning three images; the results extend directly to any finite number of images. Let be the source image and be the noisy, transformed versions to be aligned, as shown in Fig. 3.
Here, the ML estimates are
(27) 
IV-A MMI is not optimal
We know MMI is asymptotically optimal at aligning two images. Is pairwise MMI, i.e.
(28) 
optimal for multi-image registration? We show that pairwise MMI is suboptimal even though the individual transformations are chosen independently and uniformly from .
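To make the contrast concrete, here is a sketch of the max multi-information (MM) alternative of Section I-B, which scores a joint pair of shifts by the empirical multi-information (sum of marginal entropies minus the joint entropy) rather than by pairwise MMI. Binary images, cyclic shifts, the degraded-chain noise, and all function names are illustrative assumptions.

```python
import math
import random
from collections import Counter
from itertools import product

def entropy(cols):
    """Plug-in entropy (bits) of the empirical distribution of tuples."""
    n = len(cols[0])
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(zip(*cols)).values())

def shift(seq, s):
    return seq[s:] + seq[:s]

def mm_register(x1, x2, x3, candidates):
    """Jointly pick shifts for x2, x3 maximizing empirical multi-information."""
    def multi_info(s2, s3):
        a, b = shift(x2, s2), shift(x3, s3)
        return entropy([x1]) + entropy([a]) + entropy([b]) - entropy([x1, a, b])
    return max(product(candidates, candidates), key=lambda p: multi_info(*p))

rng = random.Random(4)
x1 = [rng.randint(0, 1) for _ in range(3000)]
noisy2 = [v ^ (rng.random() < 0.1) for v in x1]
noisy3 = [v ^ (rng.random() < 0.1) for v in noisy2]   # physically degraded chain
obs2 = noisy2[-2:] + noisy2[:-2]                      # unknown shift 2
obs3 = noisy3[-3:] + noisy3[:-3]                      # unknown shift 3
assert mm_register(x1, obs2, obs3, range(5)) == (2, 3)
```

Because the joint entropy term sees all three images at once, the MM score rewards the alignment of the degraded chain that pairwise comparisons against the reference alone would miss.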
Theorem 5
There exist a channel and a prior such that pairwise MMI is suboptimal for multi-image registration.
Proof:
Let . Consider physically degraded images obtained as outputs of the channel . This is depicted in Fig. 4.
Naturally, the ML estimate of the transformations is obtained by registering image to and subsequently registering to , instead of registering each of the images pairwise to . That is, from (27)
It is evident that is estimated based on the estimate of .
Let be the error exponent of for the physically-degraded channel. Since the ML estimate involves registration of to and of to , the error exponent is .
Let be the error exponent of MMI. Then the error exponent of pairwise MMI is . We know MMI is asymptotically optimal for two-image registration, and so