Towards Understanding Learning Representations: To What Extent Do Different Neural Networks Learn the Same Representation

10/28/2018 ∙ Liwei Wang et al. ∙ Peking University, Cornell University, Stanford University

It is widely believed that learning good representations is one of the main reasons for the success of deep neural networks. Although highly intuitive, there is a lack of theory and systematic approaches quantitatively characterizing what representations deep neural networks learn. In this work, we move a tiny step towards a theory and better understanding of the representations. Specifically, we study a simpler problem: How similar are the representations learned by two networks with identical architecture but trained from different initializations? We develop a rigorous theory based on the neuron activation subspace match model. The theory gives a complete characterization of the structure of neuron activation subspace matches, where the core concepts are maximum match and simple match, which describe the overall and the finest similarity between sets of neurons in two networks, respectively. We also propose efficient algorithms to find the maximum match and simple matches. Finally, we conduct extensive experiments using our algorithms. Experimental results suggest that, surprisingly, representations learned by the same convolutional layers of networks trained from different initializations are not as similar as prevalently expected, at least in terms of subspace match.


1 Introduction

It is widely believed that learning good representations is one of the main reasons for the success of deep neural networks (Krizhevsky et al., 2012; He et al., 2016). Taking CNNs as an example, filters, shared weights, pooling and the composition of layers are all designed to learn good representations of images. Although highly intuitive, it remains elusive what representations deep neural networks actually learn.

In this work, we move a tiny step towards a theory and a systematic approach that characterize the representations learned by deep nets. In particular, we consider a simpler problem: How similar are the representations learned by two networks with identical architecture but trained from different initializations? It is observed that training the same neural network from different random initializations frequently yields similar performance (Dauphin et al., 2014). A natural question arises: do the differently-initialized networks learn similar representations as well, or do they learn totally distinct representations, for example describing the same object from different views? Moreover, what is the granularity of similarity: do the representations exhibit similarity in a local manner, i.e., a single neuron is similar to a single neuron in another network, or in a distributed manner, i.e., neurons aggregate into clusters that collectively exhibit similarity? These questions are central to the understanding of the representations learned by deep neural networks, and may shed light on the long-standing debate about whether network representations are local or distributed.

Li et al. (2016) studied these questions from an empirical perspective. Their approach breaks down the concept of similarity into one-to-one mappings, one-to-many mappings and many-to-many mappings, and probes each kind of mapping by ad-hoc techniques. Specifically, they applied linear correlation and mutual information analysis to study one-to-one mappings, and found that some core representations are shared by differently-initialized networks, but some rare ones are not; they applied a sparse weighted LASSO model to study one-to-many mappings and found that the whole correspondence can be decoupled into a series of correspondences between smaller neuron clusters; and finally they applied a spectral clustering algorithm to find many-to-many mappings.

Although Li et al. (2016) provide interesting insights, their approach is somewhat heuristic, especially for one-to-many mappings and many-to-many mappings. We argue that a systematic investigation may deliver a much more thorough comprehension. To this end, we develop a rigorous theory to study these questions. We begin by modeling the similarity between neurons as matches between the subspaces spanned by the activation vectors of neurons. The activation vector (Raghu et al., 2017) records the neuron's responses over a finite set of inputs, acting as the representation of a single neuron. (Li et al. (2016) also implicitly used the activation vector as the neuron's representation.) Compared with other possible representations such as the weight vector, the activation vector characterizes the essence of the neuron as an input-output function, and takes the input distribution into consideration. Further, a neuron cluster is represented by the subspace spanned by the activation vectors of the neurons in the cluster. The subspace representation derives from the fact that activations of neurons are followed by affine transformations; two neuron clusters whose activations differ up to an affine transformation are essentially learning the same representation.

In order to develop a thorough understanding of the similarity between clusters of neurons, we give a complete characterization of the structure of the neuron activation subspace matches. We show the unique existence of the maximum match, and we prove the Decomposition Theorem: every match can be decomposed as the union of a set of simple matches, where simple matches are those which cannot be decomposed any more. The maximum match characterizes the whole similarity, while simple matches represent minimal units of similarity, collectively giving a complete characterization. Furthermore, we investigate how to characterize these simple matches so that we can develop efficient algorithms for finding them.

Finally, we conduct extensive experiments using our algorithms. We analyze the size of the maximum match and the distribution of the sizes of simple matches. It turns out that, contrary to prevalent expectation, representations learned by almost all convolutional layers exhibit very low similarity in terms of matches. We argue that this observation indicates that the current understanding of learning representations is limited.

Our contributions are summarized as follows.

  1. We develop a theory based on the neuron activation subspace match model to study the similarity between representations learned by two networks with identical architecture but trained from different initializations. We give a complete analysis for the structure of matches.

  2. We propose efficient algorithms for finding the maximum match and the simple matches, which are the central concepts in our theory.

  3. Experimental results demonstrate that representations learned by most convolutional layers exhibit low similarity in terms of subspace match.

The rest of the paper is organized as follows. In Section 2 we formally describe the neuron activation subspace match model. Section 3 presents our theory of neuron activation subspace match. Based on the theory, we propose algorithms in Section 4. In Section 5 we show experimental results and analyze them. Finally, Section 6 concludes. Due to limited space, all proofs are given in the supplementary material.

2 Preliminaries

In this section, we will formally describe the neuron activation subspace match model that will be analyzed throughout this paper. Let X and Y be the sets of neurons in the same layer (in this paper we focus on neurons of the same layer, but the method applies to an arbitrary set of neurons) of two networks with identical architecture but trained from different initializations. Suppose the networks are given input data s_1, …, s_d. For a neuron x ∈ X, let its output over input s_k be x(s_k). The representation of a neuron is measured by the activation vector (Raghu et al., 2017) of the neuron over the inputs, v_x = (x(s_1), …, x(s_d)) ∈ ℝ^d. For any subset X' ⊆ X, we denote the vector set {v_x : x ∈ X'} by v_{X'} for short. The representation of a subset of neurons is measured by the subspace spanned by the activation vectors of the neurons therein, span(v_{X'}). Similarly for Y' ⊆ Y. In particular, the representation of an empty subset is span(∅) = {0}, where 0 is the zero vector in ℝ^d.

The reason why we adopt the neuron activation subspace as the representation of a subset of neurons is that activations of neurons are followed by affine transformations. For any neuron u in the layer following X, its pre-activation over the inputs is Σ_{x∈X'} w_x v_x + b·1, where the w_x and b are parameters. Similarly for a neuron u' in the layer following Y. If span(v_{X'}) = span(v_{Y'}), then for any choice of parameters over X' there exists a choice of parameters over Y' producing the same pre-activation, and vice versa. Essentially, u and u' receive the same information from either X' or Y'.
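This invariance can be checked numerically. The following numpy sketch (our illustration, not code from the paper) builds a second neuron cluster whose activation vectors are an invertible linear combination of the first cluster's and verifies that the two clusters span the same subspace, so any downstream affine unit reading one cluster can be reproduced from the other:

```python
import numpy as np

rng = np.random.default_rng(0)

# Activation vectors of a cluster of 3 neurons over d = 50 inputs
# (rows are neurons, columns are inputs).
d = 50
V_X = rng.standard_normal((3, d))

# A second cluster whose activations are an invertible linear
# combination of the first cluster's: same span, different neurons.
A = rng.standard_normal((3, 3))
V_Y = A @ V_X

def subspace(V):
    """Orthonormal basis of the row span of V."""
    q, _ = np.linalg.qr(V.T)
    return q[:, :np.linalg.matrix_rank(V)]

Q_X, Q_Y = subspace(V_X), subspace(V_Y)

# Equal projection matrices mean span(V_X) == span(V_Y).
same_span = np.allclose(Q_X @ Q_X.T, Q_Y @ Q_Y.T, atol=1e-8)
print(same_span)  # True
```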

We now give the formal definition of a match.

Definition 1 (ε-approximate match and exact match).

Let X' ⊆ X and Y' ⊆ Y be two subsets of neurons. For ε ≥ 0, we say (X', Y') forms an ε-approximate match in (X, Y), if

  1. ∀x ∈ X', d(v_x, span(v_{Y'})) ≤ ε;

  2. ∀y ∈ Y', d(v_y, span(v_{X'})) ≤ ε.

Here we use the normalized distance: for any vector v and any subspace S, d(v, S) = min_{u ∈ S} ‖v − u‖ / ‖v‖. We call a 0-approximate match an exact match. Equivalently, (X', Y') is an exact match if span(v_{X'}) = span(v_{Y'}).
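The distance to a span can be computed by least squares. The sketch below is our own illustration, assuming the normalized distance d(v, S) = min_{u∈S} ‖v − u‖/‖v‖; the helper names are ours:

```python
import numpy as np

def dist(v, V):
    """Normalized distance from vector v to the span of the rows of V.
    An empty V means the subspace {0}, at distance 1 from any v != 0."""
    if V.shape[0] == 0:
        return 1.0
    # Least-squares projection of v onto the row span of V.
    coef, *_ = np.linalg.lstsq(V.T, v, rcond=None)
    residual = v - V.T @ coef
    return np.linalg.norm(residual) / np.linalg.norm(v)

def is_match(V_X, V_Y, eps):
    """(X', Y') is an eps-approximate match iff every activation vector
    on one side is within eps of the span of the other side."""
    return (all(dist(x, V_Y) <= eps for x in V_X) and
            all(dist(y, V_X) <= eps for y in V_Y))

rng = np.random.default_rng(1)
V = rng.standard_normal((2, 30))
R = rng.standard_normal((2, 2)) @ V           # same span as V
assert is_match(V, R, eps=1e-6)               # exact match, up to rounding
assert not is_match(V, rng.standard_normal((2, 30)), eps=0.1)
```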

3 A Theory of Neuron Activation Subspace Match

In this section, we will develop a theory which gives a complete characterization of the neuron activation subspace match problem. For the two sets of neurons X and Y from the two networks, we show the structure of all the matches in (X, Y). It turns out that every match can be decomposed as a union of simple matches, where a simple match is an atomic match that cannot be decomposed any further.

The simple match is the most important concept in our theory. If there are many one-to-one simple matches (i.e., |X'| = |Y'| = 1), it implies that the two networks learn very similar representations at the neuron level. On the other hand, if all the simple matches have very large size (i.e., |X'| and |Y'| are both large), it is reasonable to say that the two networks learn different representations, at least in the details.

We will give a mathematical characterization of the simple matches. This allows us to design efficient algorithms for finding the simple matches (Section 4). The structures of exact and approximate matches are somewhat different. In Section 3.1, we present the simpler case of exact match, and in Section 3.2, we describe the more general ε-approximate match. Unless explicitly stated otherwise, when we say match, we mean ε-approximate match.

We begin with a lemma stating that matches are closed under union.

Lemma 2 (Union-Close Lemma).

Let (X_1, Y_1) and (X_2, Y_2) be two ε-approximate matches in (X, Y). Then (X_1 ∪ X_2, Y_1 ∪ Y_2) is still an ε-approximate match.

The fact that matches are closed under union implies that there exists a unique maximum match.

Definition 3 (Maximum Match).

A match (X*, Y*) in (X, Y) is the maximum match if every match (X', Y') in (X, Y) satisfies X' ⊆ X* and Y' ⊆ Y*.

The maximum match is simply the union of all matches. In Section 4 we will develop an efficient algorithm that finds the maximum match.

Now we are ready to give a complete characterization of all the matches. First, we point out that there can be exponentially many matches. Fortunately, every match can be represented as the union of some simple matches, defined below. The number of simple matches is polynomial in the setting of exact match, given that v_X and v_Y are both linearly independent, and under certain conditions for approximate match as well.

Definition 4 (Simple Match).

A match (X', Y') in (X, Y) is a simple match if (X', Y') is non-empty and there exist no matches (X_1, Y_1) ≠ (X', Y') and (X_2, Y_2) ≠ (X', Y') in (X, Y) such that

  1. X_1 ∪ X_2 = X';

  2. Y_1 ∪ Y_2 = Y'.

With the concept of the simple matches, we will show the Decomposition Theorem: every match can be decomposed as the union of a set of simple matches. Consequently, simple matches fully characterize the structure of matches.

Theorem 5 (Decomposition Theorem).

Every match (X', Y') in (X, Y) can be expressed as a union of simple matches. Formally, there are simple matches (X_1, Y_1), …, (X_k, Y_k) satisfying X' = X_1 ∪ ⋯ ∪ X_k and Y' = Y_1 ∪ ⋯ ∪ Y_k.

3.1 Structure of Exact Matches

The main goal of this and the next subsection is to understand the simple matches. The definition of a simple match only tells us that it cannot be decomposed. But how do we find the simple matches? How many simple matches exist? We will answer these questions by giving a characterization of the simple matches. Here we consider the setting of exact match, which has a much simpler structure than approximate match.

An important property for exact match is that matches are closed under intersection.

Lemma 6 (Intersection-Close Lemma).

Assume v_X and v_Y are both linearly independent. Let (X_1, Y_1) and (X_2, Y_2) be exact matches in (X, Y). Then (X_1 ∩ X_2, Y_1 ∩ Y_2) is still an exact match.

It turns out that in the setting of exact match, simple matches can be explicitly characterized by the z-minimum match defined below.

Definition 7 (z-Minimum Match).

Given a neuron z ∈ X ∪ Y, we define the z-minimum match to be the exact match (X_z, Y_z) in (X, Y) satisfying the following properties:

  1. z ∈ X_z ∪ Y_z;

  2. any exact match (X', Y') in (X, Y) with z ∈ X' ∪ Y' satisfies X_z ⊆ X' and Y_z ⊆ Y'.

Every neuron z in the maximum match has a unique z-minimum match, which is the intersection of all matches that contain z. For a neuron z not in the maximum match, there is no z-minimum match because there is no match containing z.

The following theorem states that the simple matches are exactly the z-minimum matches.

Theorem 8.

Assume v_X and v_Y are both linearly independent. Let (X*, Y*) be the maximum (exact) match in (X, Y). For every z ∈ X* ∪ Y*, the z-minimum match is a simple match, and every simple match is the z-minimum match for some neuron z ∈ X* ∪ Y*.

Theorem 8 implies that the number of simple exact matches is at most linear in the number of neurons, given that the activation vectors are linearly independent, because the z-minimum match for each neuron is unique. We will give a polynomial-time algorithm in Section 4 to find all the z-minimum matches.

3.2 Structure of Approximate Matches

The structure of ε-approximate match is more complicated than that of exact match. A major difference is that in the setting of approximate matches, the intersection of two matches is not necessarily a match. As a consequence, there is no z-minimum match in general. Instead, we have the z-minimal match.

Definition 9 (z-Minimal Match).

Given a neuron z ∈ X ∪ Y, the z-minimal matches are the matches (X', Y') in (X, Y) with the following properties:

  1. z ∈ X' ∪ Y';

  2. if a match (X'', Y'') with X'' ⊆ X' and Y'' ⊆ Y' satisfies z ∈ X'' ∪ Y'', then (X'', Y'') = (X', Y').

Different from the setting of exact match, where the z-minimum match is unique for a neuron z, there may be multiple z-minimal matches for z in the setting of approximate match, and in this setting simple matches can be characterized by z-minimal matches instead. Again, for any neuron z not in the maximum match (X*, Y*), there is no z-minimal match, because no match contains z.

Theorem 10.

Let (X*, Y*) be the maximum match in (X, Y). For every z ∈ X* ∪ Y*, every z-minimal match is a simple match, and every simple match is a z-minimal match for some z ∈ X* ∪ Y*.

Remark 1.

We use the notion of z-minimal match for z ∈ X ∪ Y. That is, the neuron z can be in either network. We emphasize that this is necessary: restricting z ∈ X (or z ∈ Y) does not yield Theorem 10 anymore. In other words, the z-minimal matches for z ∈ X do not represent all simple matches. See Remark A.1 in the Supplementary Material for details.

Remark 2.

One may have the impression that the structure of matches is very simple. This is not exactly the case. Here we point out the complicated aspects:

  1. Matches are not closed under the difference operation, even for exact matches. More generally, let (X_1, Y_1) and (X_2, Y_2) be two matches with X_2 ⊆ X_1 and Y_2 ⊆ Y_1. Then (X_1 \ X_2, Y_1 \ Y_2) is not necessarily a match.

  2. The decomposition of a match into the union of simple matches is not necessarily unique. See Section C in the Supplementary Material for details.

4 Algorithms

In this section, we will give an efficient algorithm that finds the maximum match. Based on this algorithm, we further give an algorithm that finds all the simple matches, which are precisely the z-minimum/minimal matches, as shown in the previous section. The algorithm for finding the maximum match is given in Algorithm 1. Initially, we guess the maximum match to be (X, Y). If there is x ∈ X such that d(v_x, span(v_Y)) > ε, then we remove x from X. Similarly, if d(v_y, span(v_X)) > ε for some y ∈ Y, i.e., v_y cannot be linearly expressed by v_X within error ε, then we remove y from Y. X and Y are repeatedly updated in this way until no such neuron can be found.

1: X' ← X
2: Y' ← Y
3: while changed do ▷ changed is initially true
4:     changed ← false
5:     for x ∈ X' do
6:         if d(v_x, span(v_{Y'})) > ε then
7:             X' ← X' \ {x}
8:             changed ← true
9:     if changed = false then
10:         ▷ X' is stable; now prune Y'
11:         for y ∈ Y' do
12:             if d(v_y, span(v_{X'})) > ε then
13:                 Y' ← Y' \ {y}
14:                 changed ← true
15: return (X', Y')
Algorithm 1
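The pruning loop described above can be sketched in Python as follows (our own reconstruction; the `dist` helper and the dictionary-based interface are our conventions, not the paper's):

```python
import numpy as np

def dist(v, V):
    # Normalized distance from v to the row span of V ({0} if V is empty).
    if len(V) == 0:
        return 1.0
    M = np.stack(V)
    coef, *_ = np.linalg.lstsq(M.T, v, rcond=None)
    return np.linalg.norm(v - M.T @ coef) / np.linalg.norm(v)

def maximum_match(vec_x, vec_y, eps):
    """vec_x, vec_y: dicts mapping neuron id -> activation vector.
    Repeatedly delete neurons too far from the other side's span."""
    X, Y = set(vec_x), set(vec_y)
    changed = True
    while changed:
        changed = False
        for x in list(X):
            if dist(vec_x[x], [vec_y[y] for y in Y]) > eps:
                X.remove(x); changed = True
        for y in list(Y):
            if dist(vec_y[y], [vec_x[x] for x in X]) > eps:
                Y.remove(y); changed = True
    return X, Y

# Tiny example: x0 and y0 span the same line; x1 and y1 are unrelated.
e = np.eye(4)
vx = {'x0': e[0], 'x1': e[1]}
vy = {'y0': 2 * e[0], 'y1': e[2]}
X, Y = maximum_match(vx, vy, eps=0.01)
print(X, Y)  # {'x0'} {'y0'}
```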
Theorem 11.

Algorithm 1 outputs the maximum match and runs in polynomial time.

Our next algorithm (Algorithm 2) outputs, for a given neuron z, the z-minimum match (for exact match, given linearly independent activation vectors) or one z-minimal match (for approximate match). The algorithm starts with (X', Y') being the maximum match and iteratively finds a smaller match containing z, until further reducing the size of (X', Y') would have to violate z ∈ X' ∪ Y'.

1: (X', Y') ← the maximum match in (X, Y), computed by Algorithm 1
2: if z ∉ X' ∪ Y' then
3:     return "failure"
4: while there exists an unchecked neuron u ∈ (X' ∪ Y') \ {z} do
5:     Pick an unchecked u and mark it as checked
6:     if u ∈ X' then
7:         (A, B) ← (X' \ {u}, Y')
8:     else
9:         (A, B) ← (X', Y' \ {u})
10:     (X'', Y'') ← the maximum match in (A, B)
11:     if z ∈ X'' ∪ Y'' then
12:         (X', Y') ← (X'', Y'')
13: return (X', Y')
Algorithm 2
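A Python sketch of this shrinking procedure (our own reconstruction; the helper names and the order in which candidate neurons are tried are our choices, not the paper's):

```python
import numpy as np

def dist(v, V):
    # Normalized distance from v to the row span of V ({0} if V is empty).
    if len(V) == 0:
        return 1.0
    M = np.stack(V).T
    coef, *_ = np.linalg.lstsq(M, v, rcond=None)
    return np.linalg.norm(v - M @ coef) / np.linalg.norm(v)

def maximum_match(vx, vy, eps, X=None, Y=None):
    X = set(vx) if X is None else set(X)
    Y = set(vy) if Y is None else set(Y)
    changed = True
    while changed:
        changed = False
        for x in list(X):
            if dist(vx[x], [vy[y] for y in Y]) > eps:
                X.remove(x); changed = True
        for y in list(Y):
            if dist(vy[y], [vx[x] for x in X]) > eps:
                Y.remove(y); changed = True
    return X, Y

def minimal_match(z, vx, vy, eps):
    """Shrink the maximum match while keeping neuron z inside it."""
    X, Y = maximum_match(vx, vy, eps)
    if z not in X | Y:
        return None  # "failure": no match contains z
    for u in list(X | Y):
        if u == z:
            continue
        A, B = (X - {u}, Y) if u in X else (X, Y - {u})
        X2, Y2 = maximum_match(vx, vy, eps, A, B)
        if z in X2 | Y2:  # accept the smaller match only if z survives
            X, Y = X2, Y2
    return X, Y

# Two pairwise-aligned pairs: (x0, y0) and (x1, y1).
e = np.eye(5)
vx = {'x0': e[0], 'x1': e[1]}
vy = {'y0': 3 * e[0], 'y1': -e[1]}
result = minimal_match('x0', vx, vy, eps=0.01)
print(result)  # ({'x0'}, {'y0'})
```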
Theorem 12.

Algorithm 2 outputs one z-minimal match for the given neuron z. If ε = 0 (exact match), the algorithm outputs the unique z-minimum match, provided v_X and v_Y are both linearly independent. Moreover, the algorithm always runs in polynomial time.

Finally, we show an algorithm (Algorithm 3) that finds all the z-minimal matches in time O(poly(n) · n^{m_z}). Here, n is the size of the input and m_z is the number of z-minimal matches for neuron z. Note that in the setting of ε = 0 (exact match) with v_X and v_Y being both linearly independent, we have m_z = 1, so Algorithm 3 runs in polynomial time in this case.

Algorithm 3 finds all the z-minimal matches one by one by calling Algorithm 2 in each iteration. To make sure that we never find the same z-minimal match twice, we always delete one neuron from every previously-found z-minimal match before we start to find the next one.

1: M ← ∅
2: done ← false
3: while not done do
4:     done ← true
5:     Let S be the set of tuples (s_1, …, s_{|M|}) with s_i ∈ (X_i ∪ Y_i) \ {z} for each (X_i, Y_i) ∈ M
6:     while done and there exists an unchecked tuple s ∈ S do
7:         Pick the next unchecked s and mark it as checked
8:         (A, B) ← (X, Y)
9:         for i = 1, …, |M| do
10:             if s_i ∈ X then
11:                 A ← A \ {s_i}
12:             else
13:                 B ← B \ {s_i}
14:         if Algorithm 2 run on (A, B) for z doesn't return "failure" then
15:             (X', Y') ← the match returned by Algorithm 2
16:             M ← M ∪ {(X', Y')}
17:             done ← false
18: return M
Algorithm 3
Theorem 13.

Algorithm 3 outputs all the different z-minimal matches in time O(poly(n) · n^{m_z}). With Algorithm 3, we can find all the simple matches by exploring all z ∈ X* ∪ Y*, based on Theorem 10.

In the worst case, Algorithm 3 does not run in polynomial time, as m_z is not upper bounded by a constant in general. However, under assumptions we call strong linear independence and stability, we show that Algorithm 3 runs in polynomial time. Specifically, we say v_X satisfies θ-strong linear independence for θ ∈ (0, π/2] if for any two non-empty disjoint subsets X_1, X_2 ⊆ X, the angle between span(v_{X_1}) and span(v_{X_2}) is at least θ. Here, the angle between two subspaces is defined to be the minimum angle between non-zero vectors in the two subspaces. We define θ-strong linear independence for v_Y similarly. We further require a stability condition relating ε and θ: we say v_X and v_Y satisfy (ε, θ)-stability when ε is sufficiently small relative to sin θ. We prove the following theorem.

Theorem 14.

Suppose there exist ε ≥ 0 and θ > 0 such that v_X and v_Y both satisfy θ-strong linear independence and (ε, θ)-stability. Then m_z = 1 for every neuron z in the maximum match. As a consequence, Algorithm 3 finds all the z-minimal matches in polynomial time, and we can find all the simple matches in polynomial time by exploring all z ∈ X* ∪ Y*, based on Theorem 10.

5 Experiments

We conduct experiments on the VGG (Simonyan and Zisserman, 2014) and ResNet (He et al., 2016) architectures on the datasets CIFAR10 (Krizhevsky et al.) and ImageNet (Deng et al., 2009). We investigate multiple networks initialized with different random seeds, all of which achieve reasonable accuracies. Unless otherwise noted, we focus on the neurons activated by ReLU.

The activation vector mentioned in Section 2 is defined as the activations of one neuron over the validation set. For a fully connected layer, v_x ∈ ℝ^N, where N is the number of images. For a convolutional layer, the activation of one neuron x, given an image s, is an h × w feature map. We vectorize the feature map, and thus v_x ∈ ℝ^{N·h·w}.
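The vectorization is a plain reshape. A small numpy illustration (the shapes below are hypothetical, chosen only for the example):

```python
import numpy as np

# Activations of one convolutional channel ("neuron") over N images:
# shape (N, h, w), i.e. one h x w feature map per image.
N, h, w = 100, 8, 8
feature_maps = np.random.default_rng(2).standard_normal((N, h, w))

# Flatten each feature map and concatenate over images, giving a single
# activation vector of length N * h * w for this neuron.
activation_vector = feature_maps.reshape(-1)
print(activation_vector.shape)  # (6400,)
```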

5.1 Maximum Match

We introduce the maximum matching similarity to measure the overall similarity between sets of neurons. Given two sets of neurons X and Y, Algorithm 1 outputs the maximum match (X*, Y*). The maximum matching similarity under ε is defined as s(X, Y; ε) = (|X*| + |Y*|) / (|X| + |Y|).
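Assuming the similarity is the fraction of neurons participating in the maximum match (our reading of the definition), it can be computed as:

```python
def max_matching_similarity(X, Y, X_star, Y_star):
    """Fraction of neurons that participate in the maximum match
    (X_star, Y_star) of the neuron sets (X, Y)."""
    return (len(X_star) + len(Y_star)) / (len(X) + len(Y))

# E.g. if 3 of 8 neurons on each side are in the maximum match:
s = max_matching_similarity(range(8), range(8), range(3), range(3))
print(s)  # 0.375
```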

Here we study only neurons in the same layer of two networks with the same architecture but initialized with different seeds. For a convolutional layer, we randomly sample spatial positions from the outputs to form activation vectors, repeat this several times, and average the resulting maximum matching similarities.

Different Architectures and Datasets We examine several architectures on different datasets. For each experiment, five differently initialized networks are trained, and the maximum matching similarity is averaged over all pairs of networks for a given ε. The similarity values show little variance among different pairs, which indicates that this metric reveals a general property of network pairs. The details of the network structures and validation accuracies are listed in Supplementary Section E.2.

(a) CIFAR10-ResNet18
(b) ImageNet-VGG16
(c) CIFAR10-ResNet34
Figure 1: Maximum matching similarities of different architectures on different datasets under various ε. The x-axis follows the layers along the forward propagation. (a) ResNet18 on the CIFAR10 validation set; other classical architectures such as VGG are left to the supplementary material; (b) VGG16 on the ImageNet validation set; (c) a deeper ResNet on CIFAR10.

Figure 1 shows the maximum matching similarities of all the layers of different architectures under various ε. From these results, we make the following conclusions:

  1. For most of the convolutional layers, the maximum matching similarity is very low. For deep neural networks, the similarity is almost zero. This is surprising, as it is widely believed that convolutional layers are trained to extract specific patterns. However, this observation shows that different CNNs (with the same architecture) may learn different intermediate patterns.

  2. Although layers close to the output sometimes exhibit high similarity, this is a simple consequence of their alignment with the output: first, the output vectors of the two networks must be well aligned because both achieve high accuracy; second, the layers before the output must be similar, because otherwise, after a linear transformation, the output vectors would not be similar. Note that in Figure 1(b) the layers close to the output do not show similarity; this is because in this experiment the accuracy is relatively low. (See also in the supplementary material that, for a trained and an untrained network, which have very different accuracies, the layers close to the output do not have much similarity.)

  3. There is also relatively high similarity in layers close to the input. Again, this is a consequence of their alignment with the same input data, as well as the low-dimensional nature of the low-level layers. More concretely, the fact that each low-level filter contains only a few parameters results in a low-dimensional space after the transformation, and it is much easier to have high similarity in a low-dimensional space than in a high-dimensional one.

5.2 Simple Match

The maximum match illustrates the overall similarity but does not provide information about the relations between specific neurons. Here we analyze the distribution of the sizes of simple matches to reveal the finer structure of a layer. Given ε and two sets of neurons X and Y, Algorithm 3 outputs all the simple matches.

For a more efficient implementation, given ε, we run the randomized Algorithm 2 over each z ∈ X* ∪ Y* for several iterations to obtain z-minimal matches. The final result is the collection of all the z-minimal matches found (with duplicates removed), which we use to estimate the distribution.

Figure 2 shows the distribution of the size of simple matches on layers close to input or output respectively. We make the following observations:

  1. While the layers close to the output are similar overall, they do not seem to show similarity in a local manner: there are very few simple matches of small size. This is further evidence that the overall similarity results from alignment with the output rather than from intrinsically similar representations.

  2. The layers close to the input show even lower similarity in the finer structure. Again, there are few simple matches of small size.

In sum, almost no single neuron (or small set of neurons) learns representations similar to its counterpart's, even in layers close to the input or the output.

(a) Layer close to input
(b) Layer close to output
Figure 2: The distributions of the sizes of minimal matches for layers close to the input and the output, respectively

6 Conclusion

In this paper, we investigate the similarity between representations learned by two networks with identical architecture but trained from different initializations. We develop a rigorous theory and propose efficient algorithms. Finally, we apply the algorithms in experiments and find that representations learned by convolutional layers are not as similar as prevalently expected.

This raises important questions. Does our result imply that the two networks learn completely different representations, or is subspace match simply not a good metric for measuring the similarity of representations? If the former is true, we need to rethink not only representation learning but also the interpretability of deep learning: if every initialization leads to a different representation, how can we interpret the network? If, on the other hand, subspace match is not a good metric, then what is the right metric for the similarity of representations? We believe this is a fundamental problem for deep learning and worth systematic and in-depth study.

7 Acknowledgement

This work is supported by National Basic Research Program of China (973 Program) (grant no. 2015CB352502), NSFC (61573026) and BJNSF (L172037) and a grant from Microsoft Research Asia.

References

  • Dauphin et al. [2014] Yann N Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In Advances in neural information processing systems, pages 2933–2941, 2014.
  • Deng et al. [2009] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09, 2009.
  • He et al. [2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
  • [4] Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. Cifar-10 (canadian institute for advanced research). URL http://www.cs.toronto.edu/~kriz/cifar.html.
  • Krizhevsky et al. [2012] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
  • Li et al. [2016] Yixuan Li, Jason Yosinski, Jeff Clune, Hod Lipson, and John Hopcroft. Convergent learning: Do different neural networks learn the same representations? In International Conference on Learning Representation (ICLR ’16), 2016.
  • Raghu et al. [2017] Maithra Raghu, Justin Gilmer, Jason Yosinski, and Jascha Sohl-Dickstein. Svcca: Singular vector canonical correlation analysis for deep learning dynamics and interpretability. In Advances in Neural Information Processing Systems, pages 6078–6087, 2017.
  • Simonyan and Zisserman [2014] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Appendix A Omitted Proofs in Section 3

Lemma 2 (Union-Close Lemma).

Let (X_1, Y_1) and (X_2, Y_2) be two ε-approximate matches in (X, Y). Then (X_1 ∪ X_2, Y_1 ∪ Y_2) is still an ε-approximate match.

Proof.

The lemma follows immediately from the definition of ε-approximate match: taking unions can only enlarge the spanned subspaces, so every distance condition is preserved. ∎

Theorem 5 (Decomposition Theorem).

Every match (X', Y') in (X, Y) can be expressed as a union of simple matches. Formally, there are simple matches (X_1, Y_1), …, (X_k, Y_k) satisfying X' = X_1 ∪ ⋯ ∪ X_k and Y' = Y_1 ∪ ⋯ ∪ Y_k.

Proof.

We prove by induction on the size of the match, |X'| + |Y'|. When |X'| + |Y'| is the smallest among all non-empty matches, (X', Y') is itself a simple match, so the theorem holds. For larger |X'| + |Y'|, (X', Y') may not be a simple match, and in this case (X', Y') is the union of smaller matches (X_1, Y_1) and (X_2, Y_2): X' = X_1 ∪ X_2 and Y' = Y_1 ∪ Y_2. Thus, by the induction hypothesis that every (X_i, Y_i) is a union of simple matches, (X', Y') is a union of simple matches. ∎

Lemma 6 (Intersection-Close Lemma).

Assume v_X and v_Y are both linearly independent. Let (X_1, Y_1) and (X_2, Y_2) be exact matches in (X, Y). Then (X_1 ∩ X_2, Y_1 ∩ Y_2) is still an exact match.

Lemma 6 is a direct corollary of the following claim.

Claim 15.

Assume v_{X_1 ∪ X_2} is linearly independent. Then span(v_{X_1}) ∩ span(v_{X_2}) = span(v_{X_1 ∩ X_2}).

Proof.

The ⊇ direction is obvious. To show ⊆, consider a vector u ∈ span(v_{X_1}) ∩ span(v_{X_2}). Note that v_{X_1 ∪ X_2} is linearly independent, so there exists a unique coefficient a_x for each x ∈ X_1 ∪ X_2 such that u = Σ_{x ∈ X_1 ∪ X_2} a_x v_x. The uniqueness of the a_x and the fact that u ∈ span(v_{X_1}) show that a_x = 0 for x ∉ X_1. Similarly, a_x = 0 for x ∉ X_2. Therefore, a_x ≠ 0 only when x ∈ X_1 ∩ X_2, so u ∈ span(v_{X_1 ∩ X_2}). ∎

Theorem 8.

Assume v_X and v_Y are both linearly independent. Let (X*, Y*) be the maximum (exact) match in (X, Y). For every z ∈ X* ∪ Y*, the z-minimum match is a simple match, and every simple match is the z-minimum match for some neuron z ∈ X* ∪ Y*.

Proof.

We show that under the assumption of Theorem 8 the concept of z-minimum match and the concept of z-minimal match (Definition 9) coincide, so Theorem 8 is a special case of Theorem 10.

According to Lemma 6, every neuron z in the maximum match has a unique z-minimum match, namely the intersection of all the matches containing z. Therefore, for any z-minimal match (X', Y'), the z-minimum match (X_z, Y_z) satisfies X_z ⊆ X' and Y_z ⊆ Y', and according to Definition 9 we have (X_z, Y_z) = (X', Y'). ∎

Theorem 10.

Let (X*, Y*) be the maximum match in (X, Y). For every z ∈ X* ∪ Y*, every z-minimal match is a simple match, and every simple match is a z-minimal match for some z ∈ X* ∪ Y*.

Proof.

We start by showing the first half of the theorem. To prove by contradiction, assume that a z-minimal match (X', Y') can be written as the union of smaller matches (X_1, Y_1) and (X_2, Y_2), i.e., X' = X_1 ∪ X_2 and Y' = Y_1 ∪ Y_2. In this case, one of the two matches contains z, which contradicts the definition of z-minimal match.

Now we show the second half of the theorem. For each neuron z in a simple match (X', Y'), consider one of the smallest matches (X_z, Y_z) in (X', Y') containing z. Here, "smallest" means that |X_z| + |Y_z| is the smallest among all matches in (X', Y') containing z. Since (X', Y') is itself a match containing z, such a smallest match indeed exists. The fact that (X_z, Y_z) is smallest implies that it is a z-minimal match. Now, trivially we have ⋃_z X_z = X' and ⋃_z Y_z = Y', and since (X', Y') is a simple match, one of the (X_z, Y_z) has to be equal to (X', Y'), which proves the second half of the theorem. ∎

Remark 3.

One important thing to note is that "z ∈ X* ∪ Y*" in the second half of Theorem 10 cannot be replaced by "z ∈ X*". One can construct instances in which a match cannot be expressed as a union of z-minimal matches for z ∈ X, because no such z-minimal match contains a certain neuron in Y. However, in the case of Theorem 8, when v_X and v_Y are both linearly independent, we can perform the replacement. That is because in this case every exact match (X', Y') satisfies span(v_{X'}) = span(v_{Y'}), so for exact matches (X_1, Y_1) and (X_2, Y_2), X_1 ⊆ X_2 implies Y_1 ⊆ Y_2; covering X' with z-minimal matches for z ∈ X therefore also covers Y'.

Appendix B Omitted Proofs in Section 4

Theorem 11.

Algorithm 1 outputs the maximum match and runs in polynomial time.

Proof.

Every time we delete a neuron x (or y) from X' (or Y') at Line 7 (or Line 13), we make sure that the activation vector v_x (or v_y) cannot be linearly expressed by v_{Y'} (or v_{X'}) within error ε, so (X', Y') always contains the maximum match. On the other hand, when the algorithm terminates, we know that for every x ∈ X', v_x is linearly expressible by v_{Y'} within error ε, and for every y ∈ Y', v_y is linearly expressible by v_{X'} within error ε, so (X', Y') is a match by definition. Therefore, the output of Algorithm 1 is a match containing the maximum match, which has to be the maximum match itself.

Before entering each iteration of the algorithm, we make sure that at least one neuron was deleted from X' or Y' in the last iteration, so there are at most |X| + |Y| iterations. Therefore, the algorithm runs in polynomial time. ∎

Theorem 12.

Algorithm 2 outputs one -minimal match for the given neuron . If (exact match), the algorithm outputs the unique -minimum match provided and are both linearly independent. Moreover, the algorithm always runs in polynomial time.

Proof.

Clearly, Algorithm 2 runs in polynomial time. If there exists at least one z-minimal match, then z has to be in the maximum match, and thus the algorithm doesn't return "failure". It therefore remains to show that the match (X', Y') returned by the algorithm is indeed a z-minimal match.

Clearly, the first requirement of z-minimal match, z ∈ X' ∪ Y', is satisfied by the algorithm. Now we prove that the second requirement is also satisfied. Consider a match (X'', Y'') with X'' ⊆ X' and Y'' ⊆ Y' that satisfies z ∈ X'' ∪ Y''. We want to show that (X'', Y'') = (X', Y'). To prove by contradiction, suppose (X'', Y'') ≠ (X', Y'), so there is a neuron u ∈ (X' ∪ Y') \ (X'' ∪ Y''). Consider the maximum match (X_m, Y_m) computed at Line 10 in the iteration when u is picked by the algorithm at Line 5. Since u ∉ X'' ∪ Y'', we know X'' and Y'' are contained in the reduced sets of that iteration; in other words, (X'', Y'') is a match in the reduced sets. Moreover, since u remains in (X', Y'), we know the "if" condition at Line 11 was not satisfied in that iteration, i.e., z ∉ X_m ∪ Y_m. Therefore, (X_m ∪ X'', Y_m ∪ Y'') is a match in the reduced sets that is strictly larger than (X_m, Y_m) (note that z ∈ X'' ∪ Y'' but z ∉ X_m ∪ Y_m), a contradiction with (X_m, Y_m) being the maximum match in the reduced sets. ∎

Theorem 13.

Algorithm 3 outputs all the different z-minimal matches in time O(poly(n) · n^{m_z}). With Algorithm 3, we can find all the simple matches by exploring all z ∈ X* ∪ Y*, based on Theorem 10.

Proof.

The fact that we remove a neuron from every previously-found z-minimal match implies that every time we put a match into M, the match is different from the existing matches in M. Moreover, every time we put a match into M, the match is a z-minimal match, so |M| ≤ m_z during the whole algorithm. Therefore, at most O(n^{m_z}) candidate tuples are examined, and the running time of the algorithm is O(poly(n) · n^{m_z}). It remains to show that the set M returned by the algorithm contains all the z-minimal matches. To prove by contradiction, suppose the algorithm terminates while there exists a z-minimal match (X', Y') that is not in M. By the fact that each (X_i, Y_i) ∈ M is minimal, we know (X_i, Y_i) is not a sub-match of (X', Y') for any i. Therefore, for each i there exists s_i ∈ (X_i ∪ Y_i) \ (X' ∪ Y'). Consider the iteration when we pick the tuple (s_1, …, s_{|M|}) at Line 7. We know the "if" condition at Line 14 is satisfied because (X', Y') is a match containing z in the reduced sets, which then implies that a new z-minimal match is added to M, a contradiction with the assumption that the algorithm has terminated. ∎

Theorem 14.

Suppose there exist ε ≥ 0 and θ > 0 such that v_X and v_Y both satisfy θ-strong linear independence and (ε, θ)-stability. Then m_z = 1 for every neuron z in the maximum match. As a consequence, Algorithm 3 finds all the z-minimal matches in polynomial time, and we can find all the simple matches in polynomial time by exploring all z ∈ X* ∪ Y*, based on Theorem 10.

Theorem 14 is a direct corollary of the following lemma.

Lemma 16.

Suppose v_X and v_Y both satisfy θ-strong linear independence and (ε, θ)-stability. Let (X_1, Y_1) and (X_2, Y_2) be two matches in (X, Y). Then (X_1 ∩ X_2, Y_1 ∩ Y_2) is also a match.

Proof.

, let and . Therefore, , which implies that . Note that by -strong linear independence, we have the angle between and is at least . Therefore, , i.e., . Together with , we know