1 Introduction
It is becoming increasingly important to learn well-generalizing representations that are invariant to many common transformations of the data. These transformations can give rise to many 'degrees of freedom' even in a constrained task such as face recognition (e.g. pose, age variation, illumination etc.). In fact, explicitly factoring them out leads to improvements in recognition performance, as found in Leibo et al. (2014); Hinton (1987). To this end, the study of invariant features is important. Anselmi et al. (2013) showed that features that are explicitly invariant to intra-class transformations reduce the sample complexity of the recognition problem.

Prior Art: Invariant Kernels.
Kernel methods in machine learning have long been studied to considerable depth. Nonetheless, the study of invariant kernels and techniques to extract invariant features has received less attention. An invariant kernel allows the kernel product to remain invariant under transformations of the inputs. There has been some work on incorporating invariances into popular machinery such as the SVM in
Lauer & Bloch (2008). Most instances of incorporating invariances focused on local invariances through regularization and optimization, such as Schölkopf et al. (1996, 1998); Decoste & Schölkopf (2002); Zhang et al. (2013). Other techniques were jittering kernels (Schölkopf & Smola (2002); Decoste & Schölkopf (2002)) and tangent-distance kernels (Haasdonk & Keysers (2002)), both of which sacrificed the positive semi-definite property of their kernels and were computationally expensive. Haasdonk & Burkhardt (2007) were the first to use group integration to arrive at invariant kernels; however, their approach does not address two important problems that arise in practice (the group being observed through unlabelled samples, and partially observed groups). We will shortly state these problems more concretely and will show that the proposed invariant kernels do in fact solve both.

Prior Art: Invariance through dataset augmentation. Many approaches in the past have enforced invariance by generating transformed training samples in some form, such as Poggio & Vetter (1992); Schölkopf & Smola (2002); Schölkopf et al. (1998); Niyogi et al. (1998); Reisert (2008); Haasdonk & Burkhardt (2007). This assumes that one has knowledge of the transformation. The approach presented in this paper, however, under the unitarity assumption, can learn the transformations through unlabelled samples and does not need training dataset augmentation. Perhaps the most popular method for incorporating invariances in SVMs is the virtual support vector method (VSV) in Schölkopf et al. (1996)
, which used sequential runs of SVMs in order to find and augment the support vectors with transformed versions of themselves.
Loosli et al. (2007) proposed a similar algorithm to generate and prune examples. Though these methods have had some success, most of them still lack explicit theoretical guarantees of invariance. The proposed invariant kernel SVM formulation, on the other hand, is guaranteed to be invariant. Further, unlike the VSV and other approaches to incorporating invariance into the SVM, the proposed invariant kernel SVM solves the common and important practical problems that we will state shortly. To the best of our knowledge, it is the first formulation to do so.

Prior Art: Linear Invariant Features. Recently, Anselmi et al. (2013) proposed linear group-invariant features as an explanation for multiple characteristics of the visual cortex. They achieve invariance in a slightly more general way than group integration, utilizing measures of the distribution characterizing the orbit of a sample under the group action. We extend the method to the RKHS using unitary kernels and extend several properties regarding invariance and stability. We also show that the extension can solve both motivating problems (Problem 1 and Problem 2). This leads to a practical way of extracting nonlinear invariant features with theoretical guarantees.
Motivating Problems. We now state the two central problems that this paper tries to address through invariant kernels and features. A common practical problem one faces when utilizing previous methods involving transformed samples is the computational expense of generating and processing them (including virtual support vectors). Further, in many cases transformed labelled samples are unavailable. Two important problems that arise when practically applying invariant kernels and features are:
Problem 1: (Group observed through unlabelled samples) The transformed versions of the labelled training data are not available, i.e. one might only have access to transformed versions of unlabelled data outside of the training set (theoretically equivalent to having transformed versions of arbitrary vectors), e.g. only unlabelled transformed images are observed.
Problem 2: (Partially observed group) Not all members of the group (symmetric set) of transformations are observed, i.e. the group is only partially observed through its actions, e.g. not all transformations of an image are observed. In many practical cases, partial invariance is in fact necessary, e.g. when a transformation maps one class onto another.
Group Theory and Invariance.
Towards this goal, the study of incorporating invariance through group integration seems useful. Group theory is an elegant way to model symmetry. Classical invariant theory provides group integration techniques to enforce invariance. Group integration can also be used to model mean pooling (and max pooling albeit in a different framework as proposed in
Anselmi et al. (2013)), which is in implicit use in several areas of machine learning and computer vision. The transformations in this paper are modelled as unitary and collectively form a unitary group. Classes of learning problems, such as vision, often involve transformations belonging to the unitary group that one would like to be invariant towards (such as translation and rotation). The results can also be extended to discrete groups. In practice, however, Liao et al. (2013) found that invariance to much more general transformations, not captured by this model, can be achieved. We will see that given explicit access to the group, one can theoretically capitalize on properties such as guaranteed global invariance (as opposed to the local invariance of optimization approaches, where the classifier is invariant only to small transformations). However, controlled local invariance can also be achieved. Local invariance is important when an extreme transformation of one class overlaps with another. The unitary property of the group and the unitary restriction on kernels (Section 2.1) allow the development of theoretical motivation for existing techniques, an invariant kernel, and invariant kernel features theoretically addressing Problems 1 and 2.

Contributions. We list our main contributions below:

In contrast to many previous studies on invariant kernels, our focus is the study of positive semi-definite unitary-group invariant kernels and features guaranteeing invariance that can address both Problem 1 and Problem 2.

One of our central results to applying group integration in the RKHS builds on the observation that, under unitary restrictions on the kernel map, group action is preserved in the RKHS.

Using the proposed invariant kernel, we present a theoretically motivated alternate approach to designing a nonlinear invariant SVM that can handle both Problem 1 and Problem 2 with explicit invariance guarantees.

We propose kernel unitary-group invariant feature extraction techniques by extending the theory of linear group-invariant features presented in Anselmi et al. (2013). We show that the kernel extension addresses both Problem 1 and Problem 2 and preserves properties such as global (and local) invariance and stability.
Organization. The paper is broadly organized into two parts. Sections 2 and 3 present the proposed invariant kernels and the invariant kernel SVM, whereas Sections 4 and 5 present the proposed invariant features extracted using kernels.
Sections 2 and 3 (Unitary-group Invariant Kernels). We first present some important known elementary unitary-group integration properties and a central result for applying group integration in the RKHS in Section 2. We then present a theoretically motivated alternate approach to designing a nonlinear invariant SVM, together with a simple albeit important result for reducing computation. In Section 3, we continue on to develop an invariant kernel which does not require observing transformed versions of the input arguments at all.
Sections 4 and 5 (Unitary-group Invariant Kernel Features). In Section 4, we propose kernel unitary-group invariant feature extraction techniques by extending a linear invariant feature extraction method (Anselmi et al. (2013)) to the kernel domain. We show that the resultant feature, while addressing Problem 1, preserves important properties such as global invariance and stability. In Section 5, we show that a simple extension of the method allows it to solve both problems (Problem 1 and Problem 2). This leads to a practical way of extracting invariant nonlinear features with theoretical guarantees.
Section 6 presents some experiments illustrating our methods.
2 Globally Group Invariant Kernels: When the group is explicitly known
Premise: Consider a dataset of normalized samples x_i along with labels y_i. We now introduce into the dataset a number of unitary transformations belonging to the locally compact unitary group G (in general we require local compactness to ensure the existence of the Haar measure). With a slight abuse of notation, we denote by gx the action of group element g on x; the augmented normalized dataset thus contains the points gx_i for g in G. We assume for now that G is known and accessible completely. Let Φ be some mapping to a high-dimensional Hilbert space H. Once the points are mapped, the problem of learning a separator in that space can be assumed to be linear.
An invariant function is defined as follows.
Definition 2.1 (Invariant Function).
For any group G, we define a function f to be invariant if f(gx) = f(x) for all g ∈ G and all x.
One method of generating an invariant towards a group is through group integration. Group integration stems from classical invariant theory, and its foundational theorem was proved by Haar.
Theorem 2.1.
(Haar) On every locally compact group there exists at least one left invariant integral. Such an integral is unique except for a strictly positive factor of proportionality.
One can choose the factor of proportionality such that the group volume equals 1 (i.e. ∫_G dg = 1; in the case of a discrete finite group, each group element is weighted by 1/|G|). For compact groups, such an integral converges for any bounded function of the group. For discrete groups, the integral is replaced by a sum. Group integration can be shown to be a projection onto an invariant subspace. Such a subspace can be defined for a Hilbert space H by H_G = {v ∈ H : gv = v for all g ∈ G}. An invariant to any group can be generated through the following basic (previously known) property (Lemma 2.2) based on group integration.
Lemma 2.2.
(Invariance Property) Given a vector x, any group G, any fixed g' ∈ G, and a normalized Haar measure dg, we have g' ∫_G gx dg = ∫_G gx dg.
The Haar measure dg exists for every locally compact group and is unique up to a positive multiplicative constant (hence it can be normalized). A similar property holds for discrete groups. The Invariance Property results in global invariance to the group G. This property allows one to generate an invariant subspace in the original space.
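The Invariance Property can be sketched numerically. Below is a minimal illustration, assuming the group of cyclic shifts of R^n (permutation matrices are unitary) as an illustrative stand-in for G; averaging a vector over its orbit produces a vector fixed by every group element:

```python
import numpy as np

# Illustrative finite unitary group: cyclic shifts of R^n.
# Lemma 2.2 sketch: averaging any vector over the group yields a vector
# that every group element maps to itself.
n = 5
rng = np.random.default_rng(0)
x = rng.standard_normal(n)

orbit = [np.roll(x, s) for s in range(n)]   # {g x : g in G}
x_bar = np.mean(orbit, axis=0)              # normalized group integration

# Invariance: applying any shift to the average leaves it unchanged.
max_dev = max(np.linalg.norm(np.roll(x_bar, s) - x_bar) for s in range(n))
print(max_dev)
```

For cyclic shifts the average collapses to a constant vector, which makes the projection onto the invariant subspace visible directly.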
The following two lemmas (Lemma 2.3 and 2.4) showcase (novel) elementary properties of the group-integration operator A = ∫_G g dg for the unitary group G. These properties will prove useful in the analysis of unitary-group invariant kernels and features.
Lemma 2.3.
If A = ∫_G g dg for a unitary group G, then A* = A, i.e. the group-integration operator is self-adjoint.
Lemma 2.4.
(Unitary Projection) If A = ∫_G g dg for any group G, then A² = A, i.e. it is a projection operator. Further, if G is unitary, then A is an orthogonal projection.
The proofs of these lemmas utilize elementary properties of groups, the invariance of the Haar measure, and the unitarity of the group elements. (All proofs are presented in the supplementary material.)
Sample Complexity and Generalization. On applying the group-integration operator to the dataset, all points in the orbit of any sample map to the same point in the invariant subspace. Theoretically, this drastically reduces sample complexity while preserving linear feasibility (separability). It is trivial to observe that a perfect linear separator learnt on the projected dataset would also be a perfect separator for the full augmented dataset, thus in theory achieving perfect generalization. We prove a similar result for the RKHS case in Section 2.2. This property is theoretically powerful since the cardinality of the group can be large. A classifier can avoid having to observe transformed versions of any sample and yet generalize.
2.1 Group Actions Reciprocate in a Reproducing Kernel Hilbert Space
Group integration provides exact invariance in the input domain. However, it requires the group structure to be preserved. In the context of kernels, it is imperative that the group relation between the samples be preserved in the kernel Hilbert space corresponding to some kernel k. Under the restriction of unitary kernels, this is possible. We present an elementary albeit important result that allows this, after defining unitary kernels in the following sense.
Definition 2.2 (Unitary Kernel).
We define a kernel k to be a unitary kernel if, for a unitary group G, the induced mapping Φ satisfies ⟨Φ(gx), Φ(gy)⟩ = ⟨Φ(x), Φ(y)⟩ for all g ∈ G, i.e. k(gx, gy) = k(x, y).
The unitary condition is fairly general; a common example of a unitary kernel is the RBF kernel. We now define an operator ĝ on the RKHS by ĝΦ(x) = Φ(gx) for any x, where g is unitary; ĝ is thus a mapping within the RKHS. Under a unitary kernel, we then have the following result.
Theorem 2.5.
(Covariance in the RKHS) If k is a unitary kernel in the sense of Definition 2.2, then ĝ is unitary, and the set Ĝ = {ĝ : g ∈ G} is a unitary group in the RKHS.
Theorem 2.5 shows that the unitary-group structure is preserved in the RKHS. This provides new theoretically motivated approaches to achieving invariance in the RKHS. Specifically, a theory of invariance which was proposed to utilize unsupervised linear filters can now also utilize nonlinear supervised 'templates', as we discuss in Section 4.
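Definition 2.2 can be checked numerically for the RBF kernel, which depends only on ||x − y|| and is therefore unitary. A small sketch, with a random orthogonal matrix (a real unitary transform, drawn here via QR decomposition) standing in for a group element:

```python
import numpy as np

# Sketch of Definition 2.2: the RBF kernel depends only on ||x - y||,
# which a unitary (here orthogonal) transform preserves, so k(gx, gy) = k(x, y).
rng = np.random.default_rng(1)
n = 4
x, y = rng.standard_normal(n), rng.standard_normal(n)

# Random orthogonal matrix via QR decomposition (a real unitary transform).
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))

rbf = lambda a, b, gamma=0.5: np.exp(-gamma * np.sum((a - b) ** 2))
diff = abs(rbf(Q @ x, Q @ y) - rbf(x, y))
print(diff)
```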
2.2 Invariant Nonlinear SVM: An Alternate Approach Through Group Integration
We present the group integration approach to kernel SVMs before comparing it to other methods. The decision function of an SVM can be written in the general form f(x) = ⟨w, Φ(x)⟩ + b for some bias b (we agglomerate all parameters of the separator in w), where Φ is the feature map. Reviewing the SVM, a maximum-margin separator is found by minimizing a loss function such as the hinge loss along with a regularizer. In order to invoke invariance, we can now utilize group integration in the kernel space using Theorem 2.5. All points in the orbit of a sample x get mapped to the set {ĝΦ(x) : g ∈ G}. Group integration then results in an invariant subspace within the RKHS through Lemma 2.2. Introducing Lagrange multipliers α_i, the dual formulation (utilizing Lemma 2.3 and Lemma 2.4) then becomes

max_α Σ_i α_i − (1/2) Σ_{i,j} α_i α_j y_i y_j ∫_G ∫_G k(gx_i, g'x_j) dg dg'   (1)

under the constraints 0 ≤ α_i ≤ C and Σ_i α_i y_i = 0. The separator is then given by w = Σ_i α_i y_i ∫_G ĝΦ(x_i) dg, thereby existing in the G-invariant (or equivalently Ĝ-invariant) subspace within the RKHS (since g ↦ ĝ is a bijection). Effectively, the SVM observes samples from the group-averaged dataset. If G is known, then this provides exact global invariance during testing. Further, w is a maximum-margin separator of the augmented dataset. This can be shown by the following result.
Theorem 2.6.
(Generalization) For a unitary group G and unitary kernel k, if w is a perfect separator for the group-averaged dataset, then w is also a perfect separator for the full augmented dataset, with the same margin. Further, a max-margin separator of the group-averaged dataset is also a max-margin separator of the augmented dataset.
The invariant nonlinear SVM in objective 1 observes samples in their group-averaged form and obtains a max-margin separator w. Theorem 2.6 shows that the margins over the two datasets are deeply related and implies that w is a max-margin separator for both. Theoretically, the invariant nonlinear SVM is able to generalize to the augmented dataset on observing only the original samples, utilizing prior information in the form of G, for all unitary kernels. This is true in practice for linear kernels. For nonlinear kernels in practice, however, the invariant SVM still needs to observe and integrate over transformed training inputs. We also present the following result for unitary-group invariant kernels which helps in saving computation.
Lemma 2.7.
(Invariant Projection) If k is a unitary kernel, then for any unitary group G and any fixed x, y we have ∫_G ∫_G k(gx, g'y) dg dg' = ∫_G k(x, gy) dg.
We provide the proof in the supplementary material. Thus, the double-integral kernel in the invariant SVM formulation can be replaced by the single-integral form ∫_G k(x, gy) dg, thereby reducing the number of transformed training samples required to be observed by an order of magnitude. It also allows the kernel to be invariant to the orbit of y while observing just a single arbitrary point on the orbit. Nonetheless, as the formulation stands, it still requires observing the entire orbit of at least one of the transformed training samples. However, we can get around this fundamental problem, as we show in the next section (Section 3).
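Lemma 2.7 can be illustrated numerically. The sketch below assumes the cyclic-shift group and an RBF kernel (both illustrative choices): the double group average over both arguments coincides with the single average over one argument, cutting kernel evaluations by a factor of |G|:

```python
import numpy as np

# Illustration of Lemma 2.7 under assumed cyclic-shift group and RBF kernel:
# averaging over both kernel arguments equals averaging over one argument only.
rng = np.random.default_rng(2)
n = 6
x, y = rng.standard_normal(n), rng.standard_normal(n)
rbf = lambda a, b, gamma=0.3: np.exp(-gamma * np.sum((a - b) ** 2))

G = range(n)  # cyclic shifts as the group elements
double = np.mean([rbf(np.roll(x, g), np.roll(y, h)) for g in G for h in G])
single = np.mean([rbf(x, np.roll(y, h)) for h in G])
print(abs(double - single))
```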
Note that for a general kernel, the invariant subspace cannot be explicitly computed; it is only implicitly projected upon through the kernel. It is important to note, however, that during testing the SVM formulation will be invariant to transformations of the test sample regardless of a linear or nonlinear kernel. Also, interestingly, w might be a different decision boundary than the one obtained by training the vanilla SVM on the augmented dataset.
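The test-time invariance guarantee can also be sketched: with the group-averaged kernel, the SVM decision value is identical on every transformed version of a test point, for any dual coefficients. The support vectors and coefficients below are random stand-ins, not a trained model, and the cyclic-shift group is again an illustrative assumption:

```python
import numpy as np

# With the averaged kernel k_G(x, z) = mean_g k(x, g z), the decision value
# f(z) = sum_i alpha_i k_G(x_i, z) is identical for every transformed z,
# whatever the coefficients are.
rng = np.random.default_rng(7)
n, n_sv = 6, 4
rbf = lambda a, b, gamma=0.5: np.exp(-gamma * np.sum((a - b) ** 2))
k_G = lambda a, b: np.mean([rbf(a, np.roll(b, g)) for g in range(n)])

X_sv = rng.standard_normal((n_sv, n))   # stand-in "support vectors"
alpha = rng.standard_normal(n_sv)       # stand-in dual coefficients

decision = lambda z: sum(a * k_G(xi, z) for a, xi in zip(alpha, X_sv))

z = rng.standard_normal(n)
vals = [decision(np.roll(z, s)) for s in range(n)]
spread = max(vals) - min(vals)
print(spread)
```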
Positive Semi-Definiteness. The invariant kernel map is now the group average of a positive semi-definite kernel. This preserves the positive semi-definite property of the kernel while guaranteeing global invariance to unitary transformations, unlike jittering kernels (Schölkopf & Smola (2002); Decoste & Schölkopf (2002)) and tangent-distance kernels (Haasdonk & Keysers (2002)). If we wish to include invariance to scaling, however, then we would lose positive semi-definiteness (scaling is also not a unitary transform). Nonetheless, Walder & Chapelle (2007) show that conditionally positive definite kernels still exist for transformations including scaling, although we focus on unitary transformations in this paper.
Partial Invariance. The invariant kernel SVM formulation (objective 1) also supports partial invariance when G is not fully observed (addressing Problem 2), a notion extended to invariant kernel methods in Section 5. Partial invariance gives one control over the degree of invariance over transformation groups, allowing classes that are transformations of one another (such as MNIST digit classes that are rotations of one another) to be discriminated.
Relating the Virtual Support Vector Method (VSV): Consider the popular virtual support vector method (VSV) (Schölkopf et al. (1996)). Here the support vectors are augmented with a small (finite) number of transformed versions of themselves. This assumes that the transformations are explicitly known, thereby failing to address Problem 1. The augmented training set is used to train another SVM with improved invariance. We show in the following section that the invariant SVM formulation (objective 1), on the other hand, does address Problem 1. The group integration framework provides a theoretical motivation for the VSV, since at minimum it suggests having transformed versions of the support vectors. The VSV, however, can assign different dual weights to different transformed versions of a support vector, whereas group integration would force them to be the same because the kernel is invariant. For linear kernels we have more benefits: group integration also suggests building an explicit invariant subspace before projecting the training set onto it. This approach does not increase computation time (for linear kernels) while allowing the SVM to generalize to transformed inputs.
3 Globally Group Invariant Kernels: When the action of the group is observed only on unlabelled data
The previous section introduced a group integration approach to the invariant nonlinear SVM. Although the formulation addresses Problem 2, it does not address Problem 1, i.e. the kernel still requires observing transformed versions of the labelled input samples (or at least one of the labelled samples if we utilize Lemma 2.7). We now present an approach that does not require the observation of any transformed labelled training sample whatsoever.
Assume that for every sample x, there exists a coefficient vector a s.t. Φ(x) = Σ_i a_i Φ(t_i), where T = [t_1, …, t_m] is an arbitrary unlabelled set (in the form of a column-major matrix) of arbitrary templates. (Note that there exist more informed ways of choosing T; however, to keep the theory general we work with arbitrary template sets.) We assume that we have access to transformed versions of each template, i.e. we observe G only through its action on T. We then have the following result.
Theorem 3.1.
For a unitary group G, a template set T = [t_1, …, t_m] and a unitary kernel k, if Φ(x) = Σ_i a_i Φ(t_i) and Φ(y) = Σ_j b_j Φ(t_j), then the invariant kernel can be written as

∫_G k(gx, y) dg = Σ_{i,j} a_i b_j ∫_G k(g t_i, t_j) dg.
Theorem 3.1 assumes that the points lie in the span of the templates in the RKHS. It allows the kernel to be invariant, i.e. the kernel value is unchanged under group transformations of either argument. It achieves this while only observing transformed versions of the unlabelled template set T. This is very useful, since the use of Theorem 3.1 solves Problem 1 while guaranteeing invariance. Further, in practice, one does not need explicit knowledge of the transformations; in many cases, one can simply store naturally transforming samples (e.g. transforming images). A constructed kernel can be applied to any dataset directly, provided the same group acts. The coefficients required for Theorem 3.1 for any sample can be approximated by projecting the sample onto the space spanned by the templates in the RKHS, i.e. a = K_T^{-1} k_T(x), where K_T is the kernel matrix of the templates and k_T(x) is the vector of kernel products between the templates and x. This assumes that the kernel matrix is invertible, a condition that can be satisfied by construction.
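A minimal sketch of Theorem 3.1, using a linear kernel so that the span assumption x = Ta holds exactly (with a nonlinear kernel the coefficients would come from the RKHS projection described above). The cyclic-shift group and random templates are illustrative assumptions; note that the group only ever acts on the templates, never on x or y:

```python
import numpy as np

# Theorem 3.1 sketch with a linear kernel: the invariant kernel between x and y
# is computed from transformed *templates* only (Problem 1 setting).
rng = np.random.default_rng(3)
n, m = 5, 8
T = rng.standard_normal((n, m))                 # m unlabelled templates (columns)
x, y = rng.standard_normal(n), rng.standard_normal(n)

# Coefficients expressing x and y in the span of the templates.
a, *_ = np.linalg.lstsq(T, x, rcond=None)
b, *_ = np.linalg.lstsq(T, y, rcond=None)

G = range(n)                                    # cyclic-shift group
gT = [np.roll(T, g, axis=0) for g in G]         # observed transformed templates

# Template-only invariant kernel: sum_ij a_i b_j * mean_g <g t_i, t_j>
K_bar = np.mean([Tg.T @ T for Tg in gT], axis=0)
k_inv_templates = a @ K_bar @ b

# Ground truth: average the group action on x directly.
k_inv_direct = np.mean([np.roll(x, g) @ y for g in G])
print(abs(k_inv_templates - k_inv_direct))
```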
Invariant Nonlinear SVM through transformed unlabelled data (comparison with the VSV): The invariant kernel SVM in objective 1, using the invariant kernel of Theorem 3.1, achieves invariance by learning the transformation only through observed unlabelled data. Further, it does not need multiple runs, as opposed to the VSV, which requires the generation of transformed labelled examples. Theorem 3.1 allows an invariant kernel to be used directly, without the computational expense of finding potential support vectors, generating transformations of them, and then processing the added samples. Further, invariance helps to reduce sample complexity and improve performance for a given number of samples, a phenomenon we observe in our experiments.
4 Globally Group Invariant Kernel Features from a Single Sample: When the action of the group is observed only on unlabelled data
Up until now we have studied the properties of the proposed unitary-group invariant kernels. We now shift our attention to group-invariant features. Invariant kernels are a form of invariant similarity measure and can be used to construct invariant feature maps. Anselmi et al. (2013) proposed linear invariant features that enjoy properties such as global invariance and stability. We extend their method to the RKHS using unitary kernels and extend the invariance and stability properties. We now briefly present their theory of invariance.
4.1 Theory of Linear Invariant Features
Under G, the orbit of any sample x is defined by O_x = {gx : g ∈ G}. As a straightforward albeit elegant observation, the orbit itself is an invariant under G, since O_gx = O_x. Measures of such an orbit also provide invariance, such as the high-dimensional distribution induced by the group's action on x. In fact, Anselmi et al. (2013) show this distribution to be both invariant and unique, i.e. two samples induce the same distribution exactly when they belong to the same orbit. Thus, measures of the distribution, through a finite number of one-dimensional projections, can be used as a similarity measure between two orbits. (This follows from the Cramer-Wold theorem along with concentration of measure.) Further, the measures are invariant to the action of the unitary group G. For a unitary group G, normalized dot-products and an arbitrary template t, an empirical estimate of the one-dimensional distribution of the projection onto the template can be expressed as μ_n(x) = (1/|G|) Σ_{g∈G} η_n(⟨x, gt⟩), where the nonlinearity η_n can either estimate the n-th bin of the CDF or the n-th moment, the set of which together defines the distribution. In practice, Liao et al. (2013) found that a few or even one of these moments is sufficiently invariant. The final signature or feature vector concatenates these measures over all templates.

4.2 Group Invariant Feature Extraction in Kernel Space from a Single Sample
We now present a kernel extension of the approach to invariance presented above. We assume access to the set {gt : g ∈ G, t ∈ T}, i.e. the orbits of arbitrary unlabelled vectors or templates. For simplicity, we also assume a compact unitary group G with finite cardinality |G|. Then for every g ∈ G, we observe the transformed template gt. Similarly, for unitary kernels (Definition 2.2), the templates in the RKHS behave as transformed versions of each other owing to Theorem 2.5: Φ(gt) = ĝΦ(t). Thus, {ĝΦ(t) : g ∈ G} forms a set of transformed elements for each template t under the action of Ĝ. Invariance can then be achieved using a form of Equation 7 in Anselmi et al. (2013):
μ_n(x) = (1/|G|) Σ_{g∈G} η_n(k(x, gt))   (2)
Equation 2 can extract nonlinear kernel features for any single sample x that are invariant to the group without ever needing to observe gx. (Note that even though the features extracted are nonlinear, the invariance generated is purely towards unitary transformations.) This also solves Problem 1 listed in the introduction. Recall that η_n can either estimate the CDF or the set of moments. In the case of moments, the first moment leads to mean pooling and the infinite moment results in max pooling. We now show that the kernel feature continues to satisfy useful properties such as stability (in the Lipschitz sense), i.e. a form of a stability result in Anselmi et al. (2013) can be proved using a similar analysis.
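Before turning to stability, a small sketch of Equation 2, assuming a cyclic-shift group, random templates, and an RBF kernel with mean pooling (the first moment); the sample itself is never transformed, yet the signature is unchanged along its orbit:

```python
import numpy as np

# Equation 2 sketch: pool a unitary kernel (RBF) evaluated against observed
# transformed templates. The sample x is never transformed (Problem 1 setting).
rng = np.random.default_rng(5)
n, n_templates = 6, 3
templates = rng.standard_normal((n_templates, n))
x = rng.standard_normal(n)
rbf = lambda a, b, gamma=0.4: np.exp(-gamma * np.sum((a - b) ** 2))

def kernel_signature(v):
    # mean pooling (first moment) of k(v, g t) over the stored orbit of each t
    return np.array([np.mean([rbf(v, np.roll(t, g)) for g in range(n)])
                     for t in templates])

# Invariance: every shift of x yields the same signature.
dev = max(np.linalg.norm(kernel_signature(np.roll(x, s)) - kernel_signature(x))
          for s in range(n))
print(dev)
```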
Theorem 4.1.
(Stability) If μ is invariant to a unitary group G, the nonlinearities η_n are Lipschitz continuous with constant L_η ≤ 1, and we have a normalized unitary kernel with k(x, x) = 1, then

‖μ(x) − μ(y)‖ ≤ d_k(x, y)

for all x, y. Here d_k is the kernel distance in the Hausdorff sense in the RKHS, i.e. d_k(x, y) = min_{g∈G} ‖Φ(x) − ĝΦ(y)‖.
A good representation should ideally be stable, with the distance between two points in the feature space bounded. Unstable representations can skew the feature space and allow for degenerate results. Theorem 4.1 shows that under Lipschitz continuity of the estimation functions η_n, the kernel feature distance is bounded by the kernel distance.

Discriminative templates: Equation 2 can be instantiated to extract discriminative kernel features by choosing discriminative instead of arbitrary templates. For each template and each group element g, one can train a binary one-vs-all classifier with the transformed template labelled as positive and the rest labelled as negative. Recall that each such separator can be expressed as a linear combination of the mapped samples (using Theorem 2.5). Thus, the separators form a set of transformed templates under the action of Ĝ, using which partial invariance can then be achieved through Equation 2. (Since this is agnostic to the selection of the classifier, any classifier which can be expressed as a linear combination of the samples, such as the perceptron, SVM, or correlation filters, can be used to supply discriminative templates.)

5 Towards Partially Group Invariant Kernels: When the group is partially observed through transformed samples
We extend the notion of partial invariance to the kernel features extracted as in Equation 2, following the analysis of Anselmi et al. (2013). Partial invariance arises from partially observing the group G, i.e. observing only a finite subset G_0 ⊂ G (which may not be a subgroup). In practice, this is the most likely case. However, partial invariance can be obtained over the observed subset through a local kernel feature, which can also be generalized to locally compact groups. A partially invariant kernel feature replaces the average over G in Equation 2 with an average over the observed subset G_0.
Uniqueness: The analysis for uniqueness in Anselmi et al. (2013) can be applied to the kernel feature with no significant changes, since the group structure is preserved in the RKHS through Theorem 2.5. In summary, any two partial orbits with a common point are identical.
Invariance: Theorem 6 from Anselmi et al. (2013) can be applied in the RKHS with some modification.
Theorem 5.1.
(Partial Invariance) Let the η_n be a set of bijective and positive functions and let G be a locally compact group, partially observed through a subset G_0 ⊂ G. Then, provided the support of the pooled kernel responses under G_0 is contained within the observed transformations (where supp(·) denotes the support), the partially invariant feature satisfies μ(gx) = μ(x) for the corresponding range of transformations g.
Stability: is stable (in the Lipschitz sense) following the analysis of Theorem 4.1. In particular, we have the following result.
Theorem 5.2.
(Stability of Partially Invariant Feature) If μ is partially invariant to the group G and the nonlinearities η_n are Lipschitz continuous with constant L_η ≤ 1, and we have a normalized kernel with k(x, x) = 1, then for unitary G the partially invariant feature distance is bounded by the kernel distance, now taken in the Hausdorff sense over the observed subset G_0 in the RKHS, i.e. d_k(x, y) = min_{g∈G_0} ‖Φ(x) − ĝΦ(y)‖.
Thus the feature can achieve partial invariance given a limited number of transformations of the unlabelled data. Further, the results developed for kernel methods in this section encourage their use in practice, since the feature now solves both motivating problems mentioned in Section 1. Note that the notion of, and results on, partial invariance can be easily applied to the invariant kernels proposed in Sections 2 and 3, thereby making them practical tools with theoretical guarantees.
Table 1: Test accuracy (%) of a linear SVM on each feature type. The first Raw column is evaluated on untransformed test data; all remaining columns are evaluated on transformed test data.

Dataset  | Raw (untransf.) | Raw   | Linear inv. | RBF inv. | Poly inv.
banana   | 55.2            | 55.2  | 60.56       | 61.70    | 60.37
breast   | 71.34           | 64.77 | 71.56       | 70.42    | 71.58
german   | 76.03           | 62.48 | 69.63       | 69.78    | 69.63
diabetis | 75.67           | 50.35 | 65.84       | 66.20    | 65.84
image    | 83.81           | 57.05 | 57.62       | 60.27    | 57.49
splice   | 84.54           | 55.03 | 55.07       | 79.83    | 55.07
thyroid  | 91.53           | 52.67 | 66.76       | 64.63    | 68.46
ringnorm | 77.26           | 43.68 | 57.67       | 56.48    | 56.85
twonorm  | 97.59           | 31.48 | 69.21       | 66.06    | 64.69
waveform | 89.87           | 63.77 | 65.50       | 64.64    | 66.89
6 Experimental Validation
Goal: The goal of this section is threefold: to see (1) whether partially invariant kernel features and (2) the invariant kernel SVM, i.e. objective 1 coupled with Theorem 3.1, are able in practice to address Problem 1 and Problem 2, and (3) whether kernel invariant features offer any advantage over linear invariant features. We refrain from using discriminative kernel features since our theoretical results do not assume any structure for the templates.
Setup and Method: We use 10 normalized datasets from the UCI ML repository for this task. We form a random 10-fold cross-validation partition (training/testing) for each dataset. In order to enforce Problem 1, we introduce a number of transformations belonging to a randomly chosen set of unitary transformations into the test data, thereby multiplying the test data size by the number of observed transformations (uniformly set to a reasonably modest value of 10 in order to keep the computational load of multiplying the dataset manageable). However, we do not augment the training data. We instead generate random vectors or templates and augment them using the same unitary transformations as the test data (the number of templates is fixed across all experiments). This enforces Problem 1. Problem 2 is inherently enforced to a large degree, since it is practically very difficult to generate an entire group: the transformations we introduce form only a subset of the full unitary group.
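The paper does not specify how the random unitary transformations are drawn; one plausible construction (an assumption on our part) samples orthogonal matrices via QR decomposition of a Gaussian matrix, which preserves the norms of the normalized samples:

```python
import numpy as np

# Hypothetical sampler for the experiments' random unitary transformations:
# QR decomposition of a Gaussian matrix yields an orthogonal (real unitary) Q.
rng = np.random.default_rng(6)
d, n_transforms = 8, 10

transforms = []
for _ in range(n_transforms):
    Q, R = np.linalg.qr(rng.standard_normal((d, d)))
    Q = Q * np.sign(np.diag(R))   # fix column signs for a uniform (Haar) draw
    transforms.append(Q)

# Each Q preserves norms, so transformed samples stay normalized.
x = rng.standard_normal(d)
x /= np.linalg.norm(x)
norms = [np.linalg.norm(Q @ x) for Q in transforms]
print(norms)
```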
Table 2: Test accuracy (%) of the standard kernel SVM (S.K.) and the invariant kernel SVM (I.K.). The first column is the standard kernel SVM evaluated on untransformed test data; the remaining columns are evaluated on transformed test data.

Dataset  | S.K. (untransf.) | S.K.  | I.K.
banana   | 72.34            | 47.16 | 50.67
breast   | 66.15            | 62.31 | 67.27
german   | 70.60            | 70.00 | 70.07
diabetis | 73.68            | 44.67 | 65.39
image    | 95.96            | 43.08 | 56.34
splice   | 58.93            | 55.08 | 55.08
thyroid  | 90.48            | 64.81 | 64.57
ringnorm | 69.16            | 43.83 | 50.46
twonorm  | 97.11            | 32.66 | 49.96
waveform | 72.86            | 67.06 | 67.06
For our first experiment, we compute the invariant kernel features using the randomly generated transformed templates with the RBF kernel and the polynomial kernel. We set the nonlinearity to compute the infinite moment, equivalent to max pooling. As an evaluation, to estimate the separability of the data, we train a linear SVM on the unaugmented (not transformed) training data using (1) raw features (Raw baseline), (2) linear invariant features, (3) RBF kernel invariant features, and (4) polynomial kernel invariant features. We then test on the augmented (transformed) corresponding fold of the test data after extracting the corresponding feature. We also report the test accuracy of raw features on untransformed test data as an illustration of the classification difficulty introduced by the added transformations. The results are summarized in Table 1. For our second experiment, we use the same datasets and generate a random 10-fold partition. Here we always train on the raw untransformed fold and test on the transformed test data. We train a standard RBF kernel SVM and an invariant SVM using the same kernel as described in Theorem 3.1. We also test the standard kernel SVM on the untransformed data as an illustration of the classification difficulty introduced by the transformations. The results are summarized in Table 2.
Results: Our first observation is that on almost all of the datasets, even the modest set of added transformations significantly impaired the SVM’s performance (Table 1 and Table 2). This confirms that much of the difficulty in the learning problem arises from the presence of inherent transformations relating different orbits of the data. Secondly, in both experiments, explicitly generating invariance, through invariant features (Table 1) and through the invariant kernel (Table 2), improves performance, suggesting that in both cases the sample complexity was lowered. We find that the invariant kernel features and the invariant kernel (Theorem 3.1) address Problem 1 and Problem 2 in practice as well. Kernel features, in general, modestly outperform linear features on most of these datasets: even though the features are nonlinear, the transformations they are invariant to are linear.
7 Conclusion
One of the main handicaps in applying invariant kernel methods has been the computational expense of generating and processing additional transformed forms of the data. Further, in many cases it is difficult to generate such samples because the transformation is unknown. However, it is often easier to obtain transformed unlabelled samples (such as video sequences in vision). The invariant kernels described in this paper can be used to address these issues while theoretically guaranteeing invariance.
References
 Anselmi et al. (2013) Anselmi, Fabio, Leibo, Joel Z., Rosasco, Lorenzo, Mutch, Jim, Tacchetti, Andrea, and Poggio, Tomaso. Unsupervised learning of invariant representations in hierarchical architectures. CoRR, abs/1311.4158, 2013. URL http://arxiv.org/abs/1311.4158.

 Decoste & Schölkopf (2002) Decoste, Dennis and Schölkopf, Bernhard. Training invariant support vector machines. Mach. Learn., 46(1-3):161–190, March 2002.
 Haasdonk & Keysers (2002) Haasdonk, B. and Keysers, D. Tangent distance kernels for support vector machines. In Pattern Recognition, 2002. Proceedings. 16th International Conference on, volume 2, pp. 864–868, 2002.
 Haasdonk & Burkhardt (2007) Haasdonk, Bernard and Burkhardt, Hans. Invariant kernel functions for pattern analysis and machine learning. In Machine Learning, pp. 35–61, 2007.
 Hinton (1987) Hinton, Geoffrey E. Learning translation invariant recognition in a massively parallel network. In PARLE Parallel Architectures and Languages Europe, pp. 1–13. Springer, 1987.
 Lauer & Bloch (2008) Lauer, Fabien and Bloch, Gérard. Incorporating prior knowledge in support vector machines for classification: A review. Neurocomputing, 71(7):1578–1594, 2008.
 Leibo et al. (2014) Leibo, Joel Z, Liao, Qianli, and Poggio, Tomaso. Subtasks of unconstrained face recognition. In International Joint Conference on Computer Vision, Imaging and Computer Graphics, VISIGRAPP, 2014.
 Liao et al. (2013) Liao, Q., Leibo, J. Z., and Poggio, T. Learning invariant representations and applications to face verification. Advances in Neural Information Processing Systems (NIPS), 2013.
 Loosli et al. (2007) Loosli, Gaëlle, Canu, Stéphane, and Bottou, Léon. Training Invariant Support Vector Machines using Selective Sampling. In Large Scale Kernel Machines, pp. 301–320. MIT Press, Cambridge, MA., 2007.
 Niyogi et al. (1998) Niyogi, P., Girosi, F., and Poggio, T. Incorporating prior information in machine learning by creating virtual examples. In Proceedings of the IEEE, pp. 2196–2209, 1998.
 Poggio & Vetter (1992) Poggio, T. and Vetter, T. Recognition and structure from one 2d model view: Observations on prototypes, object classes and symmetries. Laboratory, Massachusetts Institute of Technology, 1992.
 Reisert (2008) Reisert, Marco. Group integration techniques in pattern analysis – a kernel view. PhD Thesis, 2008.
 Schölkopf et al. (1998) Schölkopf, B., Simard, P., Smola, A., and Vapnik, V. Prior knowledge in support vector kernels. Advances in Neural Information Processing Systems (NIPS), 1998.
 Schölkopf et al. (1996) Schölkopf, Bernhard, Burges, Chris, and Vapnik, Vladimir. Incorporating invariances in support vector learning machines. pp. 47–52. Springer, 1996.
 Schölkopf & Smola (2002) Schölkopf, Bernhard and Smola, Alexander J. Learning with kernels: Support vector machines, regularization, optimization, and beyond. MIT press, 2002.
 Walder & Chapelle (2007) Walder, Christian and Chapelle, Olivier. Learning with transformation invariant kernels. In Advances in Neural Information Processing Systems, pp. 1561–1568, 2007.
 Zhang et al. (2013) Zhang, Xinhua, Lee, Wee Sun, and Teh, Yee Whye. Learning with invariance via linear functionals on reproducing kernel hilbert space. In Advances in Neural Information Processing Systems, pp. 2031–2039, 2013.
8 Supplementary Material
8.1 Proof of Lemma 2.2
Proof.
We have,
Since the normalized Haar measure is invariant, i.e. . Intuitively, simply rearranges the group integral owing to elementary group properties. ∎
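The rearrangement argument is the standard group-integration identity; written out in generic form (the lemma's exact statement is not reproduced here, so the symbols below are assumptions): for the normalized Haar measure $dg$ on a compact group $G$, any integrable $f$, and any fixed $g' \in G$,

```latex
\int_G f(g g' x)\, dg \;=\; \int_G f(h x)\, dh ,
```

by the substitution $h = g g'$ and the invariance of the Haar measure; the integral over the orbit is therefore unchanged.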
8.2 Proof of Lemma 2.3
Proof.
We have,
Using the fact and . ∎
8.3 Proof of Lemma 2.4
Proof.
We have,
Since the Haar measure is normalized (), and invariant. Also for any , we have ∎
8.4 Proof of Lemma 2.7
Proof.
We have
In the second equality, we fix a group element, since the inner product is invariant under the group action. This follows from Lemma 2.2 and the fact that the transformation is unitary. Further, the final equality uses the fact that the Haar measure is normalized. ∎
8.5 Proof of Theorem 2.5
Proof.
We have , since the kernel is unitary. Here we define as the action of on . Thus, the mapping preserves the dot product while reproducing the action of the group in feature space. This is one of the requirements of a unitary operator; however, the operator also needs to be linear. We note that linearity can be derived from the linearity of the inner product and its preservation under the mapping. Specifically, for an arbitrary vector and a scalar , we have
Similarly for vectors , we have
We now prove that the set is a group. We start with proving the closure property. We have for any fixed
Since , therefore by definition. Also, , and thus closure is established. The associativity, identity, and inverse properties can be proved similarly. The set is therefore a unitary group in . ∎
8.6 Proof of Theorem 2.6
Proof.
Since is a perfect separator for , , s.t. .
Using Lemma 2.4 and Theorem 2.5, we have for any fixed ,
Hence,
Thus, is a perfect separator for with a margin of at least . It also implies that a max-margin separator of is also a max-margin separator of . ∎
8.7 Proof of Theorem 3.1
Proof.
For any fixed we find, using Lemma 2.4. Choosing to be identity and substituting the expansion of , and we have the desired result. ∎
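The resulting kernel can be sketched computationally; a sketch in the spirit of group-integration kernels (Haasdonk & Burkhardt, 2007), not the exact expansion of Theorem 3.1. For a base kernel that is itself unitary (e.g. the RBF kernel), averaging over one argument already yields invariance in both arguments; the names below and the finite rotation group used for the check are assumptions for illustration:

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def invariant_kernel(x, y, transforms, kernel=rbf):
    # average the base kernel over the group acting on one argument;
    # for a unitary base kernel this equals the double group average
    return float(np.mean([kernel(Q @ x, y) for Q in transforms]))

# invariance check on a genuine finite group: 2-D rotations by 90 degrees
def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

group = [rot(k * np.pi / 2) for k in range(4)]
rng = np.random.default_rng(2)
x, y = rng.standard_normal(2), rng.standard_normal(2)
k_xy  = invariant_kernel(x, y, group)
k_gxy = invariant_kernel(group[3] @ x, y, group)  # transform first argument
k_xgy = invariant_kernel(x, group[2] @ y, group)  # transform second argument
```

Because transforming either argument only permutes the averaged orbit of kernel values, `k_gxy` and `k_xgy` agree with `k_xy` up to floating point.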
8.8 Proof of Theorem 4.1
Proof.
Since are Lipschitz continuous , for each component of the signature , we have
where we utilize Cauchy-Schwarz, Theorem 2.5, and the fact that for some , we have . Since is invariant to the action of (and consequently ), . If , then the map is a contraction and we obtain the desired result by summing over all components and dividing by . ∎
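One standard way such a componentwise bound assembles into a contraction (a generic sketch under the stated Lipschitz assumption, with symbols chosen for illustration rather than taken from the paper): if each signature component $\Phi_t$ satisfies $|\Phi_t(x) - \Phi_t(y)| \le L\,\|x - y\|$ with $L < 1$, then summing over the $T$ components and normalizing,

```latex
\frac{1}{T}\sum_{t=1}^{T} \big|\Phi_t(x) - \Phi_t(y)\big| \;\le\; L\,\|x - y\|, \qquad L < 1,
```

so the normalized signature map is a contraction.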