Learning from Small Samples: Transformation-Invariant SVMs with Composition and Locality at Multiple Scales

09/27/2021
by   Tao Liu, et al.

Motivated by the problem of learning when the number of training samples is small, this paper shows how to incorporate into support-vector machines (SVMs) the properties that have made convolutional neural networks (CNNs) successful. Particularly important is the ability to incorporate domain knowledge of invariances, e.g., translational invariance of images. Kernels based on the minimum distance over a group of transformations, which correspond to defining similarity as the best match over the possible transformations, are not generally positive definite. Perhaps for this reason, they have not previously been tested experimentally for their performance or studied theoretically. Previous attempts have instead employed kernels based on the average distance over a group of transformations, which are trivially positive definite but which generally yield both poor margins and poor performance, as we show. We address this lacuna and show that positive definiteness does hold with high probability for kernels based on the minimum distance in the small-training-sample regime of interest, and that these kernels yield the best results in that regime. Another important property of CNNs is their ability to incorporate local features at multiple spatial scales, e.g., through max pooling. A third important property is their ability to provide the benefits of composition through the architecture of multiple layers. We show how these additional properties can also be embedded into SVMs. We verify through experiments on widely available image sets that the resulting SVMs provide superior accuracy compared to well-established deep neural network (DNN) benchmarks for small sample sizes.
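To make the central construction concrete, the sketch below illustrates a kernel built on the minimum distance over a group of transformations, together with an empirical check of positive definiteness via the smallest eigenvalue of the Gram matrix. This is an illustrative sketch only, not the paper's implementation: the RBF-style kernel form, the cyclic-shift transformation group, and all function names are assumptions.

```python
import numpy as np

def min_transform_distance(x, y, transforms):
    """Smallest Euclidean distance between x and any transformed copy of y.

    `transforms` is a list of functions implementing the group action
    (here, cyclic shifts); the paper's exact distance and group may differ.
    """
    return min(np.linalg.norm(x - t(y)) for t in transforms)

def min_distance_kernel(X, transforms, gamma=1.0):
    """Gram matrix of an RBF-style kernel on the minimum distance.

    Such kernels are not positive definite in general; the paper argues
    positive definiteness holds with high probability for small samples.
    """
    n = len(X)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            d = min_transform_distance(X[i], X[j], transforms)
            K[i, j] = np.exp(-gamma * d ** 2)
    return K

# Toy example: 1-D signals with cyclic shifts as the transformation group
# (shifts are isometries, so the resulting matrix is symmetric).
rng = np.random.default_rng(0)
X = [rng.normal(size=16) for _ in range(10)]
shifts = [lambda v, s=s: np.roll(v, s) for s in range(16)]
K = min_distance_kernel(X, shifts, gamma=0.1)

# Empirical positive-definiteness check: smallest eigenvalue of the Gram matrix.
print("smallest eigenvalue:", np.linalg.eigvalsh(K).min())
```

A kernel based on the average distance would replace the `min` above with a mean over the group, which guarantees positive definiteness but, as the abstract notes, tends to blur the similarity and degrade margins.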
