Using wavelets to analyze similarities in image datasets

02/24/2020
by Roozbeh Yousefzadeh, et al.

Deep learning image classifiers usually rely on huge training sets, and their training process can be described as learning the similarities and differences among training images. However, images in large training sets are rarely studied from this perspective, and fine-level similarities and differences among them are usually overlooked. Some studies aim to identify influential and redundant training images, but such methods require a model that is already trained on the entire training set. Here, we show that analyzing the contents of large training sets can provide valuable insights about the classification task at hand, prior to training a model on them. We use wavelet decomposition of images and other image processing tools to perform this analysis, with no need for a pre-trained model, which makes the analysis of training sets straightforward and fast. We show that similar images in standard datasets (such as CIFAR) can be identified in a few seconds, a significant speed-up compared to alternative methods in the literature. We also show that similarities between training and testing images may explain the generalization of models and their mistakes. Finally, we investigate the similarities between images in relation to the decision boundaries of a trained model.
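The paper's own pipeline is not reproduced here, but the core idea of comparing images through their wavelet coefficients can be sketched as follows. The wavelet choice ("haar"), decomposition level, normalization, and distance threshold are illustrative assumptions rather than the authors' settings, and the helper names are hypothetical; the sketch relies on the PyWavelets library and random stand-in data shaped like CIFAR images.

```python
# A minimal sketch, assuming a wavelet-based similarity search: decompose each
# image with a 2-D wavelet transform, keep the coarse approximation
# coefficients as a compact signature, and flag image pairs whose signatures
# are unusually close. Wavelet, level, and threshold are illustrative choices.
import numpy as np
import pywt  # PyWavelets


def wavelet_signature(image, wavelet="haar", level=2):
    """Flattened vector of coarse wavelet coefficients for one H x W x C image."""
    channels = []
    for c in range(image.shape[-1]):
        # wavedec2 returns [cA_level, (cH, cV, cD), ...]; coeffs[0] is the
        # low-frequency approximation at the coarsest level.
        coeffs = pywt.wavedec2(image[..., c], wavelet=wavelet, level=level)
        channels.append(coeffs[0].ravel())
    return np.concatenate(channels)


def find_similar_pairs(images, threshold=0.05):
    """Index pairs (i, j) whose normalized signatures are closer than `threshold`."""
    sigs = np.stack([wavelet_signature(img) for img in images])
    sigs /= np.linalg.norm(sigs, axis=1, keepdims=True)  # remove brightness scale
    pairs = []
    for i in range(len(sigs)):
        # Euclidean distance from image i to all later images.
        d = np.linalg.norm(sigs[i + 1:] - sigs[i], axis=1)
        for j in np.flatnonzero(d < threshold):
            pairs.append((i, i + 1 + j))
    return pairs


if __name__ == "__main__":
    # Random stand-in data shaped like CIFAR images (32x32x3, floats in [0, 1]).
    rng = np.random.default_rng(0)
    images = rng.random((100, 32, 32, 3)).astype(np.float32)
    images[1] = images[0]  # plant a duplicate so at least one pair is found
    print(find_similar_pairs(images))
```

Because each signature keeps only the coarse approximation coefficients (an 8x8 block per channel for a 32x32 image at level 2), the comparison operates on short vectors rather than full images, which is in line with the abstract's point that such an analysis can be fast and does not require a trained model.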


