Preventing Manifold Intrusion with Locality: Local Mixup

01/12/2022
by Raphael Baena, et al.

Mixup is a data-dependent regularization technique that consists of linearly interpolating input samples and their associated outputs. It has been shown to improve accuracy when used for training on standard machine learning datasets. However, authors have pointed out that Mixup can produce out-of-distribution virtual samples and even contradictions in the augmented training set, potentially resulting in adversarial effects. In this paper, we introduce Local Mixup, in which distant input samples are weighted down when computing the loss. In constrained settings we demonstrate that Local Mixup can create a trade-off between bias and variance, with the extreme cases reducing to vanilla training and classical Mixup. Using standardized computer vision benchmarks, we also show that Local Mixup can improve test accuracy.
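For illustration, the sketch below shows one way the idea could be implemented in PyTorch. The Gaussian-kernel weighting on pairwise input distance, the `sigma` bandwidth parameter, and the function name `local_mixup_loss` are assumptions made for this example only; the paper's exact weighting scheme may differ.

```python
# Minimal Local Mixup sketch in PyTorch (illustrative, not the paper's code).
import torch
import torch.nn.functional as F

def local_mixup_loss(model, x, y, alpha=1.0, sigma=1.0):
    """Compute a Local Mixup loss for one batch (x, y).

    `alpha` is the Beta-distribution parameter of classical Mixup;
    `sigma` is an assumed bandwidth for the distance-based weighting.
    """
    # Sample the interpolation coefficient, as in classical Mixup.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0), device=x.device)

    # Linearly interpolate pairs of input samples.
    x_mix = lam * x + (1.0 - lam) * x[perm]
    logits = model(x_mix)

    # Per-sample loss against both endpoints' labels.
    loss_a = F.cross_entropy(logits, y, reduction="none")
    loss_b = F.cross_entropy(logits, y[perm], reduction="none")
    pair_loss = lam * loss_a + (1.0 - lam) * loss_b

    # Local Mixup: weight down pairs whose endpoints are far apart
    # in input space (Gaussian kernel chosen here for illustration).
    dist = (x - x[perm]).flatten(start_dim=1).norm(dim=1)
    weights = torch.exp(-dist.pow(2) / (2.0 * sigma ** 2))

    return (weights * pair_loss).mean()
```

Here `sigma` controls how fast a mixed pair's contribution decays with the distance between its two endpoints: a very large `sigma` weights all pairs equally and recovers classical Mixup, while a small `sigma` suppresses distant pairs, mirroring the bias-variance trade-off mentioned in the abstract.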

