Enhancing Mixup-based Semi-Supervised Learning with Explicit Lipschitz Regularization

09/23/2020
by Prashnna Kumar Gyawali, et al.

The success of deep learning relies on the availability of large-scale annotated data sets, the acquisition of which can be costly and require expert domain knowledge. Semi-supervised learning (SSL) mitigates this challenge by exploiting the behavior of the neural function on large amounts of unlabeled data. Smoothness of the neural function is an assumption commonly exploited in SSL. A successful example is the adoption of the mixup strategy in SSL, which enforces global smoothness of the neural function by encouraging it to behave linearly when interpolating between training examples. Despite its empirical success, however, the theoretical underpinning of how mixup regularizes the neural function has not been fully understood. In this paper, we offer a theoretically substantiated proposition that mixup improves the smoothness of the neural function by bounding the Lipschitz constant of the gradient function of the neural network. We then propose that this can be strengthened by simultaneously constraining the Lipschitz constant of the neural function itself through adversarial Lipschitz regularization, encouraging the neural function to behave linearly while also constraining the slope of this linear function. On three benchmark data sets and one real-world biomedical data set, we demonstrate that this combined regularization improves the generalization performance of SSL when learning from a small amount of labeled data. We further demonstrate the robustness of the presented method against single-step adversarial attacks. Our code is available at https://github.com/Prasanna1991/Mixup-LR.
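To make the two regularizers concrete, below is a minimal PyTorch sketch of the ideas the abstract describes: mixup interpolation between pairs of examples, and a Lipschitz-style penalty on the local slope of the network. The function names (`mixup_pair`, `lipschitz_penalty`), the hyperparameter values, and the use of a random perturbation direction (in place of the adversarially chosen one in adversarial Lipschitz regularization) are illustrative assumptions, not the paper's exact implementation; in the SSL setting, labels for unlabeled examples would be model-generated pseudo-labels.

```python
import torch
import torch.nn as nn

def mixup_pair(x1, y1, x2, y2, alpha=0.75):
    # Mixup: train on convex combinations of inputs and (one-hot/soft) labels,
    # encouraging the network to behave linearly between training examples.
    lam = torch.distributions.Beta(alpha, alpha).sample()
    x_mix = lam * x1 + (1.0 - lam) * x2
    y_mix = lam * y1 + (1.0 - lam) * y2
    return x_mix, y_mix

def lipschitz_penalty(model, x, eps=1e-2):
    # Finite-difference slope penalty (a hypothetical stand-in for the paper's
    # adversarial Lipschitz regularization): penalize ||f(x + d) - f(x)|| / ||d||
    # for a small random perturbation d, constraining the local slope
    # (Lipschitz constant) of the network around x.
    d = torch.randn_like(x)
    d = eps * d / d.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, *([1] * (x.dim() - 1)))
    slope = (model(x + d) - model(x)).flatten(1).norm(dim=1) / d.flatten(1).norm(dim=1)
    return slope.mean()

# Usage sketch on random data with a toy classifier.
model = nn.Sequential(nn.Flatten(), nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))
x1, x2 = torch.randn(8, 32), torch.randn(8, 32)
y1 = torch.eye(10)[torch.randint(10, (8,))]  # one-hot labels
y2 = torch.eye(10)[torch.randint(10, (8,))]

x_mix, y_mix = mixup_pair(x1, y1, x2, y2)
logits = model(x_mix)
ce = -(y_mix * logits.log_softmax(dim=1)).sum(dim=1).mean()  # soft-label cross-entropy
loss = ce + 0.1 * lipschitz_penalty(model, x_mix)            # combined objective
loss.backward()
```

The combined objective reflects the paper's central proposition: mixup bounds the Lipschitz constant of the gradient function, while the explicit slope penalty constrains the Lipschitz constant of the function itself, so the network is pushed to be linear between examples and to keep that linear slope small.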
