Input and Weight Space Smoothing for Semi-supervised Learning

05/23/2018
by Safa Cicek, et al.

We propose regularizing the empirical loss for semi-supervised learning by acting on both the input (data) space and the weight (parameter) space. We show that the two are not equivalent but complementary: one affects the minimality of the resulting representation, the other its insensitivity to nuisance variability. We propose a method to perform such smoothing, which combines known input-space smoothing with a novel weight-space smoothing based on a min-max (adversarial) optimization. The resulting Adversarial Block Coordinate Descent (ABCD) algorithm performs gradient ascent with a small learning rate on a random subset of the weights and standard gradient descent on the remaining weights within the same mini-batch. It achieves performance comparable to the state-of-the-art without resorting to heavy data augmentation, using a relatively simple architecture.
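The weight-space smoothing step can be illustrated with a short sketch. Below is a minimal, hypothetical PyTorch rendering of one ABCD update as described in the abstract: a random subset of the weights receives a gradient-ascent update with a small learning rate, while the remaining weights receive a standard gradient-descent update on the same mini-batch. The function name `abcd_step`, the mask-based block selection, and the specific learning rates and ascent fraction are illustrative assumptions, not the authors' exact implementation.

```python
import torch

def abcd_step(model, loss, ascent_lr=1e-4, descent_lr=1e-2, ascent_frac=0.1):
    """One illustrative ABCD update on a single mini-batch (sketch, not the
    authors' reference code).

    A random subset of the weights (roughly `ascent_frac` of each tensor) is
    updated by gradient ascent with the small rate `ascent_lr`; all remaining
    weights are updated by standard gradient descent with `descent_lr`.
    """
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            # Randomly select the block of weights to update adversarially.
            ascent_mask = (torch.rand_like(p) < ascent_frac).float()
            # Gradient ascent (+) on the selected block.
            p += ascent_lr * ascent_mask * p.grad
            # Standard gradient descent (-) on the remaining weights.
            p -= descent_lr * (1.0 - ascent_mask) * p.grad
```

In practice such a step would be called once per mini-batch in place of the usual optimizer step, with the loss computed from both labeled and unlabeled terms; the hyperparameters above are placeholders to be tuned.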
