GLISTER: Generalization based Data Subset Selection for Efficient and Robust Learning

12/19/2020
by   KrishnaTeja Killamsetty, et al.

Large-scale machine learning and deep models are extremely data-hungry. Unfortunately, obtaining large amounts of labeled data is expensive, and training state-of-the-art models (with hyperparameter tuning) requires significant computing resources and time. Moreover, real-world data is noisy and imbalanced. As a result, several recent papers try to make the training process more efficient and robust, but most existing work focuses on either robustness or efficiency, not both. In this work, we introduce Glister, a GeneraLIzation based data Subset selecTion for Efficient and Robust learning framework. We formulate Glister as a mixed discrete-continuous bi-level optimization problem that selects a subset of the training data which maximizes the log-likelihood on a held-out validation set. Next, we propose an iterative online algorithm, Glister-Online, which performs data selection iteratively along with the parameter updates and can be applied to any loss-based learning algorithm. We then show that for a rich class of loss functions, including cross-entropy, hinge loss, squared loss, and logistic loss, the inner discrete data selection is an instance of (weakly) submodular optimization, and we analyze the conditions under which Glister-Online reduces the validation loss and converges. Finally, we propose Glister-Active, an extension to batch active learning, and we empirically demonstrate the performance of Glister on a wide range of tasks, including (a) data selection to reduce training time, (b) robust learning under label noise and class imbalance, and (c) batch active learning with several deep and shallow models. We show that our framework improves upon the state of the art in both efficiency and accuracy in cases (a) and (c), and is more efficient than other state-of-the-art robust-learning algorithms in case (b).
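The Glister-Online idea described in the abstract — alternate greedy subset selection (maximizing validation log-likelihood) with parameter updates on the selected subset — can be sketched in a few lines. The following is a toy illustration only, not the authors' implementation: it assumes a logistic-regression model on synthetic data, approximates each candidate point's marginal gain with a first-order Taylor expansion of the validation loss (the inner product of its training gradient with the validation gradient), and uses illustrative names, learning rates, and subset sizes.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_per_example(w, X, y):
    # Per-example logistic-loss gradients, shape (n, d).
    p = sigmoid(X @ w)
    return (p - y)[:, None] * X

def glister_select(w, X_tr, y_tr, X_val, y_val, k, eta):
    """Greedy subset selection (toy sketch of the inner discrete step).

    Gain of adding point i is approximated to first order as
    eta * <g_i, g_val>, where g_val is re-evaluated after simulating
    a one-step update on each point already chosen.
    """
    g_tr = grad_per_example(w, X_tr, y_tr)  # fixed at current w (approximation)
    chosen, w_cur = [], w.copy()
    for _ in range(k):
        g_val = grad_per_example(w_cur, X_val, y_val).mean(axis=0)
        gains = g_tr @ g_val                 # first-order gain estimate
        if chosen:
            gains[chosen] = -np.inf          # no repeats
        i = int(np.argmax(gains))
        chosen.append(i)
        w_cur = w_cur - eta * g_tr[i]        # simulate one-step update
    return chosen

# Toy linearly separable data.
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)
X_tr, y_tr, X_val, y_val = X[:150], y[:150], X[150:], y[150:]

w = np.zeros(d)
for _ in range(20):                          # alternate: select, then train
    S = glister_select(w, X_tr, y_tr, X_val, y_val, k=30, eta=0.5)
    for _ in range(5):                       # gradient steps on the subset only
        w -= 0.5 * grad_per_example(w, X_tr[S], y_tr[S]).mean(axis=0)

val_acc = ((sigmoid(X_val @ w) > 0.5) == y_val).mean()
```

The greedy loop mirrors the paper's claim that the inner discrete problem is (weakly) submodular: points are added one at a time by approximate marginal gain on held-out validation loss, which keeps selection linear in the pool size per step.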


Related research

01/03/2019 · Learning From Less Data: A Unified Data Subset Selection and Active Learning Framework for Computer Vision
Supervised machine learning based state-of-the-art computer vision techn...

05/28/2018 · Learning From Less Data: Diversified Subset Selection and Active Learning in Image Classification Tasks
Supervised machine learning based state-of-the-art computer vision techn...

06/14/2021 · RETRIEVE: Coreset Selection for Efficient and Robust Semi-Supervised Learning
Semi-supervised learning (SSL) algorithms have had great success in rece...

06/26/2019 · Selection Via Proxy: Efficient Data Selection For Deep Learning
Data selection methods such as active learning and core-set selection ar...

12/27/2021 · BALanCe: Deep Bayesian Active Learning via Equivalence Class Annealing
Active learning has demonstrated data efficiency in many fields. Existin...

09/26/2021 · Data Summarization via Bilevel Optimization
The increasing availability of massive data sets poses a series of chall...

06/14/2022 · Prioritized Training on Points that are Learnable, Worth Learning, and Not Yet Learnt
Training on web-scale data can take months. But most computation and tim...