Carpe Diem, Seize the Samples Uncertain "At the Moment" for Adaptive Batch Selection

11/19/2019
by Hwanjun Song, et al.

The performance of deep neural networks is significantly affected by how well mini-batches are constructed. In this paper, we propose a novel adaptive batch selection algorithm called Recency Bias that exploits uncertain samples, those predicted inconsistently in recent iterations. The historical label predictions of each sample are used to evaluate its predictive uncertainty within a sliding window. By taking advantage of this design, Recency Bias not only accelerates training but also achieves a more accurate network. We demonstrate the superiority of Recency Bias through extensive evaluation on two independent tasks. Compared with existing batch selection methods, Recency Bias reduced the test error by up to 20.5% and, at the same time, improved the training time by up to 59.3% to reach the same test error.
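The mechanism the abstract describes, keeping a sliding window of each sample's recent predicted labels and treating a sample as uncertain when those predictions disagree, can be sketched in a few lines. The sketch below is illustrative only, assuming uncertainty is measured as the normalized entropy of the label distribution inside the window; the class and method names (SlidingWindowUncertainty, record, uncertainty) are ours, not the authors' implementation.

```python
import math
from collections import deque

class SlidingWindowUncertainty:
    """Per-sample history of predicted labels over the last `window_size`
    evaluations, scored by how inconsistent the predictions are.
    (Illustrative sketch, not the paper's reference implementation.)"""

    def __init__(self, num_samples, num_classes, window_size=10):
        self.num_classes = num_classes
        # One fixed-length deque of predicted labels per training sample;
        # appending beyond `window_size` silently drops the oldest entry.
        self.history = [deque(maxlen=window_size) for _ in range(num_samples)]

    def record(self, sample_idx, predicted_label):
        """Store the label the network predicted for this sample."""
        self.history[sample_idx].append(predicted_label)

    def uncertainty(self, sample_idx):
        """Normalized entropy of the labels in the window: 0.0 when every
        recent prediction agrees, near 1.0 when they are maximally mixed."""
        hist = self.history[sample_idx]
        if not hist:
            return 1.0  # no history yet: treat as maximally uncertain
        counts = {}
        for label in hist:
            counts[label] = counts.get(label, 0) + 1
        n = len(hist)
        entropy = -sum(c / n * math.log(c / n) for c in counts.values())
        return entropy / math.log(self.num_classes)  # standardize to [0, 1]

# Example: a sample whose recent predictions flip between classes is
# scored as more uncertain than one the network labels consistently.
tracker = SlidingWindowUncertainty(num_samples=2, num_classes=10)
for label in (3, 7, 3, 7):
    tracker.record(0, label)   # inconsistent: alternates between 3 and 7
for label in (5, 5, 5, 5):
    tracker.record(1, label)   # consistent: always class 5
assert tracker.uncertainty(0) > tracker.uncertainty(1)
```

An adaptive batch selector in this spirit would then draw mini-batch indices with probability increasing in this uncertainty score, so that samples the network has recently flip-flopped on are revisited more often than ones it already predicts consistently.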


Related research:

05/31/2022 · Mitigating Dataset Bias by Using Per-sample Gradient
The performance of deep neural networks is strongly influenced by the tr...

04/07/2023 · Can we learn better with hard samples?
In deep learning, mini-batch training is commonly used to optimize netwo...

04/24/2017 · Active Bias: Training More Accurate Neural Networks by Emphasizing High Variance Samples
Self-paced learning and hard example mining re-weight training instances...

06/29/2021 · Adaptive Sample Selection for Robust Learning under Label Noise
Deep Neural Networks (DNNs) have been shown to be susceptible to memoriz...

08/22/2022 · Selection Collider Bias in Large Language Models
In this paper we motivate the causal mechanisms behind sample selection ...

10/16/2020 · Sliding-Window QPS (SW-QPS): A Perfect Parallel Iterative Switching Algorithm for Input-Queued Switches
In this work, we first propose a parallel batch switching algorithm call...

09/30/2022 · Exploiting Selection Bias on Underspecified Tasks in Large Language Models
In this paper we motivate the causal mechanisms behind sample selection ...
