Using the Overlapping Score to Improve Corruption Benchmarks

05/26/2021
by Alfred Laugros, et al.

Neural networks are sensitive to various corruptions that commonly occur in real-world applications, such as blur, noise, or low-lighting conditions. To estimate the robustness of neural networks to these common corruptions, we generally use a group of modeled corruptions gathered into a benchmark. Unfortunately, no objective criterion exists to determine whether a benchmark is representative of a large diversity of independent corruptions. In this paper, we propose a metric called the corruption overlapping score, which can be used to reveal flaws in corruption benchmarks. Two corruptions overlap when the robustness of neural networks to one of them is correlated with their robustness to the other. We argue that taking overlaps between corruptions into account can help improve existing benchmarks or build better ones.
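
As a rough illustration of the idea (not the paper's exact formulation), the sketch below assumes the overlapping score between two corruptions is computed as the correlation, across a pool of models, between their robustness to each corruption (here, accuracy under each corruption). All model accuracies below are made-up numbers.

```python
import numpy as np
from scipy.stats import pearsonr

def overlapping_score(robustness_to_a, robustness_to_b):
    """Pearson correlation between the robustness of a pool of models to
    corruption A and their robustness to corruption B.

    A score close to 1 suggests the two corruptions probe largely the same
    kind of robustness (high overlap); a score close to 0 suggests they are
    complementary. Illustrative proxy only, not the paper's exact definition.
    """
    r, _ = pearsonr(robustness_to_a, robustness_to_b)
    return r

# Hypothetical accuracies of five models under two different corruptions.
acc_under_gaussian_noise = np.array([0.62, 0.55, 0.71, 0.48, 0.66])
acc_under_shot_noise = np.array([0.60, 0.53, 0.69, 0.50, 0.64])

print(overlapping_score(acc_under_gaussian_noise, acc_under_shot_noise))
```

Read this way, a benchmark whose corruptions all overlap strongly would cover a narrower range of robustness properties than its number of corruptions suggests.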

Related research

07/26/2021
Using Synthetic Corruptions to Measure Robustness to Natural Distribution Shifts
Synthetic corruptions gathered into a benchmark are frequently used to m...

02/14/2023
READIN: A Chinese Multi-Task Benchmark with Realistic and Diverse Input Noises
For many real-world applications, the user-generated inputs usually cont...

05/28/2018
Confidence Prediction for Lexicon-Free OCR
Having a reliable accuracy score is crucial for real world applications ...

09/04/2019
Are Adversarial Robustness and Common Perturbation Robustness Independant Attributes ?
Neural Networks have been shown to be sensitive to common perturbations ...

10/31/2020
Asymptotic Theory of Expectile Neural Networks
Neural networks are becoming an increasingly important tool in applicati...

10/03/2018
Neural Segmental Hypergraphs for Overlapping Mention Recognition
In this work, we propose a novel segmental hypergraph representation to ...
