Lower Bounds on Cross-Entropy Loss in the Presence of Test-time Adversaries

04/16/2021
by   Arjun Nitin Bhagoji, et al.

Understanding the fundamental limits of robust supervised learning has emerged as a problem of immense interest, from both practical and theoretical standpoints. In particular, it is critical to determine classifier-agnostic bounds on the training loss in order to establish when learning is possible. In this paper, we determine optimal lower bounds on the cross-entropy loss in the presence of test-time adversaries, along with the corresponding optimal classification outputs. Our formulation of the bound as the solution to an optimization problem is general enough to encompass any loss function that depends on soft classifier outputs. We also propose, and prove the correctness of, a bespoke algorithm to compute this lower bound efficiently, allowing us to determine lower bounds for multiple practical datasets of interest. We use our lower bounds as a diagnostic tool to assess the effectiveness of current robust training methods and find a gap from optimality at larger perturbation budgets. Finally, we investigate the possibility of using the optimal classification outputs as soft labels to empirically improve robust training.
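To make the core idea concrete, here is a minimal illustrative sketch (not the authors' algorithm) for the binary case. When two training points of opposite classes lie within twice the adversary's budget of each other, the adversary can move both to a common input, so any classifier must emit a single soft output for the pair; the summed cross-entropy −log q − log(1−q) is then minimized at q = 1/2, contributing 2 log 2. Greedily matching disjoint conflicting pairs yields a valid (though not necessarily tight) lower bound. All function names here are hypothetical, and an L2 threat model is assumed.

```python
import numpy as np

def pairwise_conflicts(X, y, eps):
    """Pairs of opposite-class points whose eps-balls (L2) overlap:
    an adversary with budget eps can push both onto a common input."""
    idx0 = np.where(y == 0)[0]
    idx1 = np.where(y == 1)[0]
    conflicts = []
    for i in idx0:
        for j in idx1:
            if np.linalg.norm(X[i] - X[j]) <= 2 * eps:
                conflicts.append((i, j))
    return conflicts

def ce_lower_bound(X, y, eps):
    """Crude lower bound on the average adversarial cross-entropy.

    Each disjoint conflicting pair forces a shared soft output q;
    the pair's summed loss -log q - log(1 - q) is minimized at
    q = 1/2, contributing 2 * log(2). Greedy matching over disjoint
    pairs therefore gives a valid lower bound.
    """
    used = set()
    total = 0.0
    for i, j in pairwise_conflicts(X, y, eps):
        if i not in used and j not in used:
            used.update((i, j))
            total += 2 * np.log(2)
    return total / len(y)
```

For example, two opposite-class points at distance 0.1 with budget eps = 0.1 conflict, giving an average lower bound of log 2 ≈ 0.693; with eps = 0.01 no conflict exists and the bound is 0. The paper's actual bound is computed via a general optimization over all classifier outputs, not this greedy pairing.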

