Certifying Out-of-Domain Generalization for Blackbox Functions

02/03/2022
by Maurice Weber, et al.

Certifying the robustness of model performance under bounded data distribution shifts has recently attracted intense interest under the umbrella of distributional robustness. However, existing techniques either make strong assumptions about the model classes and loss functions that can be certified, such as smoothness expressed via Lipschitz continuity of gradients, or require solving complex optimization problems. As a result, the wider application of these techniques is currently limited by their scalability and flexibility: they often do not scale to large-scale datasets with modern deep neural networks, or cannot handle non-smooth loss functions such as the 0-1 loss. In this paper, we focus on the problem of certifying distributional robustness for black-box models and bounded losses, without any further assumptions. We propose a novel certification framework that requires only a bound on the distance between the means and variances of the two distributions. Our certification technique scales to ImageNet-scale datasets, complex models, and a diverse range of loss functions. We then focus on one specific application enabled by such scalability and flexibility: certifying out-of-domain generalization for large neural networks and loss functions such as accuracy and AUC. We experimentally validate our certification method on a number of datasets, ranging from ImageNet, where we provide the first non-vacuous certificates of out-of-domain generalization, to smaller classification tasks, where we are able to compare with the state of the art and show that our method performs considerably better.
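The abstract only summarizes the framework at a high level. As a rough illustration of the black-box certification workflow it describes, below is a minimal Python sketch. It does not implement the paper's moment-based certificate; instead it swaps in the classical total-variation inequality |E_Q[loss] - E_P[loss]| <= TV(P, Q) for losses bounded in [0, 1], combined with a Hoeffding confidence interval for the empirically estimated source risk. All names (certify_tv, zero_one_loss, the model f) are hypothetical.

```python
# Minimal sketch of certifying a black-box model under a bounded
# distribution shift. NOT the paper's certificate: as a stand-in, it uses
# the classical bound |E_Q[loss] - E_P[loss]| <= TV(P, Q) for a loss in
# [0, 1], plus a Hoeffding interval for the finite-sample source risk.
import math
import numpy as np

def zero_one_loss(model, x, y):
    """0-1 loss of a black-box classifier; only forward queries are used."""
    return float(model(x) != y)

def certify_tv(model, xs, ys, tv_radius, delta=0.05):
    """Upper-bound the expected 0-1 loss on any target domain Q with
    TV(P, Q) <= tv_radius, given i.i.d. source samples (xs, ys) ~ P.

    The bound holds with probability >= 1 - delta over the sampling
    of the source data.
    """
    losses = np.array([zero_one_loss(model, x, y) for x, y in zip(xs, ys)])
    n = len(losses)
    emp_mean = losses.mean()
    # Hoeffding: true source risk exceeds emp_mean + eps w.p. <= delta.
    eps = math.sqrt(math.log(1.0 / delta) / (2.0 * n))
    source_risk_ub = min(1.0, emp_mean + eps)
    # For losses in [0, 1]: E_Q[loss] <= E_P[loss] + TV(P, Q).
    return min(1.0, source_risk_ub + tv_radius)

# Usage with a hypothetical black-box classifier f:
#   bound = certify_tv(f, xs_source, ys_source, tv_radius=0.1)
#   print(f"certified target-domain error <= {bound:.3f}")
```

The key property the sketch shares with the paper's setting is that the model is queried purely as a black box: the certificate depends only on statistics of a bounded loss under the source distribution and on the assumed bound on the shift, not on gradients, Lipschitz constants, or any other structural property of the model.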
