Efficient Certification of Spatial Robustness

09/19/2020
by Anian Ruoss, et al.

Recent work has exposed the vulnerability of computer vision models to spatial transformations. Due to the widespread usage of such models in safety-critical applications, it is crucial to quantify their robustness against spatial transformations. However, existing work only provides empirical quantification of spatial robustness via adversarial attacks, which lack provable guarantees. In this work, we propose novel convex relaxations, which enable us, for the first time, to provide a certificate of robustness against spatial transformations. Our convex relaxations are model-agnostic and can be leveraged by a wide range of neural network verifiers. Experiments on several network architectures and different datasets demonstrate the effectiveness and scalability of our method.
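The abstract itself gives no technical details, but the following hypothetical Python sketch illustrates the general pipeline such a certificate builds on: enclose the set of images reachable under a range of rotations in per-pixel intervals, then propagate those intervals through the network with interval bound propagation. The function names (rotate, pixelwise_interval, ibp_forward), the toy network, and the sampling-based enclosure are illustrative assumptions, not the paper's method; in particular, sampling rotations is not sound in general, and the paper's contribution is precisely a set of sound convex relaxations that replace such a naive enclosure.

```python
import numpy as np

def rotate(img, angle_deg):
    # Nearest-neighbor rotation of a square grayscale image about its center.
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    theta = np.deg2rad(angle_deg)
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            # Inverse mapping: sample the source pixel that lands at (y, x).
            sx = cos_t * (x - cx) + sin_t * (y - cy) + cx
            sy = -sin_t * (x - cx) + cos_t * (y - cy) + cy
            si, sj = int(round(sy)), int(round(sx))
            if 0 <= si < h and 0 <= sj < w:
                out[y, x] = img[si, sj]
    return out

def pixelwise_interval(img, max_angle, n_samples=50):
    # Crude sampled enclosure of all rotations in [-max_angle, max_angle]:
    # per-pixel min/max over sampled angles. NOT sound in general; a sound
    # convex relaxation (as in the paper) would replace this step.
    angles = np.linspace(-max_angle, max_angle, n_samples)
    rotated = np.stack([rotate(img, a) for a in angles])
    return rotated.min(axis=0), rotated.max(axis=0)

def ibp_forward(lo, hi, weights, biases):
    # Interval bound propagation through a fully connected ReLU network.
    for i, (W, b) in enumerate(zip(weights, biases)):
        mid, rad = (lo + hi) / 2.0, (hi - lo) / 2.0
        mid_out = W @ mid + b
        rad_out = np.abs(W) @ rad
        lo, hi = mid_out - rad_out, mid_out + rad_out
        if i < len(weights) - 1:  # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

# Toy usage: a 6x6 image, rotations up to 5 degrees, one hidden layer.
rng = np.random.default_rng(0)
img = rng.random((6, 6))
lo_img, hi_img = pixelwise_interval(img, max_angle=5.0)
W1, b1 = rng.standard_normal((8, 36)) * 0.1, np.zeros(8)
W2, b2 = rng.standard_normal((3, 8)) * 0.1, np.zeros(3)
lo_out, hi_out = ibp_forward(lo_img.ravel(), hi_img.ravel(), [W1, W2], [b1, b2])
# The prediction is certified if the true class's output lower bound exceeds
# every other class's upper bound over the whole rotation range.
print(lo_out, hi_out)
```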


