Vision Checklist: Towards Testable Error Analysis of Image Models to Help System Designers Interrogate Model Capabilities

01/27/2022
by   Xin Du, et al.

Using large pre-trained models for image recognition tasks is becoming increasingly common, owing to the widely acknowledged success of recent architectures such as vision transformers and CNN-based models like VGG and ResNet. The high accuracy of these models on benchmark tasks has translated into their practical use across many domains, including safety-critical applications such as autonomous driving and medical diagnostics. Despite their widespread use, image models have been shown to be fragile to changes in the operating environment, bringing their robustness into question. There is an urgent need for methods that systematically characterise and quantify the capabilities of these models, to help designers understand and provide guarantees about their safety and robustness. In this paper, we propose Vision Checklist, a framework that interrogates a model's capabilities in order to produce a report that a system designer can use for robustness evaluations. The framework defines a set of perturbation operations that can be applied to the underlying data to generate test samples of different types. The perturbations reflect potential changes in operating environments and interrogate various properties, ranging from the strictly quantitative to the more qualitative. We evaluate our framework on multiple datasets, including Tiny ImageNet, CIFAR-10, CIFAR-100, and Camelyon17, and on models such as ViT and ResNet. Vision Checklist proposes a specific set of evaluations that can be integrated into the previously proposed concept of a model card. Robustness evaluations like our checklist will be crucial in future safety assessments of visual perception modules, and will be useful to a wide range of stakeholders, including designers, deployers, and regulators involved in the certification of these systems. The source code of Vision Checklist will be made publicly available.
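To make the perturbation-based testing idea concrete, the sketch below shows one plausible shape such a checklist could take: a small set of image perturbations (rotation, blur, contrast change, occlusion) applied to held-out data, with per-perturbation accuracy collected into a report. This is a minimal illustration only; the operation set, function names, and report format are assumptions, not the paper's actual API.

```python
# Illustrative sketch of checklist-style perturbation testing.
# The perturbation set and report structure are assumptions for
# illustration, not the Vision Checklist codebase.
from dataclasses import dataclass
from typing import Callable, Dict, List

import torch
from torchvision import transforms

# Example perturbation operations reflecting potential changes in the
# operating environment. Each maps an image tensor (C, H, W) to a
# perturbed test sample of the same shape.
PERTURBATIONS: Dict[str, Callable[[torch.Tensor], torch.Tensor]] = {
    "rotate_15deg": transforms.RandomRotation(degrees=(15, 15)),
    "gaussian_blur": transforms.GaussianBlur(kernel_size=5, sigma=2.0),
    "low_contrast": transforms.ColorJitter(contrast=(0.3, 0.3)),
    "occlusion": transforms.RandomErasing(p=1.0, scale=(0.1, 0.2)),
}

@dataclass
class ChecklistEntry:
    perturbation: str
    accuracy: float

def run_checklist(model: torch.nn.Module,
                  images: torch.Tensor,
                  labels: torch.Tensor) -> List[ChecklistEntry]:
    """Apply each perturbation, re-evaluate the classifier, and
    collect per-perturbation accuracy into a report."""
    model.eval()
    report: List[ChecklistEntry] = []
    with torch.no_grad():
        for name, op in PERTURBATIONS.items():
            perturbed = torch.stack([op(img) for img in images])
            preds = model(perturbed).argmax(dim=1)
            acc = (preds == labels).float().mean().item()
            report.append(ChecklistEntry(name, acc))
    return report
```

A report of this form, one accuracy figure per perturbation type, is the kind of evaluation summary that could be attached to a model card for downstream designers and regulators.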


