Finding Differences Between Transformers and ConvNets Using Counterfactual Simulation Testing

11/29/2022
by Nataniel Ruiz, et al.

Modern deep neural networks tend to be evaluated on static test sets. One shortcoming of this practice is that such networks cannot easily be evaluated for robustness to specific scene variations. For example, it is hard to study their robustness to variations of object scale, object pose, scene lighting and 3D occlusions. The main reason is that collecting real datasets with fine-grained naturalistic variations at sufficient scale can be extremely time-consuming and expensive. In this work, we present Counterfactual Simulation Testing, a counterfactual framework that lets us study the robustness of neural networks with respect to some of these naturalistic variations by building realistic synthetic scenes and posing counterfactual questions to the models, ultimately providing answers to questions such as "Would your classification still be correct if the object were viewed from the top?" or "Would your classification still be correct if the object were partially occluded by another object?". Our method allows for a fair comparison of the robustness of recently released, state-of-the-art Convolutional Neural Networks and Vision Transformers with respect to these naturalistic variations. We find evidence that ConvNeXt is more robust to pose and scale variations than Swin, that ConvNeXt generalizes better to our simulated domain, and that Swin handles partial occlusion better than ConvNeXt. We also find that robustness for all networks improves with network scale and with data scale and variety. We release the Naturalistic Variation Object Dataset (NVD), a large simulated dataset of 272k images of everyday objects with naturalistic variations such as object pose, scale, viewpoint, lighting and occlusions. Project page: https://counterfactualsimulation.github.io
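The core evaluation idea described above can be sketched as a simple loop: for each simulated scene, classify the base render, then ask whether a correct prediction survives each counterfactual variant (changed pose, scale, occlusion, etc.). This is a minimal illustrative sketch, not the authors' code; the `classify` callable and the scene representation are assumptions for the example.

```python
# Hedged sketch of counterfactual robustness scoring (illustrative, not the
# paper's implementation). `classify` maps an image to a predicted label;
# `scenes` is a list of (base_image, counterfactual_variants, true_label).

def counterfactual_robustness(classify, scenes):
    """Fraction of (scene, variant) pairs where a correct prediction on the
    base render remains correct under the counterfactual variation."""
    preserved, total = 0, 0
    for base, variants, label in scenes:
        if classify(base) != label:
            continue  # the counterfactual question is only posed when the
                      # base prediction is already correct
        for variant in variants:
            total += 1
            preserved += int(classify(variant) == label)
    return preserved / total if total else float("nan")
```

Usage on toy data: with an identity "classifier" over integer labels, `counterfactual_robustness(lambda x: x, [(0, [0, 1], 0), (1, [1], 0)])` evaluates only the first scene (the second scene's base prediction is wrong) and scores 1 preserved variant out of 2, i.e. 0.5.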


Related research

- Visual Servoing from Deep Neural Networks (05/24/2017): We present a deep neural network-based method to perform high-precision,...
- SI-Score: An image dataset for fine-grained analysis of robustness to object location, rotation and size (04/09/2021): Before deploying machine learning models it is critical to assess their ...
- LANe: Lighting-Aware Neural Fields for Compositional Scene Synthesis (04/06/2023): Neural fields have recently enjoyed great success in representing and re...
- TDAPNet: Prototype Network with Recurrent Top-Down Attention for Robust Object Classification under Partial Occlusion (09/09/2019): Despite deep convolutional neural networks' great success in object clas...
- What can we learn about CNNs from a large scale controlled object dataset? (12/04/2015): Tolerance to image variations (e.g. translation, scale, pose, illuminati...
- On the Robustness of 3D Object Detectors (07/20/2022): In recent years, significant progress has been achieved for 3D object de...
- Small in-distribution changes in 3D perspective and lighting fool both CNNs and Transformers (06/30/2021): Neural networks are susceptible to small transformations including 2D ro...
