
Adversarial Geometry and Lighting using a Differentiable Renderer

08/08/2018
by   Hsueh-Ti Derek Liu, et al.
University of Toronto
Carnegie Mellon University
McGill University

Many machine learning classifiers are vulnerable to adversarial attacks: inputs with perturbations designed to intentionally trigger misclassification. Modern adversarial methods either directly alter pixel colors or "paint" colors onto 3D shapes. We propose novel adversarial attacks that directly alter the geometry of 3D objects and/or manipulate the lighting in a virtual scene. We leverage a novel differentiable renderer that is efficient to evaluate and analytically differentiate. Our renderer generates images realistic enough for correct classification by common pre-trained models, and we use it to design physical adversarial examples that consistently fool these models. We conduct qualitative and quantitative experiments to validate our adversarial geometry and adversarial lighting attack capabilities.
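The core idea of attacking through a differentiable renderer can be sketched with a toy example: treat the image as a differentiable function of scene parameters (geometry or lighting), then push those parameters along the gradient of the classifier's loss. The sketch below uses a linear stand-in for the renderer and a linear binary classifier; all names, shapes, and the attack loop are illustrative assumptions, not the paper's actual method or code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_params, n_pixels = 4, 8

# Toy stand-in for a differentiable renderer: pixel intensities are a
# linear map of the geometry parameters theta. Real renderers are
# nonlinear, but the attack loop below has the same structure.
R = rng.normal(size=(n_pixels, n_params))   # renderer Jacobian d(image)/d(theta)
W = rng.normal(size=(2, n_pixels))          # toy binary linear classifier

def render(theta):
    return R @ theta

def predict(theta):
    return int(np.argmax(W @ render(theta)))

theta = rng.normal(size=n_params)           # initial geometry parameters
y = predict(theta)                          # class assigned to the clean render

# Untargeted attack: gradient descent on the classification margin
# (true-class logit minus other-class logit), differentiated all the
# way through the renderer via the chain rule.
grad = (W[y] - W[1 - y]) @ R                # d(margin)/d(theta)
for _ in range(200):
    theta = theta - 0.1 * grad
    if predict(theta) != y:
        break

misclassified = predict(theta) != y         # perturbed geometry fools the model
```

In a real pipeline, `render` would be the paper's analytically differentiable renderer and `predict` a pre-trained image classifier, with the gradient obtained by automatic differentiation rather than the hand-written chain rule above.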

