Fooling Computer Vision into Inferring the Wrong Body Mass Index

05/16/2019
by Owen Levin, et al.

Recently it has been shown that neural networks can use images of human faces to accurately predict Body Mass Index (BMI), a widely used health indicator. In this paper, we demonstrate that a neural network performing BMI inference is indeed vulnerable to test-time adversarial attacks. This extends test-time adversarial attacks from classification tasks to regression. The application we highlight is BMI inference in the insurance industry, where such adversarial attacks imply a danger of insurance fraud.
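The attack described above can be sketched with a fast-gradient-style perturbation adapted to regression. This is a minimal illustration, not the paper's method: it assumes a linear stand-in model (so the gradient is exact and the example is self-contained), a hypothetical 128-dimensional face-feature input, and a squared-error loss pulling the prediction toward an attacker-chosen target BMI.

```python
import numpy as np

# Hypothetical linear "BMI regressor" y = w.x + b, standing in for a
# trained neural network (an assumption for illustration only).
rng = np.random.default_rng(0)
w = rng.normal(size=128)          # weights over 128 face features
b = 22.0
x = rng.normal(size=128) * 0.1    # a benign "face image" feature vector

def predict(x):
    return float(w @ x + b)

def fgsm_regression(x, target, eps=0.05):
    """FGSM-style test-time attack for regression.

    Loss = (predict(x) - target)^2, so the input gradient is
    d(loss)/dx = 2 * (predict(x) - target) * w. We step the input
    against the sign of that gradient to drive the prediction
    toward the attacker's target BMI.
    """
    grad = 2.0 * (predict(x) - target) * w
    return x - eps * np.sign(grad)

target = predict(x) - 5.0          # try to appear 5 BMI points lighter
x_adv = fgsm_regression(x, target)
print("clean prediction:      ", predict(x))
print("adversarial prediction:", predict(x_adv))
```

Because every coordinate of the input moves by at most `eps`, the perturbation stays small per pixel/feature while the accumulated change in the output can be large, which is what makes the fraud scenario in the abstract plausible.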

Related research

- 07/20/2020 - Evaluating a Simple Retraining Strategy as a Defense Against Adversarial Attacks. Though deep neural networks (DNNs) have shown superiority over other tec...
- 12/07/2018 - Adversarial Attacks, Regression, and Numerical Stability Regularization. Adversarial attacks against neural networks in a regression setting are ...
- 08/21/2018 - Are You Tampering With My Data? We propose a novel approach towards adversarial attacks on neural networ...
- 11/11/2020 - Adversarial images for the primate brain. Deep artificial neural networks have been proposed as a model of primate...
- 11/10/2022 - Test-time adversarial detection and robustness for localizing humans using ultra wide band channel impulse responses. Keyless entry systems in cars are adopting neural networks for localizin...
- 03/30/2023 - Generating Adversarial Samples in Mini-Batches May Be Detrimental To Adversarial Robustness. Neural networks have been proven to be both highly effective within comp...
- 08/20/2021 - Detecting and Segmenting Adversarial Graphics Patterns from Images. Adversarial attacks pose a substantial threat to computer vision system ...
