A Backdoor Attack against 3D Point Cloud Classifiers

04/12/2021
by Zhen Xiang, et al.

Vulnerability of 3D point cloud (PC) classifiers has become a grave concern due to the popularity of 3D sensors in safety-critical applications. Existing adversarial attacks against 3D PC classifiers are all test-time evasion (TTE) attacks that aim to induce test-time misclassifications using knowledge of the classifier. But since the victim classifier is usually not accessible to the attacker, the threat is largely diminished in practice, as PC TTEs typically have poor transferability. Here, we propose the first backdoor attack (BA) against PC classifiers. Originally proposed against image classifiers, a BA poisons the victim classifier's training set so that the classifier learns to predict the attacker's target class whenever the attacker's backdoor pattern is present in an input sample. Significantly, BAs do not require knowledge of the victim classifier. Different from image BAs, we propose to insert a cluster of points into a PC as a robust backdoor pattern customized for 3D PCs. Such clusters are also consistent with a physical attack (i.e., with a captured object in a scene). We optimize the cluster's location using an independently trained surrogate classifier and choose the cluster's local geometry to evade possible PC preprocessing and PC anomaly detectors (ADs). Experimentally, our BA achieves a uniformly high success rate (> 87%) and evades state-of-the-art PC ADs.
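To make the poisoning step concrete, below is a minimal sketch of inserting a small point cluster into a point cloud and relabeling the poisoned samples to the target class. It assumes NumPy point clouds of shape (N, 3); the function names, the spherical cluster geometry, and parameters such as `num_points` and `radius` are illustrative assumptions, not the paper's exact construction, and the cluster center is taken as given rather than optimized with a surrogate classifier as in the paper.

```python
import numpy as np

def insert_backdoor_cluster(pc, center, num_points=32, radius=0.05, rng=None):
    """Append a small spherical cluster of points to a clean point cloud.

    pc:     (N, 3) array, the clean point cloud.
    center: (3,) array, spatial location of the backdoor cluster
            (optimized with a surrogate classifier in the paper;
            simply given here).
    Returns a (N + num_points, 3) poisoned point cloud.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Sample points roughly uniformly inside a small ball around `center`;
    # the ball radius controls the cluster's local geometry (illustrative choice).
    directions = rng.normal(size=(num_points, 3))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    radii = radius * rng.uniform(size=(num_points, 1)) ** (1.0 / 3.0)
    cluster = np.asarray(center) + directions * radii
    return np.concatenate([pc, cluster], axis=0)

def poison_training_set(pcs, labels, poison_idx, center, target_class):
    """Poison a subset of the training set: insert the backdoor cluster
    into the selected samples and relabel them to the attacker's target class."""
    pcs = list(pcs)
    labels = np.asarray(labels).copy()
    for i in poison_idx:
        pcs[i] = insert_backdoor_cluster(pcs[i], center)
        labels[i] = target_class
    return pcs, labels
```

A victim who trains on the returned set learns to associate the inserted cluster with the target class; at test time the attacker triggers the misclassification by adding the same cluster (e.g., placing a physical object in the scene).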
