Bias Busters: Robustifying DL-based Lithographic Hotspot Detectors Against Backdooring Attacks

04/26/2020
by Kang Liu, et al.

Deep learning (DL) offers potential improvements throughout the CAD tool-flow, one promising application being lithographic hotspot detection. However, DL techniques have been shown to be especially vulnerable to inference-time and training-time adversarial attacks. Recent work has demonstrated that a small fraction of malicious physical designers can stealthily "backdoor" a DL-based hotspot detector during its training phase such that it accurately classifies regular layout clips but predicts hotspots containing a specially crafted trigger shape as non-hotspots. We propose a novel training data augmentation strategy as a powerful defense against such backdooring attacks. The defense works by eliminating the intentional biases introduced in the training data but does not require knowledge of which training samples are poisoned or the nature of the backdoor trigger. Our results show that the defense can drastically reduce the attack success rate, which reaches 84% without the defense.
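The abstract describes the defense only at a high level (trigger-agnostic augmentation that removes biases from the training data), so the snippet below is a minimal sketch rather than the paper's actual method. It assumes layout clips are 2D arrays and uses simple flips and 90-degree rotations applied uniformly to every training sample; the helper names `augment_clip` and `augment_dataset` and the augmentation choices are hypothetical.

```python
# Minimal sketch of an augmentation-based backdoor defense (assumptions,
# not the paper's exact recipe): every training clip is augmented the same
# way, with no knowledge of which samples are poisoned or what the trigger
# shape is, so a fixed trigger no longer correlates cleanly with a label.
import numpy as np

rng = np.random.default_rng(0)

def augment_clip(clip: np.ndarray) -> np.ndarray:
    """Return a randomly flipped / rotated copy of a layout clip."""
    out = clip.copy()
    if rng.random() < 0.5:
        out = np.fliplr(out)                   # mirror horizontally
    if rng.random() < 0.5:
        out = np.flipud(out)                   # mirror vertically
    return np.rot90(out, k=rng.integers(0, 4)) # rotate by 0/90/180/270 deg

def augment_dataset(clips, labels, copies_per_clip=3):
    """Augment all samples uniformly, keeping the originals."""
    aug_clips, aug_labels = [], []
    for clip, label in zip(clips, labels):
        aug_clips.append(clip)
        aug_labels.append(label)
        for _ in range(copies_per_clip):
            aug_clips.append(augment_clip(clip))
            aug_labels.append(label)
    return np.stack(aug_clips), np.asarray(aug_labels)

if __name__ == "__main__":
    # Toy example: 8 random 64x64 "layout clips" with binary labels.
    clips = rng.integers(0, 2, size=(8, 64, 64)).astype(np.float32)
    labels = rng.integers(0, 2, size=8)
    x_aug, y_aug = augment_dataset(clips, labels)
    print(x_aug.shape, y_aug.shape)  # (32, 64, 64) (32,)
```

The augmented set `x_aug`, `y_aug` would then replace the raw training set when training the hotspot detector; any defense of this kind hinges on the augmentation disrupting the trigger-label correlation without degrading accuracy on clean clips.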

Related research:

- 08/23/2018: Adversarial Attacks on Deep-Learning Based Radio Signal Classification
- 07/13/2021: Thinkback: Task-Specific Out-of-Distribution Detection
- 12/13/2020: DeepSweep: An Evaluation Framework for Mitigating DNN Backdoor Attacks using Data Augmentation
- 01/06/2021: DeepPoison: Feature Transfer Based Stealthy Poisoning Attack
- 10/16/2020: Exploiting Vulnerabilities of Deep Learning-based Energy Theft Detection in AMI through Adversarial Attacks
- 08/18/2023: Attacking logo-based phishing website detectors with adversarial perturbations
- 05/03/2022: Don't sweat the small stuff, classify the rest: Sample Shielding to protect text classifiers against adversarial attacks
