Are Adversarial Perturbations a Showstopper for ML-Based CAD? A Case Study on CNN-Based Lithographic Hotspot Detection

06/25/2019
by Kang Liu, et al.

There is substantial interest in the use of machine learning (ML) based techniques throughout the electronic computer-aided design (CAD) flow, particularly those based on deep learning. However, while deep learning methods have surpassed state-of-the-art performance in several applications, they have exhibited intrinsic susceptibility to adversarial perturbations: small but deliberate alterations to the input of a neural network that precipitate incorrect predictions. In this paper, we investigate whether adversarial perturbations pose risks to ML-based CAD tools and, if so, how these risks can be mitigated. To this end, we use a motivating case study of lithographic hotspot detection, for which convolutional neural networks (CNNs) have shown great promise. In this context, we show the first adversarial perturbation attacks on state-of-the-art CNN-based hotspot detectors; specifically, we show that small (on average 0.5% modified area), functionality-preserving, and design-constraint-satisfying changes to a layout can nonetheless trick a CNN-based hotspot detector into predicting the modified layout as hotspot free (with up to 99.7% success rate). We propose an adversarial retraining strategy to improve the robustness of CNN-based hotspot detection and show that this strategy significantly improves robustness (by a factor of 3) against adversarial attacks without compromising classification accuracy.
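To make the attack idea concrete, the following is a minimal sketch of a generic gradient-sign (FGSM-style) perturbation against a binary hotspot classifier, written in PyTorch. Note that the paper's actual attack inserts DRC-clean, functionality-preserving shapes into the layout clip rather than pixel noise; this sketch only illustrates the underlying concept that tiny, loss-increasing input changes can flip a CNN's prediction. The names `model`, `clip`, and `label` are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def perturb(model, clip, label, eps=0.005):
    """Return `clip` plus a small loss-increasing perturbation (illustrative only)."""
    clip = clip.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(clip), label)
    loss.backward()
    # Move each pixel slightly in the direction that increases the loss,
    # pushing a true "hotspot" clip toward a "non-hotspot" prediction.
    adv = clip + eps * clip.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```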
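On the defense side, the sketch below shows one common form of adversarial retraining, in which each training batch is augmented with adversarially perturbed copies so that the retrained detector learns to classify them correctly. This is a self-contained illustration under the same assumptions as above, not the authors' implementation; `model`, `loader`, `eps`, and the hyperparameters are illustrative.

```python
import torch
import torch.nn.functional as F

def adversarial_retrain(model, loader, epochs=5, eps=0.005, lr=1e-4):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            # Craft perturbed copies against the current model state.
            x_req = x.clone().detach().requires_grad_(True)
            F.cross_entropy(model(x_req), y).backward()
            x_adv = (x_req + eps * x_req.grad.sign()).clamp(0.0, 1.0).detach()
            # Train on clean and adversarial examples with the true labels.
            batch, labels = torch.cat([x, x_adv]), torch.cat([y, y])
            opt.zero_grad()
            F.cross_entropy(model(batch), labels).backward()
            opt.step()
    return model
```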


