On the Security Risks of AutoML

10/12/2021
by Ren Pang, et al.

Neural Architecture Search (NAS) represents an emerging machine learning (ML) paradigm that automatically searches for models tailored to given tasks, which greatly simplifies the development of ML systems and propels the trend of ML democratization. Yet, little is known about the potential security risks incurred by NAS, which is concerning given the increasing use of NAS-generated models in critical domains. This work represents a solid initial step towards bridging the gap. Through an extensive empirical study of 10 popular NAS methods, we show that, compared with their manually designed counterparts, NAS-generated models tend to be more vulnerable to various malicious attacks (e.g., adversarial evasion, model poisoning, and functionality stealing). Further, with both empirical and analytical evidence, we provide possible explanations for this phenomenon: given the prohibitive search space and training cost, most NAS methods favor models that converge quickly in early training stages; this preference results in architectural properties associated with attack vulnerability (e.g., high loss smoothness and low gradient variance). Our findings not only reveal the relationships between model characteristics and attack vulnerability but also suggest inherent connections underlying different attacks. Finally, we discuss potential remedies to mitigate such drawbacks, including increasing cell depth and suppressing skip connections, which lead to several promising research directions.
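To make the adversarial-evasion threat concrete, below is a minimal FGSM-style sketch against a toy logistic-regression "model" in NumPy. This is an illustration of the attack class only, not the paper's experimental setup; all names, weights, and the epsilon value are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """One FGSM step: nudge x in the sign of the loss gradient.

    For binary cross-entropy with a linear logit z = x @ w + b, the
    gradient of the loss w.r.t. x is (p - y) * w, where p = sigmoid(z).
    """
    p = sigmoid(x @ w + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy model and input (hypothetical values).
rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.0
x = rng.normal(size=4)
y = 1.0 if sigmoid(x @ w + b) >= 0.5 else 0.0  # model's own label

x_adv = fgsm_perturb(x, y, w, b, eps=0.5)
print("clean confidence:", sigmoid(x @ w + b))
print("adversarial confidence:", sigmoid(x_adv @ w + b))
```

Because the toy model is linear, a single gradient-sign step provably increases the loss on the model's own label; the paper's observation is that NAS-generated models, with smoother loss landscapes, tend to be easier targets for exactly this kind of gradient-based perturbation.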


