Semantic Adversarial Deep Learning

04/19/2018
by Tommaso Dreossi, et al.

Fueled by massive amounts of data, models produced by machine-learning (ML) algorithms, especially deep neural networks, are being used in diverse domains where trustworthiness is a concern, including automotive systems, finance, health care, natural language processing, and malware detection. Of particular concern is the use of ML algorithms in cyber-physical systems (CPS), such as self-driving cars and aviation, where an adversary's actions can have serious consequences. However, existing approaches to generating adversarial examples and devising robust ML algorithms mostly ignore the semantics and context of the overall system containing the ML component. For example, in an autonomous vehicle using deep learning for perception, not every adversarial example for the neural network necessarily leads to a harmful consequence. Moreover, one may want to prioritize the search for adversarial examples towards those that significantly modify the desired semantics of the overall system. Along the same lines, existing algorithms for constructing robust ML models ignore the specification of the overall system. In this paper, we argue that the semantics and specification of the overall system have a crucial role to play in this line of research. We present preliminary research results that support this claim.
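The core idea sketched in the abstract, that a perturbation should only count as a counterexample if it breaks a system-level property rather than merely flipping a label, can be illustrated with a small, hedged sketch. The snippet below is not from the paper: the linear `perceive` model, the one-line `control` rule, the `spec_satisfied` predicate, and the random-perturbation search are hypothetical placeholders for a real perception network, controller, system specification, and attack. It only shows the filtering step that separates plain misclassifications from semantic counterexamples.

```python
# A minimal sketch of semantics-aware adversarial search, under assumptions not
# taken from the paper: a toy linear classifier stands in for the perception
# network and a trivial closed-loop rule stands in for the overall CPS. A
# perturbation is a *semantic* counterexample only if it changes the
# system-level outcome (the specification), not merely the classifier label.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical perception component: linear scores over 3 classes
# ("pedestrian", "vehicle", "background"). Weights are random placeholders.
W = rng.normal(size=(3, 16))

def perceive(x):
    """Return the predicted class index for feature vector x."""
    return int(np.argmax(W @ x))

def control(label):
    """Hypothetical controller: brake (action 1) iff a pedestrian (class 0) is seen."""
    return 1 if label == 0 else 0

def spec_satisfied(true_label, action):
    """System-level specification: if a pedestrian is truly present, the system must brake."""
    return not (true_label == 0 and action == 0)

def search_counterexamples(x, true_label, eps=0.3, trials=200):
    """Random-perturbation search (a stand-in for a gradient-based attack).

    Returns perturbations that merely flip the classifier output, and the
    subset that also violate the system-level specification.
    """
    misclassifications, semantic_violations = [], []
    base_label = perceive(x)
    for _ in range(trials):
        delta = rng.uniform(-eps, eps, size=x.shape)
        label = perceive(x + delta)
        if label != base_label:
            misclassifications.append(delta)
            if not spec_satisfied(true_label, control(label)):
                semantic_violations.append(delta)
    return misclassifications, semantic_violations

x = rng.normal(size=16)  # placeholder features for a "pedestrian" input
mis, sem = search_counterexamples(x, true_label=0)
print(f"{len(mis)} adversarial examples, {len(sem)} semantic counterexamples")
```

In a realistic setting, `perceive` would be the trained network, `spec_satisfied` a monitor over closed-loop simulation traces, and the random search would be replaced by a gradient-based or falsification-driven attack; the filtering structure, which prioritizes perturbations that violate the system specification, stays the same.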

