
Modular Learning Component Attacks: Today's Reality, Tomorrow's Challenge

08/25/2017
by Xinyang Zhang, et al.

Many of today's machine learning (ML) systems are not built from scratch, but are composed from an array of modular learning components (MLCs). The increasing use of MLCs significantly simplifies ML system development cycles. However, as most MLCs are contributed and maintained by third parties, their lack of standardization and regulation entails profound security implications. In this paper, for the first time, we demonstrate that potentially harmful MLCs pose immense threats to the security of ML systems. We present a broad class of logic-bomb attacks in which maliciously crafted MLCs trigger host systems to malfunction in a predictable manner. By empirically studying two state-of-the-art ML systems in the healthcare domain, we explore the feasibility of such attacks. For example, we show that, without prior knowledge about the host ML system, by modifying only 3.3% of the MLC's parameters, each with distortion below 10^-3, the adversary is able to force the misdiagnosis of target victims' skin cancers with a 100% success rate. We provide analytical justification for the success of such attacks, which points to fundamental characteristics of today's ML models: high dimensionality, non-linearity, and non-convexity. The issue thus seems fundamental to many ML systems. We further discuss potential countermeasures to mitigate MLC-based attacks and the technical challenges they entail.
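To make the attack mechanism concrete, the following is a minimal, purely illustrative sketch (not the paper's actual method): the "MLC" is modeled as a pretrained linear feature extractor, the host system as a linear classifier stacked on top, and the adversary edits a single influential MLC parameter so that one trigger input is misclassified. The shapes, the linear model, and the one-parameter edit are all assumptions for illustration; the real attack perturbs a deep model's parameters under far tighter distortion bounds and without access to the host classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative only): the "MLC" is a pretrained linear feature
# extractor W; the host system composes it with a classifier v.
W = rng.normal(size=(8, 16))       # 128 MLC parameters
v = rng.normal(size=8)             # host classifier weights

def predict(x, W):
    """Host system: class 1 if the composed score is positive."""
    return int(v @ (W @ x) > 0)

x_trigger = rng.normal(size=16)    # the target victim's input
before = predict(x_trigger, W)

# In this linear toy the score is linear in W, so its sensitivity to each
# parameter is the outer product v x^T. The "logic bomb" edits the single
# most influential parameter just enough to flip the trigger's score.
score = v @ W @ x_trigger
grad = np.outer(v, x_trigger)
i, j = np.unravel_index(np.argmax(np.abs(grad)), W.shape)

W_bad = W.copy()
W_bad[i, j] += -1.1 * score / grad[i, j]   # new score becomes -0.1 * score

after = predict(x_trigger, W_bad)
changed = int((W_bad != W).sum())
print(before, after, changed)   # the trigger's prediction flips; 1 of 128 params changed
```

For non-linear, high-dimensional models the same idea applies via gradient information rather than a closed form, and the high dimensionality is what lets the adversary spread tiny per-parameter distortions across many parameters while leaving behavior on benign inputs largely intact.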
