
Modular Learning Component Attacks: Today's Reality, Tomorrow's Challenge

by Xinyang Zhang, et al.

Many of today's machine learning (ML) systems are not built from scratch but are composed from an array of modular learning components (MLCs). The growing use of MLCs significantly simplifies the ML system development cycle. However, because most MLCs are contributed and maintained by third parties, their lack of standardization and regulation carries profound security implications. In this paper, we demonstrate for the first time that potentially harmful MLCs pose immense threats to the security of ML systems. We present a broad class of logic-bomb attacks in which maliciously crafted MLCs trigger their host systems to malfunction in a predictable manner. We explore the feasibility of such attacks through an empirical study of two state-of-the-art ML systems in the healthcare domain. For example, we show that, without prior knowledge of the host ML system, by modifying only 3.3 of the MLC's parameters, each with distortion below 10^-3, the adversary is able to force the misdiagnosis of target victims' skin cancers with a 100% success rate. We provide an analytical justification for the success of such attacks, which points to fundamental characteristics of today's ML models: high dimensionality, non-linearity, and non-convexity. The issue thus appears inherent to many ML systems. We further discuss countermeasures that could mitigate MLC-based attacks and the technical challenges they entail.
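As a purely illustrative sketch (not the paper's actual attack algorithm), the scale of tampering the abstract describes, touching only a tiny fraction of a component's parameters with per-parameter distortion below 10^-3, can be mimicked with NumPy. The array shape, the chosen fraction, and the RNG seed are all assumptions for the example:

```python
import numpy as np

# Hypothetical stand-in for a pretrained MLC's weight tensor.
rng = np.random.default_rng(0)
weights = rng.standard_normal((512, 512))

# Assumed fraction of parameters to modify (illustrative only).
fraction = 0.0033
n_modify = int(weights.size * fraction)
flat_idx = rng.choice(weights.size, size=n_modify, replace=False)

# Each modification stays below the 10^-3 distortion bound from the abstract.
perturbation = rng.uniform(-1e-3, 1e-3, size=n_modify)
tampered = weights.copy()
tampered.flat[flat_idx] += perturbation

n_changed = int(np.count_nonzero(tampered != weights))
max_distortion = float(np.abs(tampered - weights).max())
print(n_changed, weights.size, max_distortion)
```

The point of the sketch is only that such a change is numerically tiny: a handful of parameters move, each by less than 10^-3, which is why detecting a maliciously crafted MLC by inspecting its weights alone is hard.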
