A Marauder's Map of Security and Privacy in Machine Learning

11/03/2018
by   Nicolas Papernot, et al.

There is growing recognition that machine learning (ML) exposes new security and privacy vulnerabilities in software systems, yet the technical community's understanding of the nature and extent of these vulnerabilities remains limited, though it is expanding. In this talk, we explore the threat model space of ML algorithms through the lens of Saltzer and Schroeder's principles for the design of secure computer systems. This characterization of the threat space prompts an investigation of current and future research directions. We structure our discussion around three of these directions, which we believe are likely to lead to significant progress. The first encompasses a spectrum of approaches to verification and admission control, which is a prerequisite for enabling fail-safe defaults in machine learning systems. The second seeks to design mechanisms for assembling reliable records of compromise that would help us understand the degree to which vulnerabilities are exploited by adversaries, as well as favor the psychological acceptability of machine learning applications. The third pursues formal frameworks for security and privacy in machine learning, which we argue should strive to align machine learning goals, such as generalization, with security and privacy desiderata like robustness or privacy. We identify key insights resulting from work on these three directions in both the ML and security communities, and relate the effectiveness of existing approaches to structural elements of ML algorithms and the data used to train them. We conclude by systematizing best practices in our community.
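The first direction, admission control as a prerequisite for fail-safe defaults, can be illustrated with a minimal sketch: before a model is queried, the input is checked against statistics of the training distribution, and the system abstains (the fail-safe default) rather than predict on inputs it was never trained to handle. The function names (`admit`, `predict_or_abstain`) and the per-feature z-score heuristic below are illustrative assumptions for this sketch, not the method proposed in the paper.

```python
import numpy as np

def admit(x, train_mean, train_std, z_max=4.0):
    """Illustrative admission-control check (not the paper's method):
    admit an input only if every feature lies within z_max standard
    deviations of the training distribution's per-feature mean."""
    z = np.abs((x - train_mean) / train_std)
    return bool(np.all(z <= z_max))

def predict_or_abstain(model, x, train_mean, train_std):
    """Fail-safe default: abstain (return None) unless the input is
    admitted; only admitted inputs reach the model."""
    if not admit(x, train_mean, train_std):
        return None
    return model(x)

# Usage sketch: estimate admission statistics from training data.
rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(1000, 3))
mu, sd = X_train.mean(axis=0), X_train.std(axis=0)
toy_model = lambda x: float(x.sum())  # stand-in for a trained model

in_dist = np.zeros(3)          # near the training distribution
out_dist = np.full(3, 10.0)    # far outside it
```

A density model or learned out-of-distribution detector would typically replace the z-score test in practice; the point of the sketch is only that rejection, not prediction, is the default path.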


research
01/12/2022

Security for Machine Learning-based Software Systems: a survey of threats, practices and challenges

The rapid development of Machine Learning (ML) has demonstrated superior...
research
01/08/2021

Towards a Robust and Trustworthy Machine Learning System Development

Machine Learning (ML) technologies have been widely adopted in many miss...
research
02/10/2020

Security & Privacy in IoT Using Machine Learning & Blockchain: Threats & Countermeasures

Security and privacy have become significant concerns due to the involve...
research
01/22/2021

On managing vulnerabilities in AI/ML systems

This paper explores how the current paradigm of vulnerability management...
research
07/20/2023

Deceptive Alignment Monitoring

As the capabilities of large machine learning models continue to grow, a...
research
12/15/2020

Confidential Machine Learning on Untrusted Platforms: A Survey

With ever-growing data and the need for developing powerful machine lear...
research
01/24/2020

When Wireless Security Meets Machine Learning: Motivation, Challenges, and Research Directions

Wireless systems are vulnerable to various attacks such as jamming and e...
