Defending against Machine Learning based Inference Attacks via Adversarial Examples: Opportunities and Challenges

09/17/2019
by Jinyuan Jia, et al.

As machine learning (ML) becomes more powerful and more easily accessible, attackers increasingly leverage it to perform automated, large-scale inference attacks in various domains. In such an ML-equipped inference attack, an attacker has access to some data (called public data) of an individual, a piece of software, or a system, and uses an ML classifier to automatically infer their private data. Inference attacks pose severe privacy and security threats to individuals and systems. They succeed because private data are statistically correlated with public data, and ML classifiers can capture such statistical correlations. In this chapter, we discuss the opportunities and challenges of defending against ML-equipped inference attacks via adversarial examples. Our key observation is that attackers rely on ML classifiers to perform inference attacks, and the adversarial machine learning community has demonstrated that ML classifiers have various vulnerabilities. We can therefore turn these vulnerabilities of ML into defenses against inference attacks. For example, ML classifiers are vulnerable to adversarial examples, which add carefully crafted noise to normal examples so that an ML classifier makes the predictions we desire. To defend against inference attacks, we can add carefully crafted noise to the public data to turn it into adversarial examples, such that attackers' classifiers make incorrect predictions for the private data. However, existing methods for constructing adversarial examples are insufficient because they do not consider the unique challenges and requirements that defending against inference attacks imposes on the crafted noise. In this chapter, we use defending against inference attacks in online social networks as an example to illustrate these opportunities and challenges.
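To make the mechanism above concrete, the following is a minimal, illustrative sketch rather than the chapter's actual method: we assume the defender holds a small surrogate of the attacker's classifier and applies an FGSM-style perturbation, bounded by a noise budget epsilon, to a user's public-data feature vector so that the surrogate mispredicts the private attribute. The surrogate model, feature dimension, and budget are all assumptions made for this example.

```python
# Illustrative sketch of the defense idea: perturb a user's public data so that
# a surrogate of the attacker's classifier mispredicts the private attribute.
# The surrogate model, feature dimension, and noise budget are hypothetical.
import torch
import torch.nn as nn

torch.manual_seed(0)

n_features, n_classes = 20, 2                 # assumed public-data dimension / private-attribute classes
surrogate = nn.Linear(n_features, n_classes)  # stand-in for the attacker's classifier

public_data = torch.rand(1, n_features)       # the user's observable (public) data
true_private = torch.tensor([1])              # private attribute the attacker tries to infer

epsilon = 0.05                                # noise budget: keep the added perturbation small

# FGSM-style step: move the public data in the direction that increases the
# surrogate's loss on the true private attribute, degrading its prediction.
x = public_data.clone().requires_grad_(True)
loss = nn.functional.cross_entropy(surrogate(x), true_private)
loss.backward()
adversarial_public = (public_data + epsilon * x.grad.sign()).clamp(0, 1)

print("surrogate prediction before:", surrogate(public_data).argmax(1).item())
print("surrogate prediction after: ", surrogate(adversarial_public).argmax(1).item())
```

Keeping epsilon small reflects the utility requirement mentioned above: the perturbed public data must remain useful, which is one of the unique constraints on the crafted noise that standard adversarial-example methods do not address.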


