A Dynamic-Adversarial Mining Approach to the Security of Machine Learning

03/24/2018
by   Tegjyot Singh Sethi, et al.

Operating in a dynamic real-world environment requires a forward-thinking and adversary-aware design for classifiers, beyond fitting the model to the training data. In such scenarios, classifiers should be: a) hard to evade, b) able to detect changes in the data distribution over time, and c) able to retrain and recover from model degradation. While most work on the security of machine learning has concentrated on the evasion-resistance problem (a), there is little work on reacting to attacks (b and c). Additionally, while streaming-data research concentrates on the ability to react to changes in the data distribution, it often takes an adversary-agnostic view of the security problem. This makes such systems vulnerable to adversarial activity aimed at evading the concept drift detection mechanism itself. In this paper, we analyze the security of machine learning from a dynamic and adversary-aware perspective. The existing techniques of restrictive one-class classifier models, complex learning models, and randomization-based ensembles are shown to be myopic, as they approach security as a static task. These methodologies are ill suited for a dynamic environment, as they leak excessive information to an adversary, who can subsequently launch attacks that are indistinguishable from the benign data. Based on empirical vulnerability analysis against a sophisticated adversary, a novel feature-importance-hiding approach to classifier design is proposed. The proposed design ensures that future attacks on classifiers can be detected and recovered from. This work serves as motivation and a blueprint for future work in the area of Dynamic-Adversarial mining, which combines lessons learned from streaming data mining, adversarial learning, and cybersecurity.
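To make requirement (b) above concrete, the sketch below shows one simple way a deployed classifier can monitor for degradation over a data stream: track rolling prediction accuracy against a warm-up baseline and signal drift when accuracy drops beyond a tolerance. This is a minimal, generic illustration in the spirit of accuracy-based drift monitors; the class name, window size, and threshold are illustrative assumptions, not the detection mechanism proposed in the paper.

```python
class DriftMonitor:
    """Minimal sketch of an accuracy-based concept drift monitor.

    A baseline accuracy is frozen after a warm-up window of trusted
    feedback; drift is signalled when rolling accuracy falls more than
    `threshold` below that baseline, indicating the model may need
    retraining. All parameter choices here are illustrative.
    """

    def __init__(self, window=100, threshold=0.15):
        self.window = window        # size of the sliding window of outcomes
        self.threshold = threshold  # tolerated accuracy drop vs. baseline
        self.baseline = None        # accuracy frozen after warm-up
        self.recent = []            # rolling record of 1 (correct) / 0 (wrong)

    def update(self, correct):
        """Record one labeled prediction outcome; return True if drift is signalled."""
        self.recent.append(1 if correct else 0)
        if len(self.recent) > self.window:
            self.recent.pop(0)
        acc = sum(self.recent) / len(self.recent)
        if self.baseline is None:
            if len(self.recent) == self.window:
                self.baseline = acc  # freeze baseline once warm-up completes
            return False
        return (self.baseline - acc) > self.threshold
```

In a streaming deployment, `update` would be called whenever delayed ground-truth labels arrive; a `True` return is the cue for requirement (c), retraining the model on recent data. Note that, as the paper argues, such an adversary-agnostic monitor can itself be evaded by an attacker who keeps accuracy degradation just below the threshold.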


