
Defending Regression Learners Against Poisoning Attacks

by Sandamal Weerasinghe et al.

Regression models, which are widely used in domains from engineering to financial forecasting, are vulnerable to targeted malicious attacks such as training data poisoning, through which adversaries can manipulate their predictions. Previous works that attempt to address this problem either rely on assumptions about the nature of the attack/attacker or overestimate the knowledge available to the learner, making them impractical. We introduce a novel Local Intrinsic Dimensionality (LID) based measure called N-LID, which measures the local deviation of a given data point's LID with respect to its neighbors. We then show that N-LID can distinguish poisoned samples from normal samples, and propose an N-LID based defense that makes no assumptions about the attacker. Through extensive numerical experiments with benchmark datasets, we show that the proposed defense mechanism outperforms state-of-the-art defenses in terms of prediction accuracy (up to 76% lower MSE compared to an undefended ridge model) and running time.
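The abstract does not reproduce the paper's exact N-LID formula, but the idea can be sketched with standard ingredients: estimate each point's LID with the common maximum-likelihood estimator over its k nearest-neighbor distances, then score each point by its LID relative to the mean LID of those neighbors. The function names and the exact ratio below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def lid_mle(dists, eps=1e-12):
    """Maximum-likelihood LID estimate from ascending k-NN distances."""
    dists = np.asarray(dists, dtype=float)
    # -1 / mean(log(r_i / r_k)); the last term log(r_k / r_k) is 0
    return -1.0 / np.mean(np.log((dists + eps) / (dists[-1] + eps)))

def nlid_scores(X, k=10):
    """Per-point LID, and N-LID as the ratio of a point's LID to the
    mean LID of its k nearest neighbors (one plausible formulation)."""
    X = np.asarray(X, dtype=float)
    # pairwise Euclidean distances; diagonal set to inf so a point
    # is never counted as its own neighbor
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1))
    np.fill_diagonal(D, np.inf)
    idx = np.argsort(D, axis=1)[:, :k]  # k nearest neighbors of each point
    lids = np.array([lid_mle(D[i, idx[i]]) for i in range(len(X))])
    nlid = lids / np.array([lids[idx[i]].mean() for i in range(len(X))])
    return lids, nlid
```

A point whose LID deviates sharply from its neighborhood (N-LID far from 1) is a candidate poisoned sample; such scores could, for example, be turned into sample weights for a weighted ridge regression so that suspicious points contribute less to the fit.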


Related papers:

Defending Distributed Classifiers Against Data Poisoning Attacks
BagFlip: A Certified Defense against Data Poisoning
Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning
Towards Fair Classification against Poisoning Attacks
Bi-Level Poisoning Attack Model and Countermeasure for Appliance Consumption Data of Smart Homes
Prediction Poisoning: Utility-Constrained Defenses Against Model Stealing Attacks
Data Poisoning Attacks on Regression Learning and Corresponding Defenses