A Game-Theoretic Approach to Design Secure and Resilient Distributed Support Vector Machines

02/07/2018
by Rui Zhang, et al.

Distributed Support Vector Machines (DSVM) have been developed to solve large-scale classification problems in networked systems with a large number of sensors and control units. However, such systems become more vulnerable to attacks, as detection and defense are increasingly difficult and expensive. This work aims to develop secure and resilient DSVM algorithms for adversarial environments in which an attacker can manipulate the training data to achieve their objective. We establish a game-theoretic framework to capture the conflicting interests between an adversary and a set of distributed data processing units. The Nash equilibrium of the game allows us to predict the outcome of learning algorithms in adversarial environments and to enhance the resilience of machine learning through dynamic distributed learning algorithms. We prove that the distributed algorithm converges without assumptions on the training data or the network topology. Numerical experiments corroborate the results. We show that network topology plays an important role in the security of DSVM: networks with fewer nodes and higher average degrees are more secure, and a balanced network is found to be less vulnerable to attacks.
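The learner-attacker interaction described above can be illustrated with a toy best-response simulation. This is a hypothetical minimal sketch, not the paper's actual DSVM algorithm: a single linear SVM learner (subgradient descent on the hinge loss) alternates with an attacker who perturbs each training point within a norm budget to increase the loss. All names (`ATTACK_BUDGET`, `learner_best_response`, `attacker_best_response`) and parameter values are illustrative assumptions; a fixed point of the alternation approximates a Nash equilibrium of this simplified game.

```python
import numpy as np

# Illustrative sketch only: a centralized hinge-loss learner vs. a
# budget-constrained data-perturbation attacker. The paper's actual
# method is distributed over a network; this collapses it to one node.

rng = np.random.default_rng(0)

# Toy two-class data with labels in {-1, +1}
X = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)

LAMBDA = 0.1         # regularization weight (assumed value)
ATTACK_BUDGET = 0.5  # max per-point perturbation norm (assumed value)

def hinge_loss(w, X, y):
    margins = 1 - y * (X @ w)
    return np.mean(np.maximum(0, margins)) + LAMBDA * w @ w

def learner_best_response(X, y, steps=200, lr=0.1):
    """Train w on (possibly attacked) data by subgradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        margins = 1 - y * (X @ w)
        active = margins > 0  # points inside the margin contribute
        grad = -(y[active, None] * X[active]).sum(0) / len(y) + 2 * LAMBDA * w
        w -= lr * grad
    return w

def attacker_best_response(w, X, y):
    """Push every point against its label's margin, within the budget."""
    direction = -y[:, None] * w  # direction that increases the hinge loss
    norm = np.linalg.norm(w) + 1e-12
    return X + ATTACK_BUDGET * direction / norm

# Alternate best responses; a fixed point approximates a Nash equilibrium.
Xa = X.copy()
for _ in range(10):
    w = learner_best_response(Xa, y)
    Xa = attacker_best_response(w, X, y)

clean_w = learner_best_response(X, y)
print("equilibrium loss (attacked):", hinge_loss(w, Xa, y))
print("loss on clean data:         ", hinge_loss(clean_w, X, y))
```

In this simplified setting the equilibrium loss under attack upper-bounds the clean-data loss, which mirrors the paper's motivation: predicting the learner's degraded but stable performance at equilibrium rather than assuming attack-free training.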


Related research:

10/12/2017 · Game-Theoretic Design of Secure and Resilient Distributed Support Vector Machines with Adversaries
With a large number of sensors and control units in networked systems, d...

03/08/2020 · Security of Distributed Machine Learning: A Game-Theoretic Approach to Design Secure DSVM
Distributed machine learning algorithms play a significant role in proce...

06/01/2022 · Support Vector Machines under Adversarial Label Contamination
Machine learning algorithms are increasingly being applied in security-r...

10/13/2014 · Fast Multilevel Support Vector Machines
Solving different types of optimization models (including parameters fit...

08/21/2020 · Defending Distributed Classifiers Against Data Poisoning Attacks
Support Vector Machines (SVMs) are vulnerable to targeted training data ...

06/14/2020 · Defending SVMs against Poisoning Attacks: the Hardness and DBSCAN Approach
Adversarial machine learning has attracted a great amount of attention i...

08/10/2017 · Resilient Linear Classification: An Approach to Deal with Attacks on Training Data
Data-driven techniques are used in cyber-physical systems (CPS) for cont...
