Adversarial machine learning for protecting against online manipulation

11/23/2021
by Stefano Cresci, et al.

Adversarial examples are inputs to a machine learning system crafted to produce an incorrect output from that system. Attacks launched through this type of input can have severe consequences: in the field of image recognition, for example, a stop sign can be misclassified as a speed limit sign. However, adversarial examples also represent the fuel for a flurry of research directions in different domains and applications. Here, we give an overview of how they can be profitably exploited as powerful tools to build stronger learning models, capable of better withstanding attacks, for two crucial tasks: fake news detection and social bot detection.
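To make the idea concrete, here is a minimal sketch of one classic way such inputs are crafted: the Fast Gradient Sign Method (FGSM), shown on a toy logistic-regression classifier. This is an illustrative assumption for exposition, not a method taken from the article; all names, weights, and the epsilon value are made up.

```python
import numpy as np

# Toy "classifier": logistic regression p(class 1 | x) = sigmoid(w.x + b).
# FGSM perturbs the input x by a small step eps in the direction that
# increases the model's loss, often enough to flip the prediction.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step FGSM attack on a logistic-regression model."""
    p = sigmoid(w @ x + b)            # model's predicted probability of class 1
    grad_x = (p - y) * w              # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)  # move each feature by eps in the loss-increasing direction

rng = np.random.default_rng(0)
w = rng.normal(size=16)               # fixed, illustrative model weights
b = 0.0
x = rng.normal(size=16)               # a clean input
y = 1.0 if sigmoid(w @ x + b) >= 0.5 else 0.0  # use the model's own label as "ground truth"

x_adv = fgsm(x, y, w, b, eps=0.5)
print("clean prob:", sigmoid(w @ x + b))
print("adversarial prob:", sigmoid(w @ x_adv + b))
```

Because the loss is monotone in the logit and the logit is linear in the input, the perturbed input is guaranteed to push the predicted probability away from the true label; with a large enough eps, the classification flips, which is the linear-model analogue of the stop-sign example above.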

