SpacePhish: The Evasion-space of Adversarial Attacks against Phishing Website Detectors using Machine Learning

10/24/2022
by Giovanni Apruzzese, et al.

Existing literature on adversarial Machine Learning (ML) focuses either on showing attacks that break every ML model, or on defenses that withstand most attacks. Unfortunately, little consideration is given to the actual cost of the attack or of the defense. Moreover, adversarial samples are often crafted in the "feature-space", making the corresponding evaluations of questionable value. Simply put, the current situation does not allow one to estimate the actual threat posed by adversarial attacks, leading to a lack of secure ML systems. We aim to clarify such confusion in this paper. By considering the application of ML for Phishing Website Detection (PWD), we formalize the "evasion-space" in which an adversarial perturbation can be introduced to fool an ML-PWD – demonstrating that even perturbations in the "feature-space" are useful. Then, we propose a realistic threat model describing evasion attacks against ML-PWD that are cheap to stage, and hence intrinsically more attractive for real phishers. Finally, we perform the first statistically validated assessment of state-of-the-art ML-PWD against 12 evasion attacks. Our evaluation shows (i) the true efficacy of evasion attempts that are more likely to occur; and (ii) the impact of perturbations crafted in different evasion-spaces. Our realistic evasion attempts induce a statistically significant degradation (3-10% at p<0.05), and they are cheap to create, potentially representing a subtle threat. Notably, however, some ML-PWD are immune to our most realistic attacks (p=0.22). Our contribution paves the way for a much-needed re-assessment of adversarial attacks against ML systems for cybersecurity.
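To give a flavor of what a "feature-space" evasion means, here is a minimal toy sketch. The feature names, weights, and threshold below are illustrative assumptions, not the detector or features used in the paper: the attacker flips a single feature of an already-flagged sample and slips under the decision threshold.

```python
# Toy sketch of a feature-space evasion against a hypothetical ML-PWD.
# Features, weights, and threshold are invented for illustration only.

# Hypothetical binary features extracted from a website (1 = suspicious).
FEATURES = ["url_has_ip", "url_length_gt_75", "no_https", "many_subdomains"]

# A toy linear detector: flag as phishing when the weighted score >= 0.5.
WEIGHTS = {"url_has_ip": 0.2, "url_length_gt_75": 0.15,
           "no_https": 0.3, "many_subdomains": 0.15}
THRESHOLD = 0.5

def is_phishing(x):
    """Return True if the feature vector x scores above the threshold."""
    score = sum(WEIGHTS[f] * x[f] for f in FEATURES)
    return score >= THRESHOLD

# A phishing sample that the toy detector correctly flags (score = 0.65).
sample = {"url_has_ip": 1, "url_length_gt_75": 1,
          "no_https": 1, "many_subdomains": 0}
print(is_phishing(sample))   # → True

# Cheap perturbation: flip one feature (e.g., the page now serves HTTPS).
# The score drops to 0.35 and the sample evades detection.
evaded = dict(sample, no_https=0)
print(is_phishing(evaded))   # → False
```

The point the paper makes is that even such directly feature-space perturbations, which sidestep the question of how the change is realized on the actual website, can meaningfully degrade a detector, while perturbations that are cheap to realize in practice are the more attractive ones for real phishers.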


research
12/06/2021

ML Attack Models: Adversarial Attacks and Data Poisoning Attacks

Many state-of-the-art ML models have outperformed humans in various task...
research
06/03/2019

The Adversarial Machine Learning Conundrum: Can The Insecurity of ML Become The Achilles' Heel of Cognitive Networks?

The holy grail of networking is to create cognitive networks that organi...
research
03/03/2023

Adversarial Attacks on Machine Learning in Embedded and IoT Platforms

Machine learning (ML) algorithms are increasingly being integrated into ...
research
07/04/2022

Wild Networks: Exposure of 5G Network Infrastructures to Adversarial Examples

Fifth Generation (5G) networks must support billions of heterogeneous de...
research
12/10/2020

Composite Adversarial Attacks

Adversarial attack is a technique for deceiving Machine Learning (ML) mo...
research
06/30/2021

Explanation-Guided Diagnosis of Machine Learning Evasion Attacks

Machine Learning (ML) models are susceptible to evasion attacks. Evasion...
research
09/04/2022

PhishClone: Measuring the Efficacy of Cloning Evasion Attacks

Web-based phishing accounts for over 90 web-browsers and security vendor...
