An Evasion Attack against ML-based Phishing URL Detectors

05/18/2020
by   Bushra Sabir, et al.

Background: Over the years, Machine Learning Phishing URL classification (MLPU) systems have gained tremendous popularity as a proactive defence against phishing URLs. Despite this vogue, the security vulnerabilities of MLPU systems remain mostly unknown.

Aim: To address this concern, we conduct a study of the test-time security vulnerabilities of state-of-the-art MLPU systems, aiming to provide guidelines for the future development of these systems.

Method: In this paper, we propose an evasion attack framework against MLPU systems. To achieve this, we first develop an algorithm to generate adversarial phishing URLs. We then reproduce 41 MLPU systems and record their baseline performance. Finally, we simulate an evasion attack to evaluate these MLPU systems against our generated adversarial URLs.

Results: In comparison to previous works, our attack is: (i) effective, as it evades all the models with an average success rate of 66% against less popular phishing targets (e.g., Wish, JBHIFI, Officeworks); (ii) realistic, as it requires only 23 ms to produce a new adversarial URL variant that is available for registration at a median cost of only $11.99/year. We also found that popular online services such as Google Safe Browsing and VirusTotal are unable to detect these URLs. (iii) Adversarial training (a successful defence against evasion attacks) does not significantly improve the robustness of these systems, as it decreases the success rate of our attack by only 6%. Further, we identify the security vulnerabilities of the considered MLPU systems. Our findings lead to promising directions for future research.

Conclusion: Our study not only illustrates vulnerabilities in MLPU systems but also highlights implications for future work on assessing and improving these systems.
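The abstract does not spell out the authors' URL-generation algorithm, but the idea of producing registrable adversarial variants of a target brand's domain can be sketched with simple character-level perturbations of the kind phishing kits commonly use (homoglyph substitution, character duplication, adjacent-character swaps). The following is a hypothetical illustration, not the paper's actual method; the function names and the `HOMOGLYPHS` table are assumptions for this sketch.

```python
import random

# Illustrative only: a toy generator of adversarial URL variants for a
# target brand name, using common typosquatting-style perturbations.
HOMOGLYPHS = {"o": "0", "l": "1", "i": "1", "e": "3", "a": "4", "s": "5"}

def perturb_domain(domain: str, rng: random.Random) -> str:
    """Apply one random character-level perturbation to a domain label."""
    chars = list(domain)
    idx = rng.randrange(len(chars))
    op = rng.choice(["swap", "dup", "glyph"])
    if op == "glyph" and chars[idx] in HOMOGLYPHS:
        chars[idx] = HOMOGLYPHS[chars[idx]]          # homoglyph substitution
    elif op == "dup":
        chars.insert(idx, chars[idx])                # character duplication
    elif op == "swap" and idx + 1 < len(chars):
        chars[idx], chars[idx + 1] = chars[idx + 1], chars[idx]  # swap
    return "".join(chars)

def adversarial_urls(target: str, n: int, seed: int = 0) -> list:
    """Return n distinct candidate adversarial URLs for a target brand."""
    rng = random.Random(seed)
    variants = set()
    while len(variants) < n:
        candidate = perturb_domain(target, rng)
        if candidate != target:                      # keep only real changes
            variants.add(candidate)
    return ["http://%s.com/login" % v for v in sorted(variants)]
```

In a real attack pipeline, each candidate would additionally be checked against the victim classifier (does it evade?) and a WHOIS/registrar lookup (is it available to register?), which is what makes the attack in the paper "realistic" rather than purely synthetic.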


