Adversarial Attack Type I: Generating False Positives

09/03/2018
by Sanli Tang, et al.

False positive and false negative rates are equally important for evaluating the performance of a classifier. Adversarial examples that raise a classifier's false negative rate have been studied extensively in recent years. In contrast, attacks that harm a classifier by raising its false positive rate remain largely unexplored, because generating a new and meaningful positive example is much harder than turning a positive into a negative. To generate such false positives, this paper proposes a supervised generative framework. Experimental results show that our method is practical and effective at generating these adversarial examples on large-scale image datasets.
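To give a rough sense of what a false-positive (Type I) adversarial example looks like, the sketch below optimizes an input starting from random noise until a pretrained classifier reports high confidence for a target class, even though the image contains no recognizable object. This is a generic fooling-image construction for illustration only, not the paper's supervised generative framework; the model choice, target label, learning rate, and step count are all illustrative assumptions.

# Sketch of a false-positive attack: optimize noise until the classifier
# confidently reports a "positive" for a class that is not actually present.
# (Illustrative only; not the paper's supervised generative framework.)
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(pretrained=True).eval()      # any pretrained classifier
for p in model.parameters():
    p.requires_grad_(False)                   # attack the input, not the weights

target_class = 207                            # hypothetical ImageNet label
x = torch.rand(1, 3, 224, 224, requires_grad=True)   # start from pure noise
opt = torch.optim.Adam([x], lr=0.05)

for step in range(300):
    opt.zero_grad()
    logits = model(x)
    # Minimizing cross-entropy against the target label pushes the model
    # toward a confident prediction on an input with no real object in it.
    loss = F.cross_entropy(logits, torch.tensor([target_class]))
    loss.backward()
    opt.step()
    x.data.clamp_(0.0, 1.0)                   # keep a valid image range

conf = F.softmax(model(x), dim=1)[0, target_class].item()
print(f"classifier confidence for class {target_class}: {conf:.3f}")

Unlike small-perturbation attacks that flip a true positive into a miss, an input built this way need not lie close to any natural image, which is part of what makes false-positive attacks harder both to generate meaningfully and to defend against.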
