Open Set Adversarial Examples

09/07/2018
by Zhedong Zheng, et al.

Adversarial examples in recent works target closed set recognition systems, in which the training and testing classes are identical. In real-world scenarios, however, the testing classes may have limited, if any, overlap with the training classes, a problem known as open set recognition. To our knowledge, the community lacks a specific design of adversarial examples targeting this practical setting. Arguably, the new setting compromises traditional closed set attack methods in two aspects. First, closed set attack methods are based on classification and target classification as well, but the open set problem calls for a different task, i.e., retrieval; it is undesirable that the mechanism generating the attack differs from the aim of open set recognition. Second, given that the query image is usually of an unseen class, predicting its category from the training classes is not reasonable and yields an inferior adversarial gradient. In this work, we view open set recognition as a retrieval task and propose a new approach, Opposite-Direction Feature Attack (ODFA), to generate adversarial examples / queries. When an attacked example is used as the query, we aim for the true matches to be ranked as low as possible. In addressing the two limitations of closed set attack methods, ODFA works directly on the features used for retrieval: it pushes the feature of the adversarial query in the direction opposite to the original feature. Albeit simple, ODFA leads to a larger drop in Recall@K and mAP than closed set attack methods on two open set recognition datasets, i.e., Market-1501 and CUB-200-2011. We also show that the attack performance of ODFA is not evidently superior to state-of-the-art methods under closed set recognition (CIFAR-10), suggesting its specificity to open set problems.
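To make the opposite-direction idea concrete, below is a minimal sketch of an iterative feature-space attack in the spirit described above. It is not the authors' released implementation; the extractor model, the hyper-parameters (epsilon, alpha, steps), and the use of an MSE loss with sign-gradient updates under an L-infinity budget are illustrative assumptions.

    # Sketch of an opposite-direction feature attack (assumptions noted above).
    # `model` is assumed to be any PyTorch module mapping an image batch to the
    # embedding used for retrieval ranking.
    import torch
    import torch.nn.functional as F

    def odfa_sketch(model, image, epsilon=8/255, alpha=1/255, steps=10):
        """Perturb `image` so its retrieval feature points opposite to the original one."""
        model.eval()
        with torch.no_grad():
            f_orig = F.normalize(model(image), dim=1)   # original (unit-norm) feature
        target = -f_orig                                # the opposite direction

        adv = image.clone().detach()
        for _ in range(steps):
            adv.requires_grad_(True)
            f_adv = F.normalize(model(adv), dim=1)
            # Pull the adversarial feature toward the negated original feature.
            loss = F.mse_loss(f_adv, target)
            grad = torch.autograd.grad(loss, adv)[0]
            with torch.no_grad():
                adv = adv - alpha * grad.sign()                       # descend on the loss
                adv = image + (adv - image).clamp(-epsilon, epsilon)  # stay within the L_inf budget
                adv = adv.clamp(0, 1)
        return adv.detach()

Using the returned tensor as the query, true matches should fall in the ranking, since their features remain close to the original direction while the query feature has been driven toward its negation.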
