Adversarial Attacks and Defences Competition

03/31/2018
by Alexey Kurakin, et al.

To accelerate research on adversarial examples and robustness of machine learning classifiers, Google Brain organized a NIPS 2017 competition that encouraged researchers to develop new methods to generate adversarial examples as well as to develop new ways to defend against them. In this chapter, we describe the structure and organization of the competition and the solutions developed by several of the top-placing teams.
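For readers unfamiliar with how adversarial examples are generated, the sketch below shows the Fast Gradient Sign Method (FGSM), a standard baseline attack that perturbs an input in the direction of the sign of the loss gradient under an L-infinity budget. It is offered only as background illustration; it is not claimed to be the method used by any particular competition entry, and the PyTorch framing and the function name fgsm_attack are assumptions made for this example.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """Illustrative FGSM sketch: one gradient-sign step of size epsilon.

    model: a classifier returning logits
    x: input batch in [0, 1], y: true labels, epsilon: L-infinity budget
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Keep the perturbed input in the valid pixel range.
    return x_adv.clamp(0.0, 1.0).detach()
```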
