research ∙ 07/05/2021
When and How to Fool Explainable Models (and Humans) with Adversarial Examples
Reliable deployment of machine learning models such as neural networks c...
research ∙ 12/28/2020
Analysis of Dominant Classes in Universal Adversarial Perturbations
The reasons why Deep Neural Networks are susceptible to being fooled by ...
research ∙ 04/14/2020
Extending Adversarial Attacks to Produce Adversarial Class Probability Distributions
Despite the remarkable performance and generalization levels of deep lea...
research ∙ 01/23/2020
On the human evaluation of audio adversarial examples
Human-machine interaction is increasingly dependent on speech communicat...
research ∙ 11/22/2019