
- Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness. Driven by massive amounts of data and important advances in computationa...
- Neural Anisotropy Directions. In this work, we analyze the role of the network architecture in shaping...
- GeoDA: a geometric framework for black-box adversarial attacks. Adversarial examples are known as carefully perturbed images fooling ima...
- Hold me tight! Influence of discriminative features on deep network boundaries. Important insights towards the explainability of neural networks and the...
- A geometry-inspired decision-based attack. Deep neural networks have recently achieved tremendous success in image ...
- Robustness via curvature regularization, and vice versa. State-of-the-art classifiers have been shown to be largely vulnerable to...
- SparseFool: a few pixels make a big difference. Deep Neural Networks have achieved extraordinary results on image classi...
- Divide, Denoise, and Defend against Adversarial Attacks. Deep neural networks, although shown to be a successful class of machine...
- Adaptive Quantization for Deep Neural Network. In recent years Deep Neural Networks (DNNs) have been rapidly developed ...
- Geometric robustness of deep networks: analysis and improvement. Deep convolutional neural networks have been shown to be vulnerable to a...
- Analysis of universal adversarial perturbations. Deep networks have recently been shown to be vulnerable to universal per...
- Classification regions of deep neural networks. The goal of this paper is to analyze the geometric properties of deep ne...
- Universal adversarial perturbations. Given a state-of-the-art deep neural network classifier, we show the exi...
- Robustness of classifiers: from adversarial to random noise. Several recent works have shown that state-of-the-art classifiers are vu...