
Mind the box: l_1-APGD for sparse adversarial attacks on image classifiers
We show that when taking into account also the image domain [0,1]^d, est...

Out-distribution aware Self-training in an Open World Setting
Deep Learning heavily depends on large labeled datasets, which limits fur...

RobustBench: a standardized adversarial robustness benchmark
Evaluation of adversarial robustness is often error-prone leading to ove...

Learnable Uncertainty under Laplace Approximations
Laplace approximations are classic, computationally lightweight means fo...

Fixing Asymptotic Uncertainty of Bayesian Neural Networks with Infinite ReLU Features
Approximate Bayesian methods can mitigate overconfidence in ReLU network...

Provable Worst Case Guarantees for the Detection of Out-of-Distribution Data
Deep neural networks are known to be overconfident when applied to out-o...

Bit Error Robustness for Energy-Efficient DNN Accelerators
Deep neural network (DNN) accelerators received considerable attention i...

Sparse-RS: a versatile framework for query-efficient sparse black-box adversarial attacks
A large body of research has focused on adversarial attacks which requir...

Adversarial Robustness on In- and Out-Distribution Improves Explainability
Neural networks have led to major improvements in image classification b...

Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks
The field of defense strategies against adversarial attacks has signific...

Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks
The point estimates of ReLU classification networks—arguably the most wi...

Computing the norm of nonnegative matrices and the log-Sobolev constant of Markov chains
We analyze the global convergence of the power iterates for the computat...
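The global convergence result above concerns power iterates for the matrix norm; a minimal sketch of that iteration (the function name, iteration budget, and tolerance below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def spectral_norm_power_iteration(A, iters=1000, tol=1e-12):
    """Estimate ||A||_2 by power iteration on A^T A.

    For an entrywise nonnegative matrix A and a strictly positive
    starting vector, the normalized iterates converge to the leading
    singular vector (a Perron-Frobenius-type situation), so the
    returned value estimates the spectral norm.
    """
    x = np.full(A.shape[1], 1.0)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = A.T @ (A @ x)          # one power-method step on A^T A
        y /= np.linalg.norm(y)
        if np.linalg.norm(y - x) < tol:
            x = y
            break
        x = y
    return float(np.linalg.norm(A @ x))

A = np.abs(np.random.default_rng(0).normal(size=(5, 4)))
est = spectral_norm_power_iteration(A)
```

For a nonnegative random matrix like this, `est` agrees with `np.linalg.norm(A, 2)` up to the tolerance.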

Square Attack: a query-efficient black-box adversarial attack via random search
We propose the Square Attack, a new score-based black-box l_2 and l_∞ ad...
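The random-search idea can be sketched for a single grayscale image in the l_∞ setting; the score function, fixed square size, and iteration budget below are illustrative assumptions, not the paper's actual sampling schedule:

```python
import numpy as np

def square_attack_linf(score_fn, x, eps, n_iters=300, p=0.1, seed=0):
    """Greedy random search with square-shaped l_inf perturbations (toy sketch).

    Propose a random square set to x +/- eps and keep the proposal only
    if it decreases score_fn (e.g. the margin of the true class).
    """
    rng = np.random.default_rng(seed)
    h, w = x.shape
    # start from a random +/- eps perturbation, clipped to the image box [0,1]
    x_adv = np.clip(x + rng.choice([-eps, eps], size=x.shape), 0.0, 1.0)
    best = score_fn(x_adv)
    s = max(1, int(round(np.sqrt(p) * min(h, w))))  # side length of the square
    for _ in range(n_iters):
        r = rng.integers(0, h - s + 1)
        c = rng.integers(0, w - s + 1)
        cand = x_adv.copy()
        cand[r:r + s, c:c + s] = np.clip(
            x[r:r + s, c:c + s] + rng.choice([-eps, eps]), 0.0, 1.0)
        val = score_fn(cand)
        if val < best:  # accept only improving proposals
            best, x_adv = val, cand
    return x_adv
```

Each query changes only one square, and every proposal is built from the clean image, so the iterate always stays inside the intersection of the l_∞ ball and the box [0,1]^d.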

Generalized Matrix Means for Semi-Supervised Learning with Multilayer Graphs
We study the task of semi-supervised learning on multilayer graphs by ta...

Confidence-Calibrated Adversarial Training: Towards Robust Models Generalizing Beyond the Attack Used During Training
Adversarial training is the standard to train models robust against adve...

Towards neural networks that provably know when they don't know
It has recently been shown that ReLU networks produce arbitrarily overc...

Sparse and Imperceivable Adversarial Attacks
Neural networks have been proven to be vulnerable to a variety of advers...

Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack
The evaluation of robustness against adversarial manipulation of neural ...

Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks
The problem of adversarial samples has been studied extensively for neur...

Provable robustness against all adversarial l_p-perturbations for p ≥ 1
In recent years several adversarial attacks and defenses have been propo...

Spectral Clustering of Signed Graphs via Matrix Power Means
Signed graphs encode positive (attractive) and negative (repulsive) rela...

Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks
Modern neural networks are highly non-robust against adversarial manipul...

Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem
Classifiers used in the wild, in particular for safety-critical systems,...

Disentangling Adversarial Robustness and Generalization
Obtaining deep networks that are robust against adversarial examples and...

A randomized gradient-free attack on ReLU networks
It has recently been shown that neural networks but also other classifie...

Logit Pairing Methods Can Fool Gradient-Based Attacks
Recently, several logit regularization methods have been proposed in [Ka...

Provable Robustness of ReLU networks via Maximization of Linear Regions
It has been shown that neural network classifiers are not robust. This r...

On the loss landscape of a class of deep neural networks with no bad local valleys
We identify a class of over-parameterized deep neural networks with stan...

The Power Mean Laplacian for Multilayer Graph Clustering
Multilayer graphs encode different kinds of interactions between the same...
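The matrix power mean of the layer Laplacians can be sketched via eigendecompositions; the function names and the optional diagonal shift below are illustrative assumptions:

```python
import numpy as np

def sym_matrix_power(A, p):
    # Fractional power of a symmetric PSD matrix via its eigendecomposition.
    w, V = np.linalg.eigh(A)
    return (V * np.clip(w, 0.0, None) ** p) @ V.T

def power_mean_laplacian(laplacians, p=1.0, shift=0.0):
    """Matrix power mean M_p = (mean_i (L_i + shift*I)^p)^(1/p).

    p = 1 gives the arithmetic mean of the layer Laplacians; a small
    positive shift keeps negative powers well defined. Spectral
    clustering then uses the eigenvectors of M_p belonging to the
    smallest eigenvalues.
    """
    n = laplacians[0].shape[0]
    S = sum(sym_matrix_power(L + shift * np.eye(n), p)
            for L in laplacians) / len(laplacians)
    return sym_matrix_power(S, 1.0 / p)
```

For commuting layers (e.g. diagonal toy Laplacians) the power mean acts entrywise on the eigenvalues, which makes the definition easy to sanity-check.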

Neural Networks Should Be Wide Enough to Learn Disconnected Decision Regions
In the recent literature the important role of depth in deep learning ha...

Error estimates for spectral convergence of the graph Laplacian on random geometric graphs towards the Laplace-Beltrami operator
We study the convergence of the graph Laplacian of a random geometric gr...

A unifying Perron-Frobenius theorem for nonnegative tensors via multi-homogeneous maps
Inspired by the definition of symmetric decomposition, we introduce the ...

The loss surface and expressivity of deep convolutional neural networks
We analyze the expressiveness and loss surface of practical deep convolu...

Community detection in networks via nonlinear modularity eigenvectors
Revealing a community structure in a network or dataset is a central pro...

Variants of RMSProp and Adagrad with Logarithmic Regret Bounds
Adaptive gradient methods have recently become very popular, in particul...
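For reference, a plain RMSProp update has the following shape (this is the standard update, not the specific variants with logarithmic regret bounds studied in the paper; the hyperparameters are illustrative):

```python
import numpy as np

def rmsprop_step(w, g, v, lr=0.05, beta=0.9, eps=1e-8):
    # Divide the gradient by a running root-mean-square of past gradients.
    v = beta * v + (1 - beta) * g ** 2
    w = w - lr * g / (np.sqrt(v) + eps)
    return w, v

# toy run: minimize f(w) = w^2, whose gradient is g = 2w
w, v = np.array([1.0]), np.zeros(1)
for _ in range(2000):
    w, v = rmsprop_step(w, 2 * w, v)
```

On this quadratic the normalized step size stays close to `lr`, so the iterate settles into a band of width roughly `lr` around the minimizer; controlling this kind of behavior is what the regret analysis is about.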

Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation
Recent work has shown that state-of-the-art classifiers are quite brittl...

The loss surface of deep and wide neural networks
While the optimization problem behind deep neural networks is highly non...

Clustering Signed Networks with the Geometric Mean of Laplacians
Signed networks allow one to model positive and negative relationships. We a...

Analysis and Optimization of Loss Functions for Multiclass, Top-k, and Multilabel Classification
Top-k error is currently a popular performance measure on large-scale im...
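The top-k error itself is simple to state: a prediction counts as correct if the true label is among the k largest scores. A small sketch (function and variable names are illustrative):

```python
import numpy as np

def topk_error(scores, labels, k):
    # scores: (n_samples, n_classes) array; labels: (n_samples,) integer labels.
    topk = np.argsort(-scores, axis=1)[:, :k]     # indices of the k largest scores
    hit = (topk == labels[:, None]).any(axis=1)   # true label among the top k?
    return 1.0 - hit.mean()

scores = np.array([[0.1, 0.5, 0.4],
                   [0.9, 0.06, 0.04]])
labels = np.array([1, 2])
# top-1: only the first sample is correct; top-3: every sample is correct
```

By construction the top-k error is monotonically non-increasing in k, which is why it is a natural relaxation of the usual classification error under class ambiguity.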

Globally Optimal Training of Generalized Polynomial Neural Networks with Nonlinear Spectral Methods
The optimization problem behind neural networks is highly non-convex. Tr...

Latent Embeddings for Zero-shot Classification
We present a novel latent embedding model for learning a compatibility f...

Simple Does It: Weakly Supervised Instance and Semantic Segmentation
Semantic labelling and instance segmentation are two tasks that require ...

Loss Functions for Top-k Error: Analysis and Insights
In order to push the performance on realistic computer vision tasks, the...

Top-k Multiclass SVM
Class ambiguity is typical in image classification problems with a large...

Efficient Output Kernel Learning for Multiple Tasks
The paradigm of multi-task learning is that one can achieve better gener...

An Efficient Multilinear Optimization Framework for Hypergraph Matching
Hypergraph matching has recently become a popular approach for solving c...

Robust PCA: Optimization of the Robust Reconstruction Error over the Stiefel Manifold
It is well known that Principal Component Analysis (PCA) is strongly aff...

Constrained 1-Spectral Clustering
An important form of prior information in clustering comes in the form of ca...

Tight Continuous Relaxation of the Balanced k-Cut Problem
Spectral Clustering as a relaxation of the normalized/ratio cut has beco...

A Flexible Tensor Block Coordinate Ascent Scheme for Hypergraph Matching
The estimation of correspondences between two images or point sets is...
Matthias Hein
Professor of Mathematics and Computer Science, Faculty of Mathematics and Computer Science, Saarland University