
Defensive Tensorization

by Adrian Bulat, et al.

We propose defensive tensorization, an adversarial defence technique that leverages a latent high-order factorization of the network. The layers of a network are first expressed as factorized tensor layers. Tensor dropout is then applied in the latent subspace, resulting in dense reconstructed weights, without the sparsity or perturbations typically induced by randomization. Our approach can be readily integrated with any neural architecture and combined with techniques like adversarial training. We empirically demonstrate the effectiveness of our approach on standard image classification benchmarks. We validate the versatility of our approach across domains and low-precision architectures by considering an audio classification task and binary networks. In all cases, we demonstrate improved performance compared to prior works.
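The core idea above can be illustrated with a minimal sketch: factorize a weight matrix into low-rank components, apply dropout to the latent rank dimension rather than to individual weights, and reconstruct a dense weight. This is a simplified illustration using a rank-R matrix (CP-style) factorization in NumPy; the function and parameter names are hypothetical and the paper's actual formulation (e.g. which tensor decomposition is used and how the layers are parametrized) may differ.

```python
import numpy as np

def tensor_dropout_reconstruct(factors, p, rng):
    """Apply dropout in the latent (rank) subspace of a factorized
    weight matrix and return the dense reconstructed weight.

    factors: (A, B) with A of shape (m, R) and B of shape (n, R),
             so the full weight is W = A @ B.T (rank-R factorization).
    p:       probability of dropping each rank-1 component.
    rng:     a numpy random Generator.
    """
    A, B = factors
    R = A.shape[1]
    # Bernoulli mask over the R latent components, rescaled
    # (inverted dropout) so the expectation matches the full weight.
    mask = rng.binomial(1, 1.0 - p, size=R) / (1.0 - p)
    # Dropping latent components still yields a dense weight matrix:
    # no individual entry of W is zeroed out or perturbed directly.
    return (A * mask) @ B.T

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 8))
B = rng.standard_normal((5, 8))
W = tensor_dropout_reconstruct((A, B), p=0.5, rng=rng)
print(W.shape)  # (4, 5)
```

Note that with `p=0.0` the function reproduces the full weight `A @ B.T` exactly, and for `p > 0` the randomness lives entirely in the latent rank dimension, which is what distinguishes this scheme from applying dropout to the weights themselves.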

