SecDD: Efficient and Secure Method for Remotely Training Neural Networks

09/19/2020
by Ilia Sucholutsky, et al.

We leverage what are typically considered the worst qualities of deep learning algorithms - high computational cost, the need for large datasets, lack of explainability, strong dependence on hyper-parameter choices, overfitting, and vulnerability to adversarial perturbations - to create a method for securely and efficiently training remotely deployed neural networks over unsecured channels.
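The paper's actual construction is not reproduced in this abstract, but its premise can be sketched: the same hyper-parameter sensitivity that usually plagues deep learning can act like a shared secret, so that transmitted training data is only useful to a receiver who holds the agreed-upon hyper-parameter "key". The toy below (all names, values, and the logistic-regression stand-in are illustrative assumptions, not the SecDD algorithm) trains on a tiny synthetic dataset and shows that training succeeds with the agreed learning rate but fails with a wrong one.

```python
import numpy as np

# Illustrative sketch only, NOT the SecDD algorithm: a tiny "distilled"
# dataset whose usefulness depends on knowing the right hyper-parameter.

# Two labeled prototypes per class (hypothetical distilled data).
X = np.array([[ 2.0,  2.0], [ 1.5,  2.5],
              [-2.0, -2.0], [-2.5, -1.5]])
y = np.array([1.0, 1.0, 0.0, 0.0])

def train(lr, steps=200):
    """Plain gradient descent on logistic loss; lr plays the role of the 'key'."""
    w = np.zeros(2)
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
        grad_w = X.T @ (p - y) / len(y)
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    eps = 1e-12  # numerical guard for log
    return float(-np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps)))

good = train(lr=0.5)    # the agreed-upon hyper-parameter: training converges
bad = train(lr=1e-6)    # a wrong key: steps are far too small to learn anything
print(good, bad)
```

With the correct learning rate the final loss is near zero, while the wrong one leaves the model close to its chance-level loss of ln 2; in SecDD's setting, the analogous effect is what keeps intercepted data useless to an eavesdropper.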



Related Research

09/02/2021 - Building Compact and Robust Deep Neural Networks with Toeplitz Matrices
  Deep neural networks are state-of-the-art in a wide variety of tasks, ho...

12/01/2019 - A Method for Computing Class-wise Universal Adversarial Perturbations
  We present an algorithm for computing class-specific universal adversari...

03/14/2022 - Adversarial amplitude swap towards robust image classifiers
  The vulnerability of convolutional neural networks (CNNs) to image pertu...

02/22/2020 - Polarizing Front Ends for Robust CNNs
  The vulnerability of deep neural networks to small, adversarially design...

02/12/2018 - Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks
  High sensitivity of neural networks against malicious perturbations on i...

10/05/2014 - On the Computational Efficiency of Training Neural Networks
  It is well-known that neural networks are computationally hard to train....

03/12/2020 - Hyper-Parameter Optimization: A Review of Algorithms and Applications
  Since deep neural networks were developed, they have made huge contribut...