SecDD: Efficient and Secure Method for Remotely Training Neural Networks

09/19/2020
by Ilia Sucholutsky, et al.

We leverage what are typically considered the worst qualities of deep learning algorithms - high computational cost, the need for large amounts of training data, lack of explainability, strong dependence on hyper-parameter choices, overfitting, and vulnerability to adversarial perturbations - to create a method for the secure and efficient training of remotely deployed neural networks over unsecured channels.
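The abstract does not spell out the mechanism, but the "DD" in SecDD and the authors' related work point to dataset distillation: compressing a training set into a handful of synthetic examples that effectively train only a network with a particular, secretly shared initialization. The sketch below illustrates single-step dataset distillation with learnable soft labels (in the spirit of Wang et al., 2018), under assumptions of our own: the toy data, model size, and all hyper-parameters are illustrative, not the paper's code. Soft-label targets in F.cross_entropy require PyTorch 1.10+.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy "real" data standing in for the private training set (illustrative).
real_x = torch.randn(256, 20)
real_y = (real_x.sum(dim=1) > 0).long()

# The shared secret: a fixed model initialization known to both parties.
w0 = (torch.randn(2, 20) * 0.1).requires_grad_(True)
b0 = torch.zeros(2, requires_grad=True)

# Learnable distilled dataset: a few synthetic inputs with soft labels.
distilled_x = torch.randn(10, 20, requires_grad=True)
distilled_y = torch.randn(10, 2, requires_grad=True)
lr_inner = 0.1  # inner-loop learning rate (illustrative choice)

outer_opt = torch.optim.Adam([distilled_x, distilled_y], lr=0.01)

for step in range(500):
    # Inner step: one differentiable gradient update on the distilled data,
    # starting from the fixed secret initialization (w0, b0).
    inner_loss = F.cross_entropy(
        F.linear(distilled_x, w0, b0), distilled_y.softmax(dim=1)
    )
    gw, gb = torch.autograd.grad(inner_loss, (w0, b0), create_graph=True)
    w1, b1 = w0 - lr_inner * gw, b0 - lr_inner * gb

    # Outer step: after that single update, the model should fit the real
    # data; optimize the distilled examples (not the model) toward that goal.
    outer_loss = F.cross_entropy(F.linear(real_x, w1, b1), real_y)
    outer_opt.zero_grad()
    outer_loss.backward()
    outer_opt.step()
```

In this reading, only (distilled_x, distilled_y) ever cross the unsecured channel: the distilled set is tiny (efficiency), and because it was optimized for one specific initialization, an eavesdropper without (w0, b0) gains little from intercepting it (security).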

