Scaling shared model governance via model splitting

12/14/2018
by Miljan Martic, et al.

Currently, the only techniques for sharing governance of a deep learning model are homomorphic encryption and secure multiparty computation. Unfortunately, neither of these techniques is applicable to the training of large neural networks due to their large computational and communication overheads. As a scalable technique for shared model governance, we propose splitting a deep learning model between multiple parties. This paper empirically investigates the security guarantee of this technique, which is introduced as the problem of model completion: given the entire training data set or an environment simulator, and a subset of the parameters of a trained deep learning model, how much training is required to recover the model's original performance? We define a metric for evaluating the hardness of the model completion problem and study it empirically in both supervised learning on ImageNet and reinforcement learning on Atari and DeepMind Lab. Our experiments show that (1) the model completion problem is harder in reinforcement learning than in supervised learning because of the unavailability of the trained agent's trajectories, and (2) its hardness depends not primarily on the number of parameters in the missing part, but more so on their type and location. Our results suggest that model splitting might be a feasible technique for shared model governance in settings where training is very expensive.
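The hardness metric can be thought of as a ratio of training costs: how many steps it takes to recover a given fraction of the original model's performance starting from the partial model, versus starting from scratch. The sketch below illustrates that idea; the exact formula and the function names (`steps_to_reach`, `mc_hardness`, the `alpha` threshold) are illustrative assumptions, not the paper's precise definition.

```python
def steps_to_reach(curve, target):
    """Index of the first training step whose performance meets `target`,
    or None if the curve never reaches it."""
    for step, perf in enumerate(curve):
        if perf >= target:
            return step
    return None


def mc_hardness(scratch_curve, completion_curve, alpha, best_perf):
    """Illustrative model-completion hardness (an assumed formulation):
    the ratio of training steps needed to recover a fraction `alpha`
    of the original model's performance when completing a partially
    known model, versus training from scratch. Values near 0 mean the
    missing part is cheap to recover; values near 1 mean completing
    the model costs about as much as training the whole model anew."""
    target = alpha * best_perf
    t_scratch = steps_to_reach(scratch_curve, target)
    t_completion = steps_to_reach(completion_curve, target)
    if t_scratch is None or t_completion is None or t_scratch == 0:
        return None  # target never reached, or reached instantly
    return t_completion / t_scratch


# Hypothetical learning curves: from-scratch training needs 90 steps to
# hit 90% of final performance, completion needs only 27.
scratch = [i / 100 for i in range(101)]
completion = [i / 30 for i in range(31)]
print(mc_hardness(scratch, completion, alpha=0.9, best_perf=1.0))  # → 0.3
```

Under this formulation, finding (2) of the abstract corresponds to the observation that the hardness value varies more with *which* layers are missing than with how many parameters they contain.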

Related research:

- Listwise Learning to Rank with Deep Q-Networks (02/13/2020)
- An Experimental Comparison Between Temporal Difference and Residual Gradient with Neural Network Approximation (05/25/2022)
- Secure Data Sharing With Flow Model (09/24/2020)
- Autonomous Driving in Reality with Reinforcement Learning and Image Translation (01/13/2018)
- Compression and Localization in Reinforcement Learning for ATARI Games (04/20/2019)
- An Information-Theoretic Analysis of Compute-Optimal Neural Scaling Laws (12/02/2022)
- An Empirical Model of Large-Batch Training (12/14/2018)