Deep Neural Networks are Surprisingly Reversible: A Baseline for Zero-Shot Inversion

07/13/2021
by Xin Dong, et al.

Understanding the behavior and vulnerabilities of pre-trained deep neural networks (DNNs) can help to improve them. Such analysis can be performed by reversing the network's flow to generate inputs from internal representations. Most existing work relies on priors or data-intensive optimization to invert a model, and struggles to scale to deep architectures and complex datasets. This paper presents a zero-shot direct model inversion framework that recovers the input to a trained model given only an internal representation. The crux of our method is to invert the DNN in a divide-and-conquer manner while re-syncing the inverted layers via cycle-consistency guidance with the help of synthesized data. As a result, we obtain a single feed-forward model capable of inversion in a single forward pass, without seeing any real data from the original task. With the proposed approach, we scale zero-shot direct inversion to deep architectures and complex datasets. We empirically show that modern classification models on ImageNet can, surprisingly, be inverted, allowing an approximate recovery of the original 224x224px images from representations taken after more than 20 layers. Moreover, inverting the generator of a GAN unveils the latent code of a given synthesized 128x128px face image, which can, in turn, even improve defective synthesized images from the GAN.
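To give a flavor of the divide-and-conquer idea, here is a minimal toy sketch (not the paper's method): a two-layer network with leaky-ReLU activations and square weight matrices is inverted layer by layer, back to front, and the per-layer inverses compose into a single feed-forward inversion. The network, weights, and helper names below are all hypothetical illustrations; the paper instead learns approximate inverses for deep, non-invertible models with cycle-consistency guidance on synthesized data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy network: two square linear layers with leaky-ReLU.
# Square Gaussian matrices are almost surely invertible, so each layer
# can be inverted exactly here; the paper's setting is only approximate.
W1 = rng.standard_normal((4, 4))
W2 = rng.standard_normal((4, 4))

def leaky_relu(x, a=0.1):
    return np.where(x > 0, x, a * x)

def leaky_relu_inv(y, a=0.1):
    # Leaky ReLU is bijective, so its inverse is closed-form.
    return np.where(y > 0, y, y / a)

def forward(x):
    return leaky_relu(W2 @ leaky_relu(W1 @ x))

def invert(h):
    # Divide and conquer: undo each layer in reverse order,
    # then the composition is itself one feed-forward pass.
    z = np.linalg.solve(W2, leaky_relu_inv(h))
    return np.linalg.solve(W1, leaky_relu_inv(z))

x = rng.standard_normal(4)
x_rec = invert(forward(x))
print(np.allclose(x, x_rec))  # True (exact in this toy case)
```

In deep classifiers the layers are not square or bijective, so each `invert`-style block must be replaced by a small learned decoder, with cycle-consistency (re-encoding the reconstruction and matching the representation) keeping the chained inverses in sync.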


Related research

10/22/2020 · MixCon: Adjusting the Separability of Data Representations for Harder Data Recovery
To address the issue that deep neural networks (DNNs) are vulnerable to ...

12/09/2022 · Genie: Show Me the Data for Quantization
Zero-shot quantization is a promising approach for developing lightweigh...

12/06/2021 · A Generalized Zero-Shot Quantization of Deep Convolutional Neural Networks via Learned Weights Statistics
Quantizing the floating-point weights and activations of deep convolutio...

10/26/2022 · Zero-Shot Learning of a Conditional Generative Adversarial Network for Data-Free Network Quantization
We propose a novel method for training a conditional generative adversar...

06/05/2023 · ZIGNeRF: Zero-shot 3D Scene Representation with Invertible Generative Neural Radiance Fields
Generative Neural Radiance Fields (NeRFs) have demonstrated remarkable p...

03/01/2023 · A Lifted Bregman Formulation for the Inversion of Deep Neural Networks
We propose a novel framework for the regularised inversion of deep neura...

08/11/2022 · Language Tokens: A Frustratingly Simple Approach Improves Zero-Shot Performance of Multilingual Translation
This paper proposes a simple yet effective method to improve direct (X-t...
