Gradient Origin Networks

07/07/2020
by Chris G. Willcocks, et al.

This paper proposes a new type of implicit generative model that can quickly learn a latent representation without an explicit encoder. This is achieved with an implicit neural network that takes points in the coordinate space as input, alongside a latent vector initialised with zeros. The gradients of the data-fitting loss with respect to this zero vector serve as latent points that capture the data manifold, and are optimised jointly with the network. The results show similar characteristics to autoencoders, but with fewer parameters and the advantages of implicit representation networks.
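To make the mechanism concrete, below is a minimal PyTorch-style sketch of a single training step under these assumptions: a mean-squared-error data-fitting loss and an implicit network `net(coords, z)` mapping coordinates plus a latent vector to pixel values. The names `gon_step`, `net`, `coords`, `pixels`, and `latent_dim` are illustrative, not from the paper.

```python
import torch
import torch.nn.functional as F

def gon_step(net, coords, pixels, latent_dim):
    """One step: derive latents from the gradient at the origin,
    then return the reconstruction loss for those latents."""
    batch = pixels.size(0)
    # Zero-initialised latent vector (the "origin"), tracked for gradients.
    z0 = torch.zeros(batch, latent_dim, device=pixels.device,
                     requires_grad=True)
    # Inner pass: data-fitting loss evaluated at the zero latent.
    inner_loss = F.mse_loss(net(coords, z0), pixels)
    # The negative gradient w.r.t. the zero vector acts as the latent point;
    # create_graph=True keeps it differentiable for the outer optimisation.
    z = -torch.autograd.grad(inner_loss, z0, create_graph=True)[0]
    # Outer pass: reconstruct from the derived latent; this loss trains `net`.
    return F.mse_loss(net(coords, z), pixels)
```

A training loop would call this once per batch and back-propagate the returned loss into the network parameters only; because the latent point is derived from a gradient rather than predicted, no separate encoder parameters are needed.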
