Generalizable Neural Fields as Partially Observed Neural Processes

09/13/2023
by Jeffrey Gu, et al.

Neural fields, which represent signals as functions parameterized by neural networks, are a promising alternative to traditional discrete vector- or grid-based representations. Compared to discrete representations, neural representations scale well with increasing resolution, are continuous, and can be differentiated many times over. However, given a dataset of signals to represent, optimizing a separate neural field for each signal is inefficient and cannot exploit information or structure shared across signals. Existing generalization methods view this as a meta-learning problem: they either employ gradient-based meta-learning to learn an initialization that is then fine-tuned with test-time optimization, or learn hypernetworks that produce the weights of a neural field. We instead propose a new paradigm that views the large-scale training of neural representations as part of a partially observed neural process framework, and leverage neural process algorithms to solve this task. We demonstrate that this approach outperforms both state-of-the-art gradient-based meta-learning approaches and hypernetwork approaches.
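
The abstract frames generalizable neural fields as conditioning a coordinate function on a partial observation of each signal, rather than fitting or meta-initializing a separate network per signal. The sketch below is a minimal, hypothetical illustration of that framing in PyTorch and is not the authors' architecture: ContextEncoder, ConditionedField, and all hyperparameters here are assumptions. It pools observed (coordinate, value) pairs into a per-signal latent and decodes query coordinates conditioned on that latent, so a new signal is handled in a single forward pass with no test-time optimization.

# Minimal sketch (assumed, not the authors' architecture) of a conditional-
# neural-process-style generalizable neural field. The context set is a
# partial observation of one signal: coordinates x_c with observed values y_c.
import torch
import torch.nn as nn


class ContextEncoder(nn.Module):
    """Encodes observed (coordinate, value) pairs into a permutation-invariant latent."""

    def __init__(self, coord_dim=2, value_dim=1, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(coord_dim + value_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, x_c, y_c):
        # x_c: (batch, num_context, coord_dim), y_c: (batch, num_context, value_dim)
        h = self.net(torch.cat([x_c, y_c], dim=-1))
        return h.mean(dim=1)  # mean-pool over the context set -> (batch, latent_dim)


class ConditionedField(nn.Module):
    """A coordinate MLP whose output is conditioned on the per-signal latent."""

    def __init__(self, coord_dim=2, value_dim=1, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(coord_dim + latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, value_dim),
        )

    def forward(self, x_q, z):
        # x_q: (batch, num_query, coord_dim), z: (batch, latent_dim)
        z = z.unsqueeze(1).expand(-1, x_q.shape[1], -1)
        return self.net(torch.cat([x_q, z], dim=-1))


# Toy usage: amortized prediction from a partial observation, no per-signal optimization.
encoder, field = ContextEncoder(), ConditionedField()
x_c, y_c = torch.rand(4, 64, 2), torch.rand(4, 64, 1)  # observed pixels of 4 signals
x_q = torch.rand(4, 256, 2)                            # query coordinates
y_pred = field(x_q, encoder(x_c, y_c))                 # (4, 256, 1)

Gradient-based meta-learning would instead learn a shared initialization and fine-tune a separate field per signal, while a hypernetwork would emit the field's weights directly; the amortized conditioning above is what a neural-process-style treatment replaces them with.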


Related research

- MetaSDF: Meta-learning Signed Distance Functions (06/17/2020)
- Meta-Learning Sparse Implicit Neural Representations (10/27/2021)
- Learned Initializations for Optimizing Coordinate-Based Neural Representations (12/03/2020)
- Contrastive Meta-Learning for Partially Observable Few-Shot Learning (01/30/2023)
- HyperSound: Generating Implicit Neural Representations of Audio Signals with Hypernetworks (11/03/2022)
- Reusable Options through Gradient-based Meta Learning (12/22/2022)
- Meta Learning in Decentralized Neural Networks: Towards More General AI (02/02/2023)
