Random Weight Factorization Improves the Training of Continuous Neural Representations

10/03/2022
by Sifan Wang et al.

Continuous neural representations have recently emerged as a powerful and flexible alternative to classical discretized representations of signals. However, training them to capture fine details in multi-scale signals is difficult and computationally expensive. Here we propose random weight factorization as a simple drop-in replacement for parameterizing and initializing conventional linear layers in coordinate-based multi-layer perceptrons (MLPs) that significantly accelerates and improves their training. We show how this factorization alters the underlying loss landscape and effectively enables each neuron in the network to learn using its own self-adaptive learning rate. This not only helps with mitigating spectral bias, but also allows networks to quickly recover from poor initializations and reach better local minima. We demonstrate how random weight factorization can be leveraged to improve the training of neural representations on a variety of tasks, including image regression, shape representation, computed tomography, inverse rendering, solving partial differential equations, and learning operators between function spaces.
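To make the idea concrete, below is a minimal sketch of a factorized dense layer in JAX, assuming the factorization described in the abstract, where each weight matrix is written as W = diag(exp(s)) · V with a per-neuron scale exponent s. The initialization constants `mu` and `sigma`, and the Glorot-style draw for the unfactorized matrix, are illustrative assumptions, not necessarily the paper's recommended settings.

```python
import jax
import jax.numpy as jnp

def init_factorized_dense(key, in_dim, out_dim, mu=1.0, sigma=0.1):
    """Initialize a dense layer with random weight factorization W = diag(exp(s)) @ V.

    `mu` and `sigma` are hypothetical defaults for the scale distribution;
    consult the paper for its actual recommended values.
    """
    k1, k2 = jax.random.split(key)
    # Standard (Glorot-style) initialization of the unfactorized weight matrix.
    W = jax.random.normal(k1, (in_dim, out_dim)) * jnp.sqrt(2.0 / (in_dim + out_dim))
    b = jnp.zeros(out_dim)
    # Sample one scale exponent per output neuron and divide it out of W,
    # so that exp(s) * V reproduces the original initialization exactly.
    s = mu + sigma * jax.random.normal(k2, (out_dim,))
    V = W / jnp.exp(s)
    return {"s": s, "V": V, "b": b}

def factorized_dense(params, x):
    """Apply the factorized layer: y = x @ (exp(s) * V) + b."""
    kernel = jnp.exp(params["s"]) * params["V"]  # per-column (per-neuron) rescaling
    return x @ kernel + params["b"]
```

Because gradients with respect to V are rescaled by exp(s) (and conversely for s), each output neuron effectively trains with its own learning-rate multiplier, which is the self-adaptive behaviour the abstract refers to.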

