Efficient Meta-Learning via Error-based Context Pruning for Implicit Neural Representations

02/01/2023
by   Jihoon Tack, et al.

We introduce an efficient optimization-based meta-learning technique for learning large-scale implicit neural representations (INRs). Our main idea is an online selection of context points, which can significantly reduce the memory requirements of meta-learning in any established setting. The resulting memory savings allow longer per-signal adaptation horizons at a given memory budget, leading to better meta-initializations by reducing myopia and, more crucially, enabling learning on high-dimensional signals. To implement such context pruning, our technical novelty is three-fold. First, we propose a selection scheme that adaptively chooses a subset of context points at each adaptation step based on the predictive error, so that early steps model the global structure of the signal while later steps capture its high-frequency details. Second, we counteract possible information loss from context pruning by minimizing the parameter distance to a bootstrapped target model trained on the full context set. Finally, we use the full context set with a gradient scaling scheme at test time. Our technique is model-agnostic, intuitive, and straightforward to implement, yielding significant reconstruction improvements across a wide range of signals. Code is available at https://github.com/jihoontack/ECoP
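The error-based selection described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the INR architecture, the `prune_context` helper, and the `keep_ratio` parameter are all hypothetical, and the inner loop shown is plain SGD adaptation on a single signal rather than the full meta-learning setup.

```python
import torch
import torch.nn as nn

def prune_context(model, coords, targets, keep_ratio):
    """Hypothetical helper: keep the context points with the largest
    prediction error under the current model, so each adaptation step
    focuses on the regions the INR currently reconstructs worst."""
    with torch.no_grad():
        err = ((model(coords) - targets) ** 2).mean(dim=-1)  # per-point MSE
    k = max(1, int(keep_ratio * coords.shape[0]))
    idx = err.topk(k).indices
    return coords[idx], targets[idx]

# Toy INR: an MLP mapping 2-D coordinates to a scalar signal value.
inr = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

coords = torch.rand(1024, 2)   # full context set (e.g. pixel coordinates)
targets = torch.rand(1024, 1)  # signal values at those coordinates

opt = torch.optim.SGD(inr.parameters(), lr=1e-2)
for step in range(5):  # inner-loop adaptation on the pruned context
    c, t = prune_context(inr, coords, targets, keep_ratio=0.25)
    loss = ((inr(c) - t) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the pruned subset is recomputed at every step, the selected points shift as the model improves: early on, high-error points are spread across the signal (global structure), while later steps concentrate on residual high-frequency detail.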

