MEIL-NeRF: Memory-Efficient Incremental Learning of Neural Radiance Fields

12/16/2022
by   JaeYoung Chung, et al.

Built on the representation power of neural networks, neural radiance fields (NeRF) have recently emerged as one of the most promising and widely applicable methods for 3D object and scene representation. However, NeRF faces challenges in practical applications such as large-scale scenes and edge devices with limited memory, where data must be processed sequentially. Under such incremental learning scenarios, neural networks are known to suffer from catastrophic forgetting: they easily forget previously seen data after training on new data. We observe that previous incremental learning algorithms are limited by either low performance or memory scalability issues. We therefore develop a Memory-Efficient Incremental Learning algorithm for NeRF (MEIL-NeRF). MEIL-NeRF takes inspiration from NeRF itself, in that a neural network can serve as a memory that provides pixel RGB values given rays as queries. Building on this motivation, our framework learns which rays to use to query NeRF for previous pixel values. The extracted pixel values are then used to train NeRF in a self-distillation manner to prevent catastrophic forgetting. As a result, MEIL-NeRF achieves constant memory consumption and competitive performance.
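The self-distillation idea in the abstract can be sketched in a few lines: snapshot the current model, query the frozen copy with previously seen rays to extract pseudo ground-truth pixel values, then train on the union of the new data and these self-distilled targets. The sketch below is a deliberately toy illustration of that loop — `TinyField`, `incremental_step`, and the table-based "model" are invented stand-ins, not the paper's actual NeRF architecture, ray-selection network, or loss.

```python
import copy

def mse(pred, target):
    """Mean squared error between two RGB triples."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

class TinyField:
    """Toy stand-in for a NeRF MLP: maps a ray id to an RGB triple.
    A real implementation would be a trained neural network."""
    def __init__(self):
        self.table = {}

    def query(self, ray):
        # NeRF-as-memory: given a ray, return the stored pixel value.
        return self.table.get(ray, (0.0, 0.0, 0.0))

    def fit(self, ray, rgb):
        # Stand-in for a gradient step toward (ray, rgb).
        self.table[ray] = rgb

def incremental_step(model, new_rays, new_rgbs, old_rays):
    """One MEIL-style update: freeze a teacher copy, extract pseudo
    pixel values for old rays, then train on new data plus the
    self-distilled old data to counter catastrophic forgetting."""
    teacher = copy.deepcopy(model)                 # frozen memory of the past
    pseudo = [teacher.query(r) for r in old_rays]  # extracted pixel values
    for r, c in zip(new_rays, new_rgbs):           # fit the new task data
        model.fit(r, c)
    for r, c in zip(old_rays, pseudo):             # self-distillation term
        model.fit(r, c)
    # Distillation loss after the update (trivially zero for this toy model).
    return sum(mse(model.query(r), c) for r, c in zip(old_rays, pseudo))
```

Note that only the rays — not the original images — are needed to regenerate the old supervision signal, which is what keeps memory consumption constant as new scenes arrive.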


