Multi-Head ReLU Implicit Neural Representation Networks

10/07/2021
by Arya Aftab, et al.

In this paper, a novel multi-head multi-layer perceptron (MLP) structure is presented for implicit neural representation (INR). Since conventional rectified linear unit (ReLU) networks are known to exhibit a spectral bias towards learning the low-frequency features of a signal, we aim to mitigate this defect by exploiting the local structure of signals. Specifically, an MLP is used to capture the global features of the underlying generator function of the desired signal, and several heads are then utilized to reconstruct disjoint local features of the signal; to reduce computational complexity, sparse layers are deployed to attach the heads to the body. Through various experiments, we show that the proposed model does not suffer from the spectral bias of conventional ReLU networks and has superior generalization capabilities. Finally, simulation results confirm that the proposed multi-head structure outperforms existing INR methods at considerably lower computational cost.
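To make the architecture concrete, below is a minimal PyTorch sketch of the idea the abstract describes: a shared ReLU MLP body extracts global features from input coordinates, and each head reconstructs one disjoint local patch of the signal. The class name, layer widths, number of heads, and the grid-based routing of points to heads are illustrative assumptions, not the authors' exact implementation; in particular, the dense two-layer heads here stand in for the sparse attachment layers the paper uses to cut computation.

```python
# Illustrative sketch of a multi-head ReLU INR (assumed architecture,
# not the authors' exact code). Body: shared ReLU MLP for global
# features; heads: one small MLP per disjoint signal patch.
import torch
import torch.nn as nn

class MultiHeadINR(nn.Module):
    def __init__(self, in_dim=2, hidden=256, body_layers=3,
                 feat_dim=64, num_heads=16, out_dim=1):
        super().__init__()
        layers = [nn.Linear(in_dim, hidden), nn.ReLU()]
        for _ in range(body_layers - 1):
            layers += [nn.Linear(hidden, hidden), nn.ReLU()]
        layers.append(nn.Linear(hidden, feat_dim))
        self.body = nn.Sequential(*layers)  # global feature extractor
        # One head per patch; dense here, sparse in the paper.
        self.heads = nn.ModuleList([
            nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(),
                          nn.Linear(32, out_dim))
            for _ in range(num_heads)
        ])
        self.num_heads = num_heads
        self.out_dim = out_dim

    def forward(self, coords, head_idx):
        # coords: (N, in_dim) in [0, 1); head_idx: (N,) long tensor
        # giving the patch each coordinate belongs to.
        feats = self.body(coords)
        out = coords.new_zeros(coords.shape[0], self.out_dim)
        for h in range(self.num_heads):
            mask = head_idx == h
            if mask.any():
                out[mask] = self.heads[h](feats[mask])
        return out
```

A usage sketch, assuming a hypothetical 4x4 patch grid over a 2-D domain so that the routing index matches the 16 heads above:

```python
model = MultiHeadINR(num_heads=16)
xy = torch.rand(1024, 2)  # random 2-D coordinates in [0, 1)
# Route each point to the head owning its cell in the 4x4 grid.
gx = (xy[:, 0] * 4).long().clamp(max=3)
gy = (xy[:, 1] * 4).long().clamp(max=3)
pred = model(xy, gx * 4 + gy)  # (1024, 1) reconstructed values
```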


Related research

02/02/2022
The Role of Linear Layers in Nonlinear Interpolating Networks
This paper explores the implicit bias of overparameterized neural networ...

10/10/2018
Random ReLU Features: Universality, Approximation, and Composition
We propose random ReLU features models in this work. Its motivation is r...

04/03/2016
Multi-Bias Non-linear Activation in Deep Neural Networks
As a widely used non-linear activation, Rectified Linear Unit (ReLU) sep...

03/02/2023
The Double-Edged Sword of Implicit Bias: Generalization vs. Robustness in ReLU Networks
In this work, we study the implications of the implicit bias of gradient...

05/24/2023
Linear Neural Network Layers Promote Learning Single- and Multiple-Index Models
This paper explores the implicit bias of overparameterized neural networ...

01/14/2023
Understanding the Spectral Bias of Coordinate Based MLPs Via Training Dynamics
Recently, multi-layer perceptrons (MLPs) with ReLU activations have enab...

04/19/2023
Generalization and Estimation Error Bounds for Model-based Neural Networks
Model-based neural networks provide unparalleled performance for various...
