CocoNet: A deep neural network for mapping pixel coordinates to color values

05/29/2018
by Paul Andrei Bricman, et al.

In this paper, we propose a deep neural network approach for mapping the 2D pixel coordinates in an image to the corresponding Red-Green-Blue (RGB) color values. The neural network is termed CocoNet, i.e. COordinates-to-COlor NETwork. During the training process, the neural network learns to encode the input image within its layers. More specifically, the network learns a continuous function that approximates the discrete RGB values sampled over the discrete 2D pixel locations. At test time, given a 2D pixel coordinate, the neural network outputs the approximate RGB values of the corresponding pixel. By querying every 2D pixel location, the network can reconstruct the entire learned image. It is important to note that an individual neural network must be trained for each input image, i.e. one network encodes a single image only. To the best of our knowledge, we are the first to propose a neural approach for encoding images individually, by learning a mapping from the 2D pixel coordinate space to the RGB color space. Our neural image encoding approach has various low-level image processing applications, ranging from image encoding, image compression and image denoising to image resampling and image completion. We conduct experiments that include both quantitative and qualitative results, demonstrating the utility of our approach and its superiority over standard baselines, e.g. bilateral filtering or bicubic interpolation.

