Hyperspectral Image Compression Using Implicit Neural Representation
Hyperspectral images, which record the electromagnetic spectrum at each pixel of a scene, often store hundreds of channels per pixel and contain an order of magnitude more information than a similarly sized color image. As the cost of capturing these images continues to fall, efficient techniques for storing, transmitting, and analyzing them are increasingly needed. This paper develops a method for hyperspectral image compression using implicit neural representations, in which a multilayer perceptron network Φ_θ with sinusoidal activation functions "learns" to map pixel locations to pixel intensities for a given hyperspectral image I. Φ_θ thus acts as a compressed encoding of the image, and the original image is reconstructed by evaluating Φ_θ at each pixel location. We evaluate our method on four benchmarks (Indian Pines, Cuprite, Pavia University, and Jasper Ridge) and show that the proposed method achieves better compression than JPEG, JPEG2000, and PCA-DCT at low bitrates.
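The sketch below illustrates the general idea under stated assumptions; it is not the authors' implementation. A SIREN-style MLP maps normalized (x, y) pixel coordinates to a C-channel spectrum and is overfit to a single hyperspectral image, so its weights serve as the compressed encoding. The layer widths, depth, omega_0 frequency, learning rate, and step count are illustrative choices, not values reported in the paper.

```python
# Minimal sketch of an implicit neural representation for a hyperspectral
# image (assumed hyperparameters; not the paper's exact architecture).
import torch
import torch.nn as nn


class SineLayer(nn.Module):
    """Linear layer followed by a sine activation, with SIREN-style init."""

    def __init__(self, in_features, out_features, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        with torch.no_grad():
            if is_first:
                bound = 1.0 / in_features
            else:
                bound = (6.0 / in_features) ** 0.5 / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))


class HyperspectralSiren(nn.Module):
    """Phi_theta: (x, y) in [-1, 1]^2 -> spectrum with `channels` bands."""

    def __init__(self, channels, hidden=256, layers=3):
        super().__init__()
        net = [SineLayer(2, hidden, is_first=True)]
        net += [SineLayer(hidden, hidden) for _ in range(layers - 1)]
        net += [nn.Linear(hidden, channels)]
        self.net = nn.Sequential(*net)

    def forward(self, coords):
        return self.net(coords)


def fit(image, steps=2000, lr=1e-4):
    """Overfit the network to one hyperspectral image (H x W x C tensor).

    The trained weights are the compressed representation; the image is
    reconstructed by evaluating the model on the full pixel grid.
    """
    h, w, c = image.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij"
    )
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    target = image.reshape(-1, c)

    model = HyperspectralSiren(channels=c)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean((model(coords) - target) ** 2)  # reconstruction MSE
        loss.backward()
        opt.step()
    return model


if __name__ == "__main__":
    # Toy example: a random 64 x 64 image with 200 spectral bands.
    dummy = torch.rand(64, 64, 200)
    model = fit(dummy, steps=100)
    print(sum(p.numel() for p in model.parameters()), "parameters encode the image")
```

In this framing, the compression ratio is set by the parameter count of Φ_θ relative to the raw image size, and the rate can be traded against reconstruction quality by varying the hidden width and depth.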