Visible Light-Based Human Visual System Conceptual Model

by Lee Prangnell, et al.

There is a widely held belief in the digital image and video processing community: the Human Visual System (HVS) is more sensitive to luminance (often confused with brightness) than to photon energies (often confused with chromaticity and chrominance). Passages similar to the following appear frequently in the peer-reviewed literature and in academic textbooks: "the HVS is much more sensitive to brightness than colour" or "the HVS is much more sensitive to luma than chroma". In this discussion paper, a Visible Light-Based Human Visual System (VL-HVS) conceptual model is discussed. The objectives of the VL-HVS model are as follows:

1. To facilitate deeper theoretical reflection on the fundamental relationship between visible light, the colour perception derived from visible light and the physiology of colour perception. That is, in terms of the physics of visible light, photobiology and the human subjective interpretation of visible light, it is appropriate to provide comprehensive background information on the natural interactions between visible light, the retinal photoreceptors and the subsequent cortical processing.

2. To provide a more complete account of colour information in digital image and video processing applications.

3. To recontextualise colour data in the RGB and YCbCr colour spaces, so that novel techniques in digital image and video processing, including quantisation and artifact reduction techniques, may be developed based on both luma and chroma information (not luma data only).
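To make the luma/chroma distinction referenced above concrete, the following is a minimal sketch of the standard full-range RGB-to-YCbCr conversion (coefficients per ITU-R BT.601); the function name and float-based interface are illustrative assumptions, not taken from the paper:

```python
def rgb_to_ycbcr(r, g, b):
    """Convert full-range 8-bit RGB to full-range YCbCr (BT.601 coefficients).

    Y  is the luma component (a weighted sum of R, G, B);
    Cb and Cr are the chroma components, offset so that 128 means
    "no colour difference" (i.e. a neutral grey).
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr


# Any achromatic input (r == g == b) yields Cb == Cr == 128,
# so all of its information lives in the luma channel.
print(rgb_to_ycbcr(255, 255, 255))  # white: (255.0, 128.0, 128.0)
```

Note that the luma weights (0.299, 0.587, 0.114) already encode an HVS sensitivity model: green contributes most to perceived lightness, blue least. The paper's third objective concerns techniques that exploit the chroma channels Cb and Cr as well, rather than operating on Y alone.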


