HVS Revisited: A Comprehensive Video Quality Assessment Framework

10/09/2022
by Ao-Xiang Zhang, et al.

Video quality is a primary concern for video service providers. In recent years, techniques for video quality assessment (VQA) based on deep convolutional neural networks (CNNs) have developed rapidly. Although existing works attempt to introduce knowledge of the human visual system (HVS) into VQA, limitations remain that prevent HVS from being fully exploited, including modeling with only a few characteristics and insufficient connections among those characteristics. To overcome these limitations, this paper revisits HVS with five representative characteristics and further reorganizes their connections. Based on the revisited HVS, a no-reference VQA framework called HVS-5M (an NR-VQA framework with five modules simulating five HVS characteristics) is proposed. It works in a domain-fusion design paradigm with advanced network structures. On the spatial side, the visual saliency module applies SAMNet to obtain a saliency map. The content-dependency and edge masking modules then use ConvNeXt to extract spatial features, which are attentively weighted by the saliency map to highlight the regions that human beings are likely to attend to. On the temporal side, to supplement the static spatial features, the motion perception module uses SlowFast to obtain dynamic temporal features. In addition, the temporal hysteresis module applies TempHyst to simulate the human memory mechanism and evaluates the overall quality score from the fused spatial and temporal features. Extensive experiments show that HVS-5M outperforms state-of-the-art VQA methods. Ablation studies further verify the effectiveness of each module in the proposed framework.
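To make the domain-fusion design concrete, the sketch below outlines the data flow the abstract describes: saliency-weighted spatial features per frame, clip-level temporal features, and a fused quality regression with a temporal pooling step. This is a minimal PyTorch sketch under stated assumptions, not the authors' implementation: SaliencyNet, SpatialEncoder, MotionEncoder, and the pooling rule are hypothetical placeholders standing in for SAMNet, ConvNeXt, SlowFast, and TempHyst, whose actual architectures and interfaces are not reproduced here.

```python
import torch
import torch.nn as nn


class SaliencyNet(nn.Module):
    """Placeholder for the visual saliency module (SAMNet in the paper)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, frames):                 # frames: (B*T, 3, H, W)
        return self.net(frames)                # saliency map: (B*T, 1, H, W)


class SpatialEncoder(nn.Module):
    """Placeholder for the content-dependency / edge masking branches (ConvNeXt in the paper)."""
    def __init__(self, dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))

    def forward(self, frames, saliency):
        weighted = frames * saliency           # attentively weight frames by saliency
        return self.backbone(weighted).flatten(1)   # (B*T, dim)


class MotionEncoder(nn.Module):
    """Placeholder for the motion perception module (SlowFast in the paper)."""
    def __init__(self, dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(3, dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1))

    def forward(self, clip):                   # clip: (B, 3, T, H, W)
        return self.backbone(clip).flatten(1)  # (B, dim)


class HVS5MSketch(nn.Module):
    """Domain fusion: per-frame spatial features + clip-level temporal features -> quality score."""
    def __init__(self, dim=64):
        super().__init__()
        self.saliency = SaliencyNet()
        self.spatial = SpatialEncoder(dim)
        self.motion = MotionEncoder(dim)
        self.regressor = nn.Linear(2 * dim, 1)

    def forward(self, clip):                   # clip: (B, 3, T, H, W)
        b, c, t, h, w = clip.shape
        frames = clip.permute(0, 2, 1, 3, 4).reshape(b * t, c, h, w)
        sal = self.saliency(frames)
        spat = self.spatial(frames, sal).reshape(b, t, -1)       # per-frame spatial features
        temp = self.motion(clip).unsqueeze(1).expand(-1, t, -1)  # clip-level temporal features
        fused = torch.cat([spat, temp], dim=-1)
        frame_scores = self.regressor(fused).squeeze(-1)         # (B, T)
        # Crude stand-in for temporal-hysteresis pooling (TempHyst): blend the worst and
        # the mean frame quality so poor moments weigh on the remembered score.
        return 0.5 * frame_scores.min(dim=1).values + 0.5 * frame_scores.mean(dim=1)


if __name__ == "__main__":
    model = HVS5MSketch()
    video = torch.rand(2, 3, 8, 64, 64)        # batch of 2 clips, 8 frames each
    print(model(video).shape)                  # torch.Size([2])
```

The separation into frame-level spatial encoding and clip-level temporal encoding mirrors the paper's spatial/temporal split; in the actual framework the backbones are pretrained networks and the hysteresis pooling is learned rather than the fixed min/mean blend used above.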
