Towards Explainable In-the-Wild Video Quality Assessment: A Database and a Language-Prompted Approach

by Haoning Wu et al.

The proliferation of in-the-wild videos has greatly expanded the scope of the Video Quality Assessment (VQA) problem. Unlike early definitions that usually focused on limited distortion types, VQA on in-the-wild videos is especially challenging because quality can be affected by complicated factors, including various distortions and diverse content. Although subjective studies have collected overall quality scores for these videos, how the abstract quality scores relate to specific factors remains obscure, hindering VQA methods from giving more concrete quality evaluations (e.g., the sharpness of a video). To solve this problem, we collect over two million opinions on 4,543 in-the-wild videos across 13 dimensions of quality-related factors, including in-capture authentic distortions (e.g., motion blur, noise, flicker), errors introduced by compression and transmission, and higher-level experiences of semantic content and aesthetic issues (e.g., composition, camera trajectory), to establish the multi-dimensional Maxwell database. Specifically, we ask subjects to choose among a positive, a negative, and a neutral option for each dimension. These explanation-level opinions allow us to measure the relationships between specific quality factors and abstract subjective quality ratings, and to benchmark different categories of VQA algorithms on each dimension, so as to analyze their strengths and weaknesses more comprehensively. Furthermore, we propose MaxVQA, a language-prompted VQA approach that modifies the vision-language foundation model CLIP to better capture the important quality issues observed in our analyses. MaxVQA can jointly evaluate various specific quality factors and final quality scores with state-of-the-art accuracy on all dimensions, and with superb generalization to existing datasets. Code and data available at
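The ternary (positive/neutral/negative) opinions described above can be related to overall quality scores via rank correlation. Below is a minimal sketch of that analysis with entirely hypothetical data: the +1/0/−1 encoding, the toy opinions, and the mean opinion scores (MOS) are illustrative assumptions, not the paper's actual protocol or numbers.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical encoding of the three-choice opinions as scalars.
LABEL_SCORE = {"positive": 1.0, "neutral": 0.0, "negative": -1.0}

def factor_score(labels):
    """Mean of encoded ternary opinions for one video on one factor, in [-1, 1]."""
    return float(np.mean([LABEL_SCORE[l] for l in labels]))

# Toy opinions for four videos on a single factor (e.g. sharpness).
opinions = [
    ["positive", "positive", "neutral"],   # -> 2/3
    ["neutral", "positive", "negative"],   # -> 0
    ["positive", "neutral", "neutral"],    # -> 1/3
    ["negative", "negative", "neutral"],   # -> -2/3
]
mos = [4.2, 3.5, 2.1, 1.0]                 # hypothetical overall quality scores

scores = [factor_score(o) for o in opinions]
rho, _ = spearmanr(scores, mos)            # rank correlation of factor vs. MOS
print(round(rho, 3))                       # -> 0.8
```

A high Spearman correlation would indicate that the factor (here, sharpness) strongly tracks overall perceived quality; repeating this per dimension yields the kind of factor-vs-quality analysis the abstract describes.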
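For the language-prompted side, one common CLIP-based recipe (a sketch of the general technique, not necessarily MaxVQA's exact formulation) scores a video by comparing its visual embedding against an antonym prompt pair and taking a softmax over the cosine similarities. The 2-D vectors, prompt texts, and temperature below are illustrative stand-ins for real CLIP embeddings and tuned values.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two 1-D embeddings."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def prompt_score(video_emb, pos_emb, neg_emb, tau=0.01):
    """Softmax over similarities to an antonym prompt pair; returns P(positive)."""
    logits = np.array([cosine(video_emb, pos_emb),
                       cosine(video_emb, neg_emb)]) / tau
    logits -= logits.max()          # numerical stability before exponentiation
    p = np.exp(logits)
    return float(p[0] / p.sum())

# Toy 2-D stand-ins for CLIP embeddings of one video and two text prompts.
video = np.array([1.0, 0.1])
pos   = np.array([1.0, 0.0])        # e.g. embedding of "a high quality photo"
neg   = np.array([0.0, 1.0])        # e.g. embedding of "a low quality photo"

print(prompt_score(video, pos, neg))  # close to 1: video matches the positive prompt
```

With one such prompt pair per dimension (sharp/blurry, stable/shaky, well-composed/poorly composed, ...), the same scoring head can emit a vector of per-factor scores alongside an overall quality score, which is the joint evaluation the abstract attributes to MaxVQA.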

StableVQA: A Deep No-Reference Quality Assessment Model for Video Stability

Exploring Video Quality Assessment on User Generated Contents from Aesthetic and Technical Perspectives

MD-VQA: Multi-Dimensional Quality Assessment for UGC Live Videos

Towards Robust Text-Prompted Semantic Criterion for In-the-Wild Video Quality Assessment

No-Reference Video Quality Assessment using Multi-Level Spatially Pooled Features

StarVQA: Space-Time Attention for Video Quality Assessment

Exploring Opinion-unaware Video Quality Assessment with Semantic Affinity Criterion
