BBAND Index: A No-Reference Banding Artifact Predictor

02/27/2020 ∙ by Zhengzhong Tu, et al.

Banding artifact, or false contouring, is a common video compression impairment that tends to appear on large flat regions in encoded videos. These staircase-shaped color bands can be very noticeable in high-definition videos. Here we study this artifact, and propose a new distortion-specific no-reference video quality model for predicting banding artifacts, called the Blind BANding Detector (BBAND index). BBAND is inspired by human visual models. The proposed detector can generate a pixel-wise banding visibility map and output a banding severity score at both the frame and video levels. Experimental results show that our proposed method outperforms state-of-the-art banding detection algorithms and delivers better consistency with subjective evaluations.




1 Introduction

Banding/false contour remains one of the dominant artifacts that plague the quality of high-definition (HD) videos, especially when viewed on high-resolution or Retina displays. Yet, while significant research effort has been devoted to analyzing various specific compression-related artifacts [shahid2014no], such as noise [norkin2018film], blockiness [wang2000blind], ringing [marziliano2004perceptual], and blur [marziliano2002no], less attention has been paid to analyzing banding/false contours. Given the rapidly growing demand for HD/Ultra-HD videos, the need to assess and mitigate banding artifacts is receiving increased attention in both academia and industry.

(a) Original UGC
(b) Transcoded/Re-encoded
Figure 1: Banding artifacts exacerbated by transcoding/re-encoding. (a) shows a frame sampled from an original UGC video with less noticeable “noisy” banding edges, while VP9-encoding exhibits more visible “clean” banding edges, as shown in (b). The lower figures show contrast-enhanced banding regions for better visualization.
Figure 2: Schematic overview of the first portion (Section 2.1-2.3) of the proposed BBAND model. The first row shows the processing flow, while the second row depicts exemplar responses of each processing block.

Banding appears in large, smooth regions with small gradients and presents as discrete, often staircased bands of brightness or color as a result of quantization in video compression. All popular video encoders, including H.264/AVC [wiegand2003overview], VP9 [mukherjee2013latest], and H.265/HEVC [sullivan2012overview], can introduce these artifacts at low-to-medium bitrates when coding content containing smooth areas. Fig. 1 shows an example of banding artifacts exacerbated by transcoding. Traditional quality prediction algorithms such as PSNR, SSIM [wang2004image], and VMAF [li2016toward], however, do not align well with human perception of banding [wang2016perceptual]. The development of a highly reliable banding detector for both original user-generated content (UGC) and transcoded/re-encoded videos would, therefore, greatly assist streaming platforms in developing measures to avoid banding artifacts in streaming videos.

Related Work. There is some prior work relating to banding/false contour detection. Some methods [daly2004decontouring, lee2006two, huang2016understanding] exploit local features such as gradient, contrast, or entropy to measure potential banding edge statistics. However, methods like these generally do not perform well when applied to assess the severity of banding edges in video content. Another approach to banding detection is based on pixel segmentation [bhagavathy2009multiscale, baugh2014advanced, wang2016perceptual], where a bottom-up connected component analysis is first used to detect uniform segments, usually followed by a banding edge separation step. These methods are often sensitive to edge noise, though. We do not include block-based processing, as in [jin2011composite, wang2014multi], since it is hard to classify blocks where banding and textures coexist; if post-filtering is applied to such blocks, textures near the banding may become over-smoothed.

Our objective is to design an adaptive blind processor which can detect or enhance both “noisy” banding artifacts that arise in original UGC videos, as well as “clean” banding edges in transcoded videos. In this regard, it could be utilized as a basis for the development of pre-processing and post-processing debanding algorithms. More recent banding detectors like the False Contour Detection and Removal (FCDR) [huang2016understanding] and Wang’s method [wang2016perceptual] are not designed for this practical purpose, and hence it is essential to devote more research to developing other adaptive banding predictors applicable to pre- or post-debanding implementations.

In this paper we propose a new, “completely blind” [mittal2012making] banding model, dubbed the Blind BANding Artifact Detector (BBAND index), by leveraging edge detection and a human visual model. The proposed method operates on individual frames to obtain a pixel-wise banding visibility map. It can also produce no-reference perceptual quality predictions of videos with banding artifacts. Details of our proposed banding detector are given in Section 2, while evaluation results are given in Section 3. Finally, Section 4 concludes the paper.

2 Proposed Banding Detector

A block diagram of the first portion of the proposed model, which generates a pixel-wise banding visibility map (BVM), is illustrated in Fig. 2. Based on our observation that banding artifacts appear as weak edges with small gradients (whether “clean” or “noisy”), we build our banding detector (BBAND) by exploiting existing edge detection techniques as well as certain visual properties. A spatio-temporal visual importance pooling is then applied to the BVM, as shown in Fig. 3, yielding “completely blind” banding scores for both individual frames and the entire video.

2.1 Pre-processing

We have observed that re-encoding videos at bitrates optimized for streaming often exacerbates banding in videos that already exhibit slight, barely visible banding artifacts, as shown in Fig. 1. (The example frames used in this paper are from the Music2Brain YouTube channel; used with permission.) We therefore deploy self-guided filtering [he2012guided], an effective edge-preserving smoothing process, to enhance banding edges. We deemed the guided filter a better choice than the bilateral filter [tomasi1998bilateral], since it better preserves the gradient profile, which is a vital local feature in our proposed framework. Image gradients are then calculated by applying a Sobel operator after pre-smoothing, yielding a gradient feature map.
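As an illustration of this pre-processing stage, the sketch below implements a self-guided filter (the box-filter formulation from He et al.) followed by a Sobel gradient-magnitude map. The radius and regularization eps are illustrative defaults, not the paper's calibrated settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter, sobel

def self_guided_filter(img, radius=8, eps=1e-3):
    """Edge-preserving smoothing via the guided filter (He et al., 2012),
    using the image as its own guide. radius/eps are illustrative."""
    img = img.astype(np.float64)
    size = 2 * radius + 1
    mean_i = uniform_filter(img, size)              # local mean
    var_i = uniform_filter(img * img, size) - mean_i ** 2
    a = var_i / (var_i + eps)                       # per-pixel linear coefficient
    b = (1.0 - a) * mean_i
    # average the coefficients, then apply the local linear model
    return uniform_filter(a, size) * img + uniform_filter(b, size)

def gradient_map(img):
    """Sobel gradient magnitude of a (pre-smoothed) luma frame."""
    f = img.astype(np.float64)
    return np.hypot(sobel(f, axis=1), sobel(f, axis=0))
```

A larger eps smooths more aggressively (a shrinks toward 0 in low-variance regions), which is the behavior that helps expose clean banding edges in flat areas.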

2.2 Banding Edge Extraction

Inspired by the efficacy of using the Canny edge detector [canny1986computational] to improve ringing region detection [liu2010perceptually], we perform a similar procedure to extract banding edges. After pre-filtering, the pixels are classified into three classes according to their Sobel gradient profiles: pixels whose Sobel gradient magnitudes fall below a lower threshold are labeled as flat pixels; pixels with gradient magnitudes exceeding an upper threshold are marked as textures. The remaining pixels are regarded as candidate banding pixels (CBPs), on which the following steps are implemented to create a banding edge map (BEM).

  1. Uniformity Check: Only the CBPs whose neighbors are either flat pixels or CBPs are retained for further processing.

  2. Edge Thinning: Non-maxima suppression [canny1986computational] is applied to each remaining CBP along its Sobel gradient orientation to better localize the potential bands.

  3. Gap Filling: If two candidate pixels are disjoint but can be covered by a binary circular blob, the gap between the two points is filled in as banding edge.

  4. Edge Linking: All connected CBPs are linked together in lists of sequential edge points. Each edge is either a curved line or a loop.

  5. Noise Removal: Linked edges shorter than a certain threshold are discarded as visually insignificant.

  6. Edge Labeling: The resulting connected banding edges are labeled separately, defining the ultimate BEM.

The colored edge map in Fig. 2 shows a BEM extracted from an input frame. The banding edges are well localized.
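The initial three-way pixel classification that feeds the steps above can be sketched as follows; the threshold values t_flat and t_text are hypothetical placeholders, since the paper's calibrated values are not reproduced here.

```python
import numpy as np

def classify_pixels(grad_mag, t_flat=0.5, t_text=8.0):
    """Three-way classification of a Sobel gradient-magnitude map.
    t_flat/t_text are illustrative placeholders. Returns a label map:
    0 = flat pixel, 1 = candidate banding pixel (CBP), 2 = texture."""
    labels = np.ones_like(grad_mag, dtype=np.uint8)  # default: CBP
    labels[grad_mag < t_flat] = 0                    # near-zero gradient
    labels[grad_mag > t_text] = 2                    # strong gradient
    return labels
```

Only the pixels labeled 1 (CBPs) are passed on to the uniformity check, thinning, and linking steps that produce the BEM.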

2.3 Banding Visibility Estimation

Staircase-like banding artifacts appear similar to Mach bands (the Chevreul illusion), where perceived edge contrast is exaggerated by edge enhancement in the early visual system [wiki:machbands]. Explanations of the illusion usually involve the center-surround excitatory-inhibitory pooling responses of retinal receptive fields [ratliff1965mach]. Inspired by the psychovisual findings in [ross1989conditions], we developed a local banding visibility estimator based on edge contrast and perceptual masking effects. The estimator processes the BEM and yields an element-wise banding visibility map (BVM).

2.3.1 Basic Edge Feature

Banding artifacts present as visible edges. As described earlier, we use the Sobel gradient magnitude as an edge visibility feature. Since edge visibility is also affected by content, we further model visual masking, which may affect the subjective perception of banding.

2.3.2 Visual Masking

Visual masking is a phenomenon whereby the visibility of a visual stimulus (target) is reduced by the presence of another stimulus, called a mask. Well-known masking effects include luminance and texture masking [liu2010perceptually, chen2016perceptual]. Here we deploy a simple but effective quantitative model of the effect of masking on banding edge visibility.

Figure 3: Flowchart of the second portion (Section 2.4) of the proposed BBAND model, which produces banding scores on both frames and whole videos.

Local Statistics: At each detected banding pixel in the BEM, compute the local Gaussian-weighted mean μ and standard deviation σ (“sigma field”) on the original un-preprocessed frame:

μ(i,j) = Σ_{k,l} w(k,l) I(i+k, j+l)    (1)

σ(i,j) = [ Σ_{k,l} w(k,l) (I(i+k, j+l) − μ(i,j))² ]^{1/2}    (2)

where (i,j) are spatial indices at detected pixels in the BEM with corresponding original pixel intensity I(i,j), and w is a 2D isotropic Gaussian weighting function. We use the μ and σ feature maps to estimate the local background luminance and complexity. The window size was set empirically in our experiments.
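These Gaussian-weighted local statistics can be computed over the whole frame with two filtering passes; the window scale sigma below is illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_stats(frame, sigma=2.0):
    """Gaussian-weighted local mean and standard deviation ('sigma field')
    of a luma frame. sigma is an illustrative window scale."""
    f = frame.astype(np.float64)
    mu = gaussian_filter(f, sigma)                  # local mean, Eq. (1)
    var = gaussian_filter(f * f, sigma) - mu ** 2   # E[x^2] - E[x]^2
    return mu, np.sqrt(np.maximum(var, 0.0))        # clamp tiny negatives
```

The mean map estimates local background luminance; the sigma field estimates local complexity (activity), used by the masking models below.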

Luminance Masking: We define a luminance visibility transfer function to express luminance masking as a function of the local background intensity. We have observed that banding artifacts remain visible even in very dark areas, so we only model the masking at very bright pixels. A final luminance masking weight is computed at each pixel as a function of the local mean calculated using (1), with a pair of constants chosen to adjust the shape of the transfer function; their values were set empirically in our implementation.
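The paper's exact transfer function and constants are not reproduced in this text, so the following is one plausible shape consistent with the description: full visibility for dark-to-mid backgrounds, smoothly decaying above a brightness knee. Both constants are assumptions for illustration.

```python
import numpy as np

def luminance_weight(mu, bright_knee=170.0, falloff=60.0):
    """Luminance-masking weight on 8-bit luma: 1.0 for dark/mid
    backgrounds, exponentially decaying above bright_knee.
    bright_knee and falloff are illustrative, not the paper's values."""
    mu = np.asarray(mu, dtype=np.float64)
    return np.exp(-np.maximum(mu - bright_knee, 0.0) / falloff)
```

This keeps banding in dark regions fully weighted, matching the observation that banding remains visible there.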

Texture Masking: We also define a texture visibility transfer function to capture the effects of texture masking. It is defined to be inversely proportional to local image activity [liu2010perceptually] once an activity measure (the mean “sigma field”) rises above a threshold. The overall weighting function takes the local activity given by Eq. (2) together with a parameter that tunes its nonlinearity; the parameter values were adopted after careful inspection.
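A minimal sketch of this inverse-proportionality form follows; the activity threshold t_act and nonlinearity gamma are illustrative stand-ins for the paper's calibrated values.

```python
import numpy as np

def texture_weight(sigma_field, t_act=2.0, gamma=1.0):
    """Texture-masking weight: 1.0 in smooth areas, inversely
    proportional to local activity (the 'sigma field') once it
    exceeds t_act. t_act and gamma are illustrative."""
    s = np.asarray(sigma_field, dtype=np.float64)
    w = np.ones_like(s)
    busy = s > t_act
    w[busy] = (t_act / s[busy]) ** gamma  # decays as activity grows
    return w
```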

Cardinality Masking: The authors of [wang2016perceptual] showed in a subjective study that edge length is another useful banding visibility feature. We accordingly define a transfer function which weights banding visibility by edge cardinality: for the banding edge passing through each location, visibility is zero below a threshold on the minimal noticeable edge length, above which it is positively correlated with edge length normalized by the image height and width. The parameter values were set empirically in our experiments.
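One way to realize this weighting is sketched below. The minimum-length threshold and the normalization by (height + width) are assumptions for illustration; the paper's exact normalization is not reproduced in this text.

```python
def cardinality_weight(edge_len, height, width, min_len=16):
    """Edge-cardinality weight: edges shorter than min_len are treated
    as invisible (weight 0); longer edges get a weight growing with
    edge length normalized by the image dimensions, capped at 1.0.
    min_len and the (height + width) normalizer are illustrative."""
    if edge_len < min_len:
        return 0.0
    return min(1.0, edge_len / float(height + width))
```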

(a) Baugh [baugh2014advanced]
(b) Wang [wang2016perceptual]
(c) BBAND (proposed)
Figure 4: Scatter plots and regression curves of (a) Baugh [baugh2014advanced], (b) Wang [wang2016perceptual], (c) BBAND, versus MOS on banding dataset [wang2016perceptual].

2.3.3 Visibility Integration

The overall visibility of an artifact depends on the visual response to it, modulated by concurrent masking effects. Here we use a simple but effective product model of feature integration at each computed banding pixel to obtain the banding visibility map: the measured edge strength (Sobel gradient magnitude) at each location is scaled by the masking weights defined above.
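The product model amounts to an element-wise multiplication of the edge-strength map with the three masking-weight maps, restricted to the detected banding edges:

```python
import numpy as np

def banding_visibility_map(grad_mag, w_lum, w_tex, w_len, edge_mask):
    """Product integration: edge strength (Sobel gradient magnitude)
    modulated by the luminance, texture, and cardinality weights,
    evaluated only at detected banding-edge pixels (edge_mask)."""
    bvm = grad_mag * w_lum * w_tex * w_len
    return np.where(edge_mask, bvm, 0.0)
```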

2.4 Making a Banding Metric

Previous authors [chen2016perceptual, ghadiyaram2017no, moorthy2009visual, park2012video] have studied the benefits of integrating visual importance pooling into objective quality models, generally aligning with the idea that the overall perceived quality of a video is dominated by the regions having the poorest quality. In our model, we apply worst-percentile pooling to obtain an average banding score from the extracted BVM; the percentile value was chosen empirically in our experiments.
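A minimal sketch of this worst-percentile pooling, with a hypothetical percentile p (the paper's calibrated value is not reproduced here):

```python
import numpy as np

def worst_percentile_pool(bvm, p=30.0):
    """Average of the largest p% of non-zero visibility values in a
    banding visibility map. p is an illustrative percentile."""
    vals = bvm[bvm > 0]
    if vals.size == 0:
        return 0.0                              # no banding detected
    k = max(1, int(np.ceil(vals.size * p / 100.0)))
    return float(np.sort(vals)[-k:].mean())     # mean of the worst k
```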

Method                       SRCC     KRCC     PLCC     RMSE
Baugh [baugh2014advanced]    0.7739   0.6304   0.8037   9.7671
Wang [wang2016perceptual]    0.8689   0.6788   0.8770   7.8863
BBAND (proposed)             0.9330   0.8116   0.9578   4.7173
Table 1: Performance comparison of blind banding models.

Banding usually occurs in non-salient regions (e.g., the background), while salient objects draw more of the viewer’s attention. We thereby use the well-known spatial information (SI) and temporal information (TI) measures to indicate possible spatial and temporal distractors against banding visibility. SI is computed as the standard deviation of the pixel-wise gradient magnitude, while TI is computed as the standard deviation of the absolute frame differences on each frame [itu1999subjective]. These are then mapped through an exponential transfer function to obtain weights:


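The SI/TI computation described above, plus a plausible exponential mapping to distractor weights, can be sketched as follows. The exponential form matches the text, but its scale constant is an assumption for illustration.

```python
import numpy as np
from scipy.ndimage import sobel

def spatial_information(frame):
    """SI: standard deviation of the Sobel gradient magnitude."""
    f = frame.astype(np.float64)
    return float(np.hypot(sobel(f, axis=1), sobel(f, axis=0)).std())

def temporal_information(frame, prev_frame):
    """TI: standard deviation of the absolute inter-frame difference."""
    d = np.abs(frame.astype(np.float64) - prev_frame.astype(np.float64))
    return float(d.std())

def distractor_weight(info, scale=50.0):
    """Exponential transfer mapping SI/TI to a down-weighting factor:
    busier content distracts more from banding. scale is illustrative."""
    return float(np.exp(-info / scale))
```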
Finally, we construct the frame-level BBAND index by applying visual percentile pooling and weights to BVM:


where pooling is taken over the index set of the largest-percentile non-zero pixel-wise visibility values contained in the BVM of each frame. We also obtain the video-level BBAND metric by averaging all frame-level banding scores, weighted by per-frame TI:


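The video-level aggregation can be sketched as a TI-weighted average of frame scores, where higher-motion frames (larger TI) contribute less. The exponential weighting form and its scale are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def video_bband(frame_scores, frame_ti, scale=50.0):
    """Video-level BBAND sketch: frame-level scores averaged with
    weights that decay exponentially with per-frame TI (busy motion
    distracts from banding). Form and scale are illustrative."""
    scores = np.asarray(frame_scores, dtype=np.float64)
    w = np.exp(-np.asarray(frame_ti, dtype=np.float64) / scale)
    return float((scores * w).sum() / w.sum())
```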
Fig. 3 shows the entire workflow of the BBAND indices.

3 Subjective Evaluation

The remaining parameters of the proposed BBAND model were set by empirical calibration, and we found that the selected values generally perform well in most cases. We evaluated the BBAND model against two recent banding metrics, Wang [wang2016perceptual] and Baugh [baugh2014advanced], on the only existing banding dataset, created by Wang et al. [wang2016perceptual]. It consists of six 720p@30fps video clips encoded with VP9 at different levels of quantization. The Spearman rank-order correlation coefficient (SRCC) and Kendall rank-order correlation coefficient (KRCC) between predicted scores and subjective mean opinion scores (MOS) are reported directly for the evaluated methods. We also calculated the Pearson linear correlation coefficient (PLCC) and the corresponding root mean squared error (RMSE) after fitting a logistic function between MOS and predicted values [sheikh2006statistical]. Table 1 summarizes the experimental results, and Fig. 4 plots the fitted logistic curves of MOS versus the evaluated banding models. These results show that the proposed BBAND metric yields highly promising performance in terms of subjective consistency.

4 Conclusion and Future Work

We have presented a new no-reference video quality model, the BBAND index, for assessing perceived banding artifacts in high-quality or high-definition videos. The algorithm involves robust detection of banding edges, a perception-inspired estimator of banding visibility, and a model of spatio-temporal visual importance pooling. Subjective evaluation shows that our proposed method correlates with human perception more favorably than several existing banding metrics. As a “completely blind” (opinion-unaware) distortion-specific quality indicator, BBAND can be combined with other video quality measures as a tool to optimize user-generated video processing pipelines for media streaming platforms. Future work will include further improving BBAND by integrating more temporal cues, and applying it to remove banding artifacts via debanding pre-processing or post-filtering.

5 References