DeepQTMT: A Deep Learning Approach for Fast QTMT-based CU Partition of Intra-mode VVC
The latest standard, Versatile Video Coding (VVC), significantly improves coding efficiency over its predecessor standard, High Efficiency Video Coding (HEVC), but at the expense of sharply increased complexity. In VVC, the quadtree plus multi-type tree (QTMT) structure of coding unit (CU) partition accounts for most of the encoding time, due to the brute-force search for recursive rate-distortion (RD) optimization. Instead of the brute-force QTMT search, this paper proposes a deep learning approach to predict the QTMT-based CU partition, thereby drastically accelerating the encoding process of intra-mode VVC. First, we establish a large-scale database containing sufficient CU partition patterns with diverse video content, which facilitates data-driven VVC complexity reduction. Next, we propose a multi-stage exit CNN (MSE-CNN) model with an early exit mechanism to determine the CU partition, in accordance with the flexible QTMT structure at multiple stages. Then, we design an adaptive loss function for training the MSE-CNN model, synthesizing both the uncertain number of split modes and the objective of minimizing RD cost. Finally, a multi-threshold decision scheme is developed, achieving a desirable trade-off between complexity and RD performance. Experimental results demonstrate that our approach can reduce the encoding time of VVC by 44.65%~66.88%, with a negligible Bjøntegaard delta bit-rate (BD-BR) of 1.322%~3.188%, significantly outperforming state-of-the-art approaches.
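
To make the staged early-exit idea concrete, below is a minimal PyTorch sketch of multi-stage split-mode prediction with an early exit. Everything in it is an illustrative assumption rather than the paper's actual MSE-CNN: the layer sizes, the six-way split-mode head (non-split, quadtree, and the four multi-type-tree splits), and a single confidence threshold standing in for the paper's multi-threshold decision scheme.

```python
# Hedged sketch of multi-stage, early-exit split-mode prediction.
# Assumptions (not from the paper): module shapes, a 6-way mode head,
# and one scalar threshold approximating the multi-threshold scheme.
import torch
import torch.nn as nn

NUM_SPLIT_MODES = 6  # assumed: non-split, QT, BT-H, BT-V, TT-H, TT-V

class StageClassifier(nn.Module):
    """One decision stage: extracts features and scores candidate split modes."""
    def __init__(self, in_ch: int, hidden_ch: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, hidden_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(hidden_ch, NUM_SPLIT_MODES)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.features(x).flatten(1)  # (batch, hidden_ch)
        return self.head(f)              # raw scores over split modes

def predict_partition(cu: torch.Tensor, stages: list[StageClassifier],
                      threshold: float = 0.9) -> list[int]:
    """Run stages in order, exiting early once a decision terminates.

    For brevity, this sketch re-feeds the same CU tensor to each stage;
    a real encoder would split the CU and recurse into its sub-CUs.
    `threshold` mimics the complexity/RD trade-off: lowering it defers
    fewer CUs to the full RDO search (faster, more RD loss).
    """
    decisions = []
    for stage in stages:
        probs = stage(cu).softmax(dim=-1)  # (1, NUM_SPLIT_MODES)
        conf, mode = probs.max(dim=-1)
        decisions.append(int(mode))
        if int(mode) == 0:
            break  # non-split predicted: the QTMT recursion stops here
        if float(conf) < threshold:
            break  # low confidence: fall back to the encoder's RDO search
    return decisions

# Usage: three hypothetical decision depths on one 64x64 luma CU.
stages = [StageClassifier(in_ch=1) for _ in range(3)]
cu = torch.randn(1, 1, 64, 64)
print(predict_partition(cu, stages))
```

The scalar threshold above is the knob that trades complexity against RD performance; the paper's multi-threshold scheme generalizes this by tuning such thresholds per stage.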