Fast intra coding unit partition decision in H.266/FVC based on spatial features

Ting Lan Lin, Hui Yu Jiang, Jing Ya Huang, Pao Chi Chang

Research output: Contribution to journal › Article › peer-review

5 Scopus citations


With the development of technology, the demands on hardware equipment and user expectations of visual quality are gradually increasing. The Joint Video Exploration Team (JVET) has established the latest video compression standard, Future Video Coding (FVC). FVC adopts a QuadTree plus Binary Tree (QTBT) based Coding Unit (CU) structure, which not only removes the complex hierarchical structure of the CU, Prediction Unit (PU), and Transform Unit (TU), but also supports square and rectangular coding blocks adapted to the texture of the video content. Although the QTBT structure can provide superior coding performance, it significantly increases the encoding time, particularly in intra coding. Therefore, developing a fast intra CU partition decision algorithm is essential. In this paper, a fast CU partition decision algorithm for FVC intra coding based on spatial features is proposed. Different spatial features in the pixel domain are proposed for the binary tree and quadtree decision processes. Spatial features for the binary tree are employed for early skipping of the encoding process of CUs at a given binary tree depth and for early determination of the binary tree split mode. Spatial features for the quadtree are employed for early splitting or early termination of CUs at a given quadtree depth. Compared with JEM 5.0, the proposed method saves 23% of the encoding time on average with a slight increase of 0.62% in the Bjontegaard delta bitrate (BDBR).
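The early-decision idea described in the abstract can be sketched in a few lines. The sketch below is illustrative only, not the paper's actual method: it uses pixel-domain variance as a stand-in spatial feature for the quadtree decision and gradient energy for the binary tree split direction, with hypothetical thresholds (`t_terminate`, `t_split`) that the paper would instead derive from training data.

```python
import numpy as np

def block_variance(block):
    """Pixel-domain variance, a simple spatial (texture) feature."""
    return float(np.var(block))

def quadtree_decision(block, t_terminate=25.0, t_split=900.0):
    """Early quadtree decision from a spatial feature (hypothetical thresholds):
    smooth blocks terminate early, highly textured blocks split early, and
    anything in between falls back to the full rate-distortion (RD) search."""
    v = block_variance(block)
    if v < t_terminate:
        return "terminate"   # homogeneous: stop splitting at this depth
    if v > t_split:
        return "split"       # complex texture: split without a full RD check
    return "rd_search"       # ambiguous: run the normal RD optimization

def binary_tree_direction(block):
    """Early binary-tree split-mode hint: compare vertical vs horizontal
    gradient energy; stronger row-to-row change favors a horizontal split."""
    f = block.astype(np.float64)
    gy = np.abs(np.diff(f, axis=0)).sum()  # change between rows
    gx = np.abs(np.diff(f, axis=1)).sum()  # change between columns
    return "horizontal" if gy > gx else "vertical"
```

Blocks classified as `terminate` or `split` skip the exhaustive RD evaluation of the remaining partition candidates, which is where the reported encoding-time saving comes from.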

Original language: English
Pages (from-to): 493-510
Number of pages: 18
Journal: Journal of Real-Time Image Processing
Issue number: 3
State: Published - 1 Jun 2020


  • Fast algorithm
  • H.266/future video coding (FVC)
  • Intra coding
  • Quadtree plus binary tree (QTBT)
  • Spatial feature


