Deep Video Compression with Reduced Computational Complexity

University of Bristol

About

It has recently been demonstrated that spatial resolution adaptation can be integrated into video compression to improve overall coding performance, by spatially down-sampling the video before encoding and super-resolving it at the decoder. Significant improvements have been reported when convolutional neural networks (CNNs) are used to perform the resolution up-sampling. However, this approach incurs high computational complexity at the decoder due to the use of CNN-based super-resolution. In this project, a novel framework is proposed that supports flexible allocation of complexity between the encoder and decoder: it employs a CNN model for video down-sampling at the encoder and a Lanczos3 filter to reconstruct full resolution at the decoder.
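
As a rough illustration of this pipeline, the sketch below (in Python, using PyTorch and Pillow, neither of which is specified by the project) pairs a placeholder down-sampling CNN for the encoder side with Lanczos3 up-sampling for the decoder side. The network architecture and helper names are hypothetical and do not reproduce the trained model used in this work; Pillow's LANCZOS resampling filter uses a support of 3, i.e. Lanczos3.

import numpy as np
import torch
import torch.nn as nn
from PIL import Image

class DownsampleCNN(nn.Module):
    """Placeholder 2x down-sampling CNN (hypothetical, not the trained model)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, stride=2, padding=1),  # halves spatial resolution
        )

    def forward(self, x):
        return self.net(x)

def encoder_side_downsample(frame_rgb: np.ndarray, model: nn.Module) -> np.ndarray:
    """Apply the (placeholder) CNN down-sampler to one RGB frame before encoding."""
    x = torch.from_numpy(frame_rgb).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        y = model(x).clamp(0.0, 1.0)
    return (y.squeeze(0).permute(1, 2, 0).numpy() * 255.0).astype(np.uint8)

def decoder_side_upsample(frame_rgb: np.ndarray, full_width: int, full_height: int) -> np.ndarray:
    """Reconstruct full resolution with a Lanczos3 filter at the decoder."""
    img = Image.fromarray(frame_rgb)
    return np.asarray(img.resize((full_width, full_height), Image.Resampling.LANCZOS))

Because the decoder only applies a fixed interpolation filter rather than a CNN, the additional cost of the learned component is confined to the encoder side.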


Performance

The proposed method was integrated into the HEVC HM 16.20 reference software and evaluated on the JVET UHD test sequences using the All Intra configuration. The experimental results demonstrate the potential of the proposed approach, with significant bitrate savings (more than 10%) over the original HEVC HM, alongside reduced computational complexity at both the encoder (by 29%) and the decoder (by 10%).
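
To make the reported savings concrete, the hedged sketch below shows a standard Bjøntegaard delta-rate (BD-rate) calculation, the usual way such bitrate savings between two codecs are quantified. The rate/PSNR points in the usage example are illustrative placeholders, not results from this project.

import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Average bitrate difference (%) of the test codec versus the anchor."""
    lr_a, lr_t = np.log10(rate_anchor), np.log10(rate_test)
    # Fit cubic polynomials of log-rate as a function of quality (PSNR).
    p_a = np.polyfit(psnr_anchor, lr_a, 3)
    p_t = np.polyfit(psnr_test, lr_t, 3)
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    # Integrate both fits over the overlapping PSNR interval.
    int_a = np.polyval(np.polyint(p_a), hi) - np.polyval(np.polyint(p_a), lo)
    int_t = np.polyval(np.polyint(p_t), hi) - np.polyval(np.polyint(p_t), lo)
    avg_diff = (int_t - int_a) / (hi - lo)
    return (10 ** avg_diff - 1) * 100  # negative values indicate bitrate savings

# Illustrative four-QP rate (kbps) / PSNR (dB) points, not measured data:
print(bd_rate([1000, 2000, 4000, 8000], [34.0, 36.5, 39.0, 41.5],
              [900, 1750, 3500, 7000], [34.1, 36.6, 39.1, 41.6]))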

Citation

@inproceedings{ma2020video,
  title={Video compression with low complexity CNN-based spatial resolution adaptation},
  author={Ma, Di and Zhang, Fan and Bull, David R},
  booktitle={Applications of Digital Image Processing XLIII},
  volume={11510},
  pages={115100D},
  year={2020},
  organization={International Society for Optics and Photonics}
}

[paper]