Keywords
Pattern recognition
Frame (video)
Block (video coding)
Inpainting
Deep learning
Autoencoder
Adversarial network
Neural coding
Encoder
Residual
Authors
Jianping Lin, Dong Liu, Houqiang Li, Feng Wu
Source
Journal: Visual Communications and Image Processing
Date: 2018-12-01
Pages: 1-4
Citations: 10
Identifier
DOI:10.1109/vcip.2018.8698615
Abstract
Motion estimation and motion compensation are fundamental in video coding to remove the temporal redundancy between video frames. Current video coding schemes usually adopt block-based motion estimation and compensation using simple translational or affine motion models, which cannot efficiently characterize complex motions in natural video signals. In this paper, we propose a frame extrapolation method for motion estimation and compensation. Specifically, based on several previous frames, our method directly extrapolates the current frame using a trained deep network model. The deep network we adopt is a redesigned Video Coding oriented LAplacian Pyramid of Generative Adversarial Networks (VC-LAPGAN). The extrapolated frame is then used as an additional reference frame. Experimental results show that the VC-LAPGAN is capable of estimating and compensating for complex motions, and of extrapolating frames with high visual quality. Using the VC-LAPGAN, our method achieves an average 2.0% BD-rate reduction compared with High Efficiency Video Coding (HEVC) under the low-delay P configuration.
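The abstract names the network but does not reproduce its architecture. Below is a minimal PyTorch sketch of the coarse-to-fine, Laplacian-pyramid style of generator that LAPGAN-based methods build on. It is not the authors' VC-LAPGAN: the layer sizes, three pyramid levels, four luma-only input frames, and all module names are assumptions, and the GAN discriminators and training loop are omitted. It only illustrates how a pyramid generator extrapolates the next frame from previous ones, starting at the coarsest scale and letting each finer level add a learned residual to an upsampled coarser prediction.

```python
# Minimal sketch of a Laplacian-pyramid frame extrapolator (assumed
# architecture; not the paper's VC-LAPGAN, whose details are not given here).
import torch
import torch.nn as nn
import torch.nn.functional as F

class LevelGenerator(nn.Module):
    """One pyramid level: predicts the frame (coarsest level) or a residual
    that sharpens an upsampled coarser prediction (finer levels)."""
    def __init__(self, in_channels, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class PyramidExtrapolator(nn.Module):
    """Extrapolates the next frame from n_prev previous frames, coarse to fine."""
    def __init__(self, n_prev=4, n_levels=3):
        super().__init__()
        self.n_levels = n_levels
        # The coarsest level sees only the downsampled previous frames; each
        # finer level also sees the upsampled coarser prediction (+1 channel).
        self.levels = nn.ModuleList(
            [LevelGenerator(n_prev)] +
            [LevelGenerator(n_prev + 1) for _ in range(n_levels - 1)]
        )

    def forward(self, prev_frames):            # (B, n_prev, H, W), luma only
        # Build an input pyramid by repeated 2x downsampling.
        pyramid = [prev_frames]
        for _ in range(self.n_levels - 1):
            pyramid.append(F.avg_pool2d(pyramid[-1], 2))
        pyramid.reverse()                       # coarsest first
        pred = self.levels[0](pyramid[0])       # coarsest prediction
        for level, frames in zip(self.levels[1:], pyramid[1:]):
            up = F.interpolate(pred, scale_factor=2, mode='bilinear',
                               align_corners=False)
            residual = level(torch.cat([frames, up], dim=1))
            pred = up + residual                # refine with learned residual
        return pred                             # (B, 1, H, W) extrapolated frame

# Usage: extrapolate frame t from frames t-4..t-1; the result would then be
# handed to the codec as an extra reference picture (the HEVC integration is
# outside this sketch).
model = PyramidExtrapolator(n_prev=4, n_levels=3)
prev = torch.randn(1, 4, 64, 64)                # four previous luma frames
extra_ref = model(prev)
print(extra_ref.shape)                          # torch.Size([1, 1, 64, 64])
```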