Computer science
Decoding methods
Entropy coding
Artificial intelligence
Codec
Encoder
Conditional entropy
Data compression
Entropy (arrow of time)
Arithmetic coding
Image compression
Pattern recognition (psychology)
Algorithm
Context-adaptive binary arithmetic coding
Image processing
Image (mathematics)
Principle of maximum entropy
Physics
Operating system
Quantum mechanics
Computer hardware
Authors
Haisheng Fu, Feng Liang, Jie Liang, Zhenman Fang, Guohe Zhang, Jingning Han
Identifier
DOI:10.1109/dcc58796.2024.00025
Abstract
Recent advancements in deep learning-based image compression are notable. However, prevalent schemes that employ a serial context-adaptive entropy model to enhance rate-distortion (R-D) performance are markedly slow. Furthermore, the complexities of the encoding and decoding networks are substantially high, rendering them unsuitable for some practical applications. In this paper, we propose two techniques to balance the trade-off between complexity and performance. First, we introduce two branching coding networks to independently learn a low-resolution latent representation and a high-resolution latent representation of the input image, discriminatively representing its global and local information. Second, we utilize the high-resolution latent representation as conditional information for the low-resolution latent representation, furnishing it with global information and thereby helping reduce redundancy within the low-resolution information. We do not utilize any serial entropy models. Instead, we employ a parallel channel-wise auto-regressive entropy model for encoding and decoding the low-resolution and high-resolution latent representations. Experiments demonstrate that our method is approximately twice as fast in both encoding and decoding compared to the parallelizable checkerboard context model, and it also achieves a 1.2% improvement in R-D performance compared to state-of-the-art learned image compression schemes. Our method also outperforms classical image codecs, including H.266/VVC-intra (4:4:4), and some recent learned methods in rate-distortion performance, as validated by both PSNR and MS-SSIM metrics on the Kodak dataset.
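The key speed claim rests on the channel-wise auto-regressive entropy model: channels of the latent are split into groups that are coded sequentially, while every spatial position within a group is processed in parallel, so the number of sequential passes depends only on the group count, not on the spatial size. The toy numpy sketch below illustrates just that scheduling idea; it is not the authors' model — the `mean`-based parameter predictor stands in for a learned parameter network, and the group count and quantizer are illustrative assumptions.

```python
import numpy as np

def channelwise_ar_code(latent, num_groups=4):
    """Toy channel-wise auto-regressive coder (illustrative sketch).

    Channels are split into `num_groups` groups coded one after
    another; each group's distribution parameter is predicted from
    the already-decoded groups. Sequential steps = num_groups,
    independent of H*W, whereas a spatially serial context model
    would need H*W sequential steps.
    """
    groups = np.array_split(latent, num_groups, axis=0)
    decoded, steps = [], 0
    for g in groups:
        if decoded:
            # stand-in for a learned parameter network: predict this
            # group's mean from the previously decoded groups
            mu = np.concatenate(decoded, axis=0).mean()
        else:
            mu = 0.0
        symbols = np.round(g - mu)   # quantized residual (what would be entropy-coded)
        decoded.append(symbols + mu) # decoder mirrors the same prediction
        steps += 1
    return np.concatenate(decoded, axis=0), steps

# e.g. an 8-channel, 4x4 latent decoded in 4 sequential group passes
rec, steps = channelwise_ar_code(np.ones((8, 4, 4)), num_groups=4)
```

Under the same logic, conditioning the low-resolution branch on the high-resolution latent simply means the parameter predictor additionally receives the (already decoded) high-resolution groups as input.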