Computer science
Artificial intelligence
Convolutional neural network
Image resolution
Light field
Margin (machine learning)
Computer vision
Pattern recognition (psychology)
Algorithm
Machine learning
Authors
Yunlong Wang, Fei Liu, Kunbo Zhang, Guangqi Hou, Zhenan Sun, Tieniu Tan
Source
Journal: IEEE Transactions on Image Processing
[Institute of Electrical and Electronics Engineers]
Date: 2018-09-01
Volume/Issue: 27 (9): 4274-4286
Citations: 130
Identifier
DOI: 10.1109/tip.2018.2834819
Abstract
The low spatial resolution of light-field images poses significant difficulties in exploiting their advantages. To reduce the dependency on accurate depth or disparity information as priors for light-field image super-resolution, we propose an implicit multi-scale fusion scheme that accumulates contextual information from multiple scales for super-resolution reconstruction. This fusion scheme is then incorporated into a bidirectional recurrent convolutional neural network, which iteratively models spatial relations between horizontally or vertically adjacent sub-aperture images of light-field data. Within the network, the recurrent convolutions are modified to be more effective and flexible in modeling the spatial correlations between neighboring views. A horizontal sub-network and a vertical sub-network with the same structure are ensembled for the final output via stacked generalization. Experimental results on synthetic and real-world data sets demonstrate that the proposed method outperforms other state-of-the-art methods by a large margin in peak signal-to-noise ratio and gray-scale structural similarity indexes, and also achieves superior quality for the human visual system. Furthermore, the proposed method can enhance the performance of light-field applications such as depth estimation.
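The core idea in the abstract, sweeping a recurrent convolution forward and backward along a row (or column) of adjacent sub-aperture views and fusing the two passes, can be illustrated with a minimal NumPy sketch. This is not the authors' LFNet implementation: the filter kernels `k_in` and `k_rec`, the `tanh` activation, and the simple averaging fusion are all illustrative assumptions standing in for learned components.

```python
import numpy as np

def filter2d_same(img, kernel):
    # Naive "same"-size 2-D cross-correlation with zero padding;
    # a stand-in for a learned convolutional layer.
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def bidirectional_recurrent_fusion(views, k_in, k_rec):
    """Sweep a row of sub-aperture views forward and backward,
    carrying a hidden state between neighboring views, then
    average the two passes (a simple stand-in for the paper's
    learned fusion / stacked generalization)."""
    def sweep(seq):
        h = np.zeros(seq[0].shape, dtype=float)
        states = []
        for v in seq:
            # Hypothetical recurrence: input filter + recurrent filter.
            h = np.tanh(filter2d_same(v, k_in) + filter2d_same(h, k_rec))
            states.append(h)
        return states

    fwd = sweep(views)                  # left-to-right pass
    bwd = sweep(views[::-1])[::-1]      # right-to-left pass, re-aligned
    return [(f + b) / 2.0 for f, b in zip(fwd, bwd)]

# Usage: three 8x8 sub-aperture views from one row of the light field.
rng = np.random.default_rng(0)
views = [rng.standard_normal((8, 8)) for _ in range(3)]
k_in = np.full((3, 3), 1.0 / 9.0)      # illustrative averaging filters
k_rec = np.full((3, 3), 0.1 / 9.0)
fused = bidirectional_recurrent_fusion(views, k_in, k_rec)
```

In the paper this pattern is applied twice, once along the horizontal view axis and once along the vertical one, and the two sub-networks' outputs are combined by stacked generalization rather than plain averaging.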