RGB color model
Artificial intelligence
Computer science
Fuse (electrical)
Computer vision
Visibility
Filter (signal processing)
Translation (biology)
Artificial neural network
Channel (broadcasting)
Fusion
Pattern recognition (psychology)
Telecommunications
Engineering
Optics
Physics
Philosophy
Electrical engineering
Messenger RNA
Gene
Biochemistry
Linguistics
Chemistry
Authors
Longbin Yan, Xiuheng Wang, Min Zhao, Shumin Liu, Jie Chen
Identifier
DOI: 10.1109/vcip49819.2020.9301787
Abstract
Near-infrared (NIR) images provide spectral information beyond the visible light spectrum and are thus useful in many applications. However, single-channel NIR images contain less information per pixel than RGB images and lack visibility for human perception. Transforming NIR images into RGB images is necessary for further analysis and computer vision tasks. In this work, we propose a novel NIR-to-RGB translation method. It contains two sub-networks and a fusion operator. Specifically, a U-Net based neural network is used to learn the texture information, while a CycleGAN based neural network is adopted to extract the color information. Finally, a guided filter based fusion strategy is applied to fuse the outputs of these two neural networks. Experimental results show that our proposed method achieves superior NIR-to-RGB translation performance.
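The abstract does not spell out the fusion operator, so the following is only a minimal sketch of one plausible guided-filter fusion: a standard single-channel guided filter (He et al.) applied per RGB channel, assuming the U-Net output is used as the guidance image (supplying texture and edges) and the CycleGAN output as the filtering input (supplying color). The function names `guided_filter` and `fuse_outputs` and the parameters `radius` and `eps` are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter


def guided_filter(guide, src, radius=8, eps=1e-3):
    """Single-channel guided filter (He et al., 2010).

    guide, src: float arrays in [0, 1] with the same HxW shape.
    radius: box-filter window radius; eps: regularization term.
    """
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    corr_gg = uniform_filter(guide * guide, size)
    corr_gs = uniform_filter(guide * src, size)

    var_g = corr_gg - mean_g * mean_g      # local variance of the guide
    cov_gs = corr_gs - mean_g * mean_s     # local covariance guide/src

    a = cov_gs / (var_g + eps)             # linear coefficients per window
    b = mean_s - a * mean_g

    mean_a = uniform_filter(a, size)
    mean_b = uniform_filter(b, size)
    return mean_a * guide + mean_b


def fuse_outputs(texture_rgb, color_rgb, radius=8, eps=1e-3):
    """Hypothetical fusion of the two sub-network outputs (HxWx3, in [0, 1]):
    filter each channel of the CycleGAN (color) output with the corresponding
    channel of the U-Net (texture) output as guidance, so the fused image
    keeps the GAN's colors while following the U-Net's edges and detail."""
    fused = np.stack(
        [guided_filter(texture_rgb[..., c], color_rgb[..., c], radius, eps)
         for c in range(3)],
        axis=-1,
    )
    return np.clip(fused, 0.0, 1.0)
```

Under these assumptions, increasing `radius` lets more of the color network's low-frequency appearance dominate, while a small `eps` preserves edges taken from the texture network; the paper's actual fusion rule may weight the two branches differently.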