Keywords
Autoencoder, Computer science, Artificial intelligence, Image (mathematics), Representation (politics), Fusion, Exploit, Basis (linear algebra), RGB color model, Pattern recognition (psychology), Image fusion, Machine learning, Computer vision, Deep learning, Mathematics, Linguistics, Philosophy, Geometry, Computer security, Politics, Political science, Law
Authors
Fabian Duffhauss, Ngo Anh Vien, Hanna Ziesche, Gerhard Neumann
Identifier
DOI: 10.1007/978-3-031-19842-7_39
Abstract
Sensor fusion can significantly improve the performance of many computer vision tasks. However, traditional fusion approaches either are not data-driven, and thus can neither exploit prior knowledge nor find regularities in a given dataset, or are restricted to a single application. We overcome this shortcoming by presenting a novel deep hierarchical variational autoencoder called FusionVAE that can serve as a basis for many fusion tasks. Our approach is able to generate diverse image samples that are conditioned on multiple noisy, occluded, or only partially visible input images. We derive and optimize a variational lower bound for the conditional log-likelihood of FusionVAE. In order to assess the fusion capabilities of our model thoroughly, we created three novel datasets for image fusion based on popular computer vision datasets. In our experiments, we show that FusionVAE learns a representation of aggregated information that is relevant to fusion tasks. The results demonstrate that our approach outperforms traditional methods significantly. Furthermore, we present the advantages and disadvantages of different design choices.
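The core idea described in the abstract, conditioning a VAE on an aggregated representation of several degraded input images and training it with a conditional variational lower bound, can be illustrated with a minimal sketch. The standard single-level conditional ELBO has the form log p(y|X) >= E_{q(z|y,X)}[log p(y|z,X)] - KL(q(z|y,X) || p(z|X)); the paper's hierarchical bound refines this, but the sketch below uses the single-level version. All module names, layer sizes, the 32x32 image resolution, and the mean-pooling aggregation are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch (not the authors' code) of a conditional VAE that fuses a
# variable number of degraded input images via permutation-invariant
# aggregation, optimizing the standard single-level conditional ELBO:
#   log p(y|X) >= E_q[log p(y|z,X)] - KL(q(z|y,X) || p(z|X))
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionCVAE(nn.Module):
    def __init__(self, zdim=64):
        super().__init__()
        # Shared encoder applied to each conditioning image independently.
        self.img_enc = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.Flatten(), nn.Linear(64 * 8 * 8, 128), nn.ReLU())
        self.prior = nn.Linear(128, 2 * zdim)       # p(z | X)
        self.post = nn.Linear(2 * 128, 2 * zdim)    # q(z | y, X)
        self.dec = nn.Sequential(                   # p(y | z, X)
            nn.Linear(zdim + 128, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1))

    def aggregate(self, xs):
        # xs: (B, N, 3, 32, 32). Mean-pooling over the N observations makes
        # the conditioning invariant to their order and number.
        B, N = xs.shape[:2]
        h = self.img_enc(xs.flatten(0, 1)).view(B, N, -1)
        return h.mean(dim=1)

    def forward(self, xs, y):
        c = self.aggregate(xs)                           # fused condition
        mu_p, logvar_p = self.prior(c).chunk(2, dim=-1)  # conditional prior
        h_y = self.img_enc(y)                            # encode target image
        mu_q, logvar_q = self.post(torch.cat([h_y, c], -1)).chunk(2, -1)
        z = mu_q + torch.randn_like(mu_q) * (0.5 * logvar_q).exp()
        y_hat = self.dec(torch.cat([z, c], dim=-1))
        # Closed-form KL between two diagonal Gaussians.
        kl = 0.5 * (logvar_p - logvar_q
                    + ((mu_q - mu_p) ** 2 + logvar_q.exp()) / logvar_p.exp()
                    - 1).sum(-1).mean()
        rec = F.mse_loss(y_hat, y)  # stand-in for the exact log-likelihood
        return rec + kl             # negative conditional ELBO (up to constants)
```

Averaging the per-image features keeps the conditioning permutation-invariant and lets the model accept any number of input views, which matches the setting described above of fusing a variable set of noisy, occluded, or partially visible observations.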