Computer science
Satellite
Task (project management)
Matching (statistics)
View synthesis
Feature (linguistics)
Artificial intelligence
Image (mathematics)
Computer vision
Deep learning
Architecture
Information retrieval
Geography
Aerospace engineering
Philosophy
Economics
Engineering
Archaeology
Management
Rendering (computer graphics)
Statistics
Linguistics
Mathematics
Authors
Aysim Toker,Qunjie Zhou,Maxim Maximov,Laura Leal-Taixé
Identifier
DOI:10.1109/cvpr46437.2021.00642
Abstract
The goal of cross-view image-based geo-localization is to determine the location of a given street-view image by matching it against a collection of geo-tagged satellite images. This task is notoriously challenging due to the drastic viewpoint and appearance differences between the two domains. We show that we can address this discrepancy explicitly by learning to synthesize realistic street views from satellite inputs. Following this observation, we propose a novel multi-task architecture in which image synthesis and retrieval are considered jointly. The rationale behind this is that we can bias our network to learn latent feature representations that are useful for retrieval if we utilize them to generate images across the two input domains. To the best of our knowledge, ours is the first approach that creates realistic street views from satellite images and localizes the corresponding query street view simultaneously in an end-to-end manner. In our experiments, we obtain state-of-the-art performance on the CVUSA and CVACT benchmarks. Finally, we show compelling qualitative results for satellite-to-street view synthesis.
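The multi-task idea described above, jointly optimizing retrieval and cross-domain synthesis, can be sketched as a weighted sum of two losses. This is a minimal illustrative sketch, not the paper's actual formulation: the specific loss choices (hinge triplet loss for retrieval, L1 reconstruction for synthesis) and the balancing weight `lam` are assumptions for exposition.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Retrieval term: pull the matching satellite embedding toward the
    street-view anchor, push a non-matching one away (hinge triplet loss).
    The paper's exact retrieval objective may differ."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

def synthesis_loss(generated, target):
    """Synthesis term: L1 reconstruction between the street view generated
    from the satellite input and the real street view (a stand-in for the
    full generative objective)."""
    return float(np.abs(generated - target).mean())

def joint_loss(anchor, positive, negative, generated, target, lam=1.0):
    """Joint multi-task objective; lam (assumed) balances the two tasks
    so that synthesis gradients shape retrieval-friendly features."""
    return triplet_loss(anchor, positive, negative) + lam * synthesis_loss(generated, target)

# Toy example with 2-D embeddings and 2-pixel "images":
street = np.array([0.0, 0.0])          # query street-view embedding
sat_match = np.array([0.0, 1.0])       # matching satellite embedding
sat_other = np.array([3.0, 0.0])       # non-matching satellite embedding
fake_view = np.array([1.0, 2.0])       # synthesized street view
real_view = np.array([1.0, 4.0])       # ground-truth street view
loss = joint_loss(street, sat_match, sat_other, fake_view, real_view)
```

In a real system both terms would be computed on mini-batches of network outputs and back-propagated through shared encoder features, which is what biases the latent representation toward both tasks.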