Modality
Mode
Pedestrian detection
Computer science
Visibility
Artificial intelligence
Sampling (signal processing)
Regression
Pattern recognition (psychology)
Computer vision
Machine learning
Data mining
Statistics
Mathematics
Filter (signal processing)
Sociology
Social science
Physics
Polymer chemistry
Chemistry
Optics
Authors
Napat Wanchaitanawong, Masayuki Tanaka, Takashi Shibata, Masatoshi Okutomi
Identifier
DOI: 10.23919/mva51890.2021.9511366
Abstract
The combined use of multiple modalities enables accurate pedestrian detection under poor lighting conditions by exploiting the high-visibility regions of each modality. The vital assumption for this combined use is that there is no, or only weak, misalignment between the two modalities. In practice, however, this assumption often breaks down. When it does, the positions of the bounding boxes no longer match between the two modalities, causing a significant drop in detection accuracy, especially in regions where the misalignment is large. In this paper, we propose a multi-modal Faster R-CNN that is robust against large misalignment. The keys are 1) modal-wise regression and 2) multi-modal IoU for mini-batch sampling. To deal with large misalignment, we perform bounding box regression for both the RPN and the detection head with both modalities. We also propose a new sampling strategy, called "multi-modal mini-batch sampling," that integrates the IoU of both modalities. Through experiments on real images, we demonstrate that the proposed method performs much better than state-of-the-art methods on data with large misalignment.
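The abstract does not specify how the multi-modal IoU integrates the per-modality overlaps, nor the exact sampling thresholds. The Python sketch below is only an illustration of the general idea: each proposal is scored against a separate ground-truth box per modality (gt_rgb and gt_thermal are hypothetical names; the two boxes differ when the modalities are misaligned), the two IoUs are combined by a simple mean (an assumed rule, not necessarily the paper's), and the combined score drives a conventional Faster R-CNN-style foreground/background split for mini-batch sampling.

def iou(box_a, box_b):
    """Intersection over union of two boxes in (x1, y1, x2, y2) format."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def multimodal_iou(proposal, gt_rgb, gt_thermal):
    """Assumed combination rule: average the IoU against each modality's
    ground-truth box (the two boxes differ under misalignment)."""
    return 0.5 * (iou(proposal, gt_rgb) + iou(proposal, gt_thermal))

def sample_minibatch(proposals, gt_rgb, gt_thermal,
                     pos_thresh=0.5, neg_thresh=0.3):
    """Split proposals into foreground/background using the combined score,
    in the spirit of Faster R-CNN mini-batch sampling; thresholds are the
    usual defaults, not values taken from the paper."""
    positives, negatives = [], []
    for p in proposals:
        score = multimodal_iou(p, gt_rgb, gt_thermal)
        if score >= pos_thresh:
            positives.append(p)
        elif score < neg_thresh:
            negatives.append(p)
    return positives, negatives

The point of combining both modalities is visible in this toy scoring: a proposal that overlaps the RGB ground truth well but the thermal ground truth poorly gets an intermediate score, whereas an IoU computed in a single modality would systematically over- or under-rate it when the misalignment is large.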