Computer science
Object (grammar)
Artificial intelligence
Baseline (sea)
Sentence
Image (mathematics)
Simple (philosophy)
Semantics (computer science)
Action (physics)
Generator (circuit theory)
Class (philosophy)
Key (lock)
Zero (linguistics)
Pattern recognition (psychology)
Projectile
Natural language processing
Machine learning
Computer vision
Organic chemistry
Chemistry
Programming language
Power (physics)
Philosophy
Geology
Physics
Epistemology
Oceanography
Quantum mechanics
Linguistics
Computer security
Authors
Hsuan-An Hsia, Che-Hsien Lin, Bo-Han Kung, Jhao-Ting Chen, Daniel Stanley Tan, Jun-Cheng Chen, Kai-Lung Hua
Identifier
DOI:10.1109/icassp43922.2022.9747841
Abstract
Contemporary deep learning-based object and action localization algorithms depend on large-scale annotated data. In real-world scenarios, however, unlabeled data extends far beyond the categories of publicly available datasets, so annotating all of it is prohibitively time- and labor-consuming, and training detectors on it demands substantial computational resources. To address these issues, we present a simple and reliable baseline that works directly for zero-shot text-guided object and action localization without introducing additional training costs. It combines Grad-CAM, the widely used class visual saliency map generator, with OpenAI's recently released Contrastive Language-Image Pre-Training (CLIP) model, which is trained contrastively on 400 million image-sentence pairs and thus encodes rich cross-modal information between text semantics and image appearance. Extensive experiments on the Open Images and HICO-DET datasets demonstrate the effectiveness of the proposed approach for text-guided unseen object and action localization in images.
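The abstract outlines a training-free recipe: score the agreement between a CLIP image embedding and a text prompt, then run Grad-CAM on that score to obtain a text-conditioned saliency map. The sketch below illustrates one way to wire this up; it is not the authors' released code, and the ResNet-50 backbone ("RN50"), the hooked layer (model.visual.layer4), the prompt, and the image path are illustrative assumptions.

```python
# A minimal sketch (not the authors' exact pipeline) of a Grad-CAM-style
# text-guided saliency map on top of OpenAI's CLIP, assuming the reference
# implementation from https://github.com/openai/CLIP and a ResNet-50
# backbone ("RN50") so a final convolutional feature map is available to hook.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("RN50", device=device)
model.float()  # keep weights in fp32 so gradients are well-behaved

feats, grads = {}, {}

def fwd_hook(module, inp, out):
    feats["act"] = out                                   # (1, C, H, W) conv features
    out.register_hook(lambda g: grads.update(grad=g))    # capture their gradient

hook = model.visual.layer4.register_forward_hook(fwd_hook)

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)  # hypothetical image path
text = clip.tokenize(["a photo of a dog"]).to(device)                  # hypothetical prompt

# Forward pass: cosine similarity between the image and the text prompt.
image_feat = model.encode_image(image)
text_feat = model.encode_text(text)
image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
score = (image_feat * text_feat).sum()

# Backward pass: gradient of the similarity w.r.t. the conv activations.
model.zero_grad()
score.backward()

# Grad-CAM: channel weights from globally averaged gradients, then a
# weighted, ReLU-ed sum over channels gives the spatial saliency map.
weights = grads["grad"].mean(dim=(2, 3), keepdim=True)    # (1, C, 1, 1)
cam = torch.relu((weights * feats["act"]).sum(dim=1))     # (1, H, W)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
hook.remove()
```

In such a setup, the resulting map can be upsampled to the input resolution and thresholded to yield a rough localization region for the queried object or action, which is the kind of zero-shot, text-guided output the paper evaluates.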