Computer science
Embedding
Visual reasoning
Artificial intelligence
Construct (Python library)
Space (punctuation)
Comprehension
Contrast (vision)
Natural language processing
Object (grammar)
Object detection
Pattern recognition (psychology)
Operating system
Programming language
Authors
Bingjie Xu, Yongkang Wong, Junnan Li, Qi Zhao, Mohan Kankanhalli
Identifier
DOI: 10.1109/CVPR.2019.00212
Abstract
The recent advances in instance-level detection tasks lay a strong foundation for automated visual scene understanding. However, the ability to fully comprehend a social scene still eludes us. In this work, we focus on detecting human-object interactions (HOIs) in images, an essential step towards deeper scene understanding. HOI detection aims to localize humans and objects, as well as to identify the complex interactions between them. As is innate to practical problems with a large label space, HOI categories exhibit a long-tail distribution, i.e., there exist some rare categories with very few training samples. Given the key observation that HOIs contain intrinsic semantic regularities despite being visually diverse, we tackle the challenge of long-tail HOI categories by modeling the underlying regularities among verbs and objects in HOIs, as well as general relationships. In particular, we construct a knowledge graph based on the ground-truth annotations of the training dataset and an external source. In contrast to direct knowledge incorporation, we address the necessity of dynamic image-specific knowledge retrieval through multi-modal learning, which leads to an enhanced semantic embedding space for HOI comprehension. The proposed method shows improved performance on the V-COCO and HICO-DET benchmarks, especially when predicting rare HOI categories.
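To make the two steps the abstract names more concrete, the sketch below illustrates (1) building a verb-object knowledge graph from ground-truth HOI triplets and (2) retrieving image-specific facts by scoring graph edges against a query embedding. This is a minimal illustration under our own assumptions, not the authors' implementation: the toy triplets, the random embeddings, and the `retrieve` helper are all hypothetical stand-ins for the paper's learned multi-modal components.

```python
# Minimal sketch (hypothetical, not the paper's code) of a verb-object
# knowledge graph plus image-conditioned retrieval over its edges.
from collections import defaultdict
import numpy as np

# (1) Knowledge graph from training annotations: nodes are verbs and objects;
# edge weights count how often a (verb, object) pair co-occurs in HOI labels.
hoi_triplets = [("ride", "bicycle"), ("ride", "horse"),
                ("hold", "bicycle"), ("hold", "cup"), ("ride", "bicycle")]
graph = defaultdict(float)
for verb, obj in hoi_triplets:
    graph[(verb, obj)] += 1.0
# Normalize edge weights into a prior so rare pairs keep nonzero mass.
total = sum(graph.values())
graph = {edge: w / total for edge, w in graph.items()}

# (2) Dynamic, image-specific retrieval: rank graph edges by compatibility
# with a visual query embedding (random vectors stand in for learned ones).
rng = np.random.default_rng(0)
dim = 8
node_emb = {name: rng.normal(size=dim)
            for name in {n for edge in graph for n in edge}}

def retrieve(query: np.ndarray, k: int = 2):
    """Return the k edges whose pooled node embeddings best match the query."""
    def score(edge):
        verb, obj = edge
        pooled = (node_emb[verb] + node_emb[obj]) / 2.0
        cos = pooled @ query / (np.linalg.norm(pooled) * np.linalg.norm(query))
        return cos * graph[edge]  # weight visual similarity by the edge prior
    return sorted(graph, key=score, reverse=True)[:k]

query = rng.normal(size=dim)  # stand-in for an image feature vector
print(retrieve(query))        # e.g. the two most compatible (verb, object) facts
```

In the paper this retrieval is learned jointly with the visual model, so the graph supplies semantic regularities that help exactly where training samples are scarce; the co-occurrence counts and cosine scoring above are only a simple proxy for that learned behavior.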