Sketch
Modality (human–computer interaction)
Computer science
Sketch recognition
Feature learning
Artificial intelligence
Algorithm
Gesture
Gesture recognition
Authors
Cuiqun Chen, Mang Ye, Meibin Qi, Bo Du
Identifier
DOI:10.1145/3503161.3547993
Abstract
Sketch-photo recognition is a cross-modal matching problem whose query sets are sketch images drawn by artists or amateurs. Due to the significant difference between the two modalities, it is challenging to extract discriminative modality-shared feature representations. Existing works focus on exploring modality-invariant features to discover a shared embedding space. However, they discard modality-specific cues, resulting in information loss and diminished discriminative power of the features. This paper proposes a novel asymmetrical disentanglement and dynamic synthesis learning method in a transformer framework (SketchTrans) that handles the modality discrepancy by combining modality-shared information with modality-specific information. Specifically, an asymmetrical disentanglement scheme is introduced to decompose photo features into sketch-relevant and sketch-irrelevant cues while preserving the original sketch structure. Using the sketch-irrelevant cues, we further translate the sketch modality component into a photo representation through knowledge transfer, obtaining cross-modality representations with information symmetry. Moreover, we propose a dynamically updatable auxiliary sketch (A-sketch) modality, generated from the photo modality, to guide the asymmetrical disentanglement within a single framework. Under a multi-modality joint learning framework, this auxiliary modality increases the diversity of training samples and narrows the cross-modality gap. We conduct extensive experiments on three fine-grained sketch-based retrieval datasets, i.e., PKU-Sketch, QMUL-ChairV2, and QMUL-ShoeV2, outperforming state-of-the-art methods under various metrics.
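To make the abstract's core idea concrete, the following is a minimal, hypothetical NumPy sketch of the asymmetrical disentanglement and translation steps. It is not the authors' implementation: the feature dimension, the linear projections `W_rel`, `W_irr`, and `W_trans`, and the function names are illustrative placeholders for the learned transformer components described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 128  # feature dimension (an assumption, not from the paper)

# Placeholder "disentanglers": in the paper these are learned; here random
# projections stand in so the data flow can be shown end to end.
W_rel = rng.standard_normal((D, D)) / np.sqrt(D)   # -> sketch-relevant cue
W_irr = rng.standard_normal((D, D)) / np.sqrt(D)   # -> sketch-irrelevant cue
# Placeholder "translator": maps a sketch feature plus a photo's
# sketch-irrelevant cue to a photo-style representation (knowledge transfer).
W_trans = rng.standard_normal((D, 2 * D)) / np.sqrt(2 * D)

def disentangle(photo_feat):
    """Decompose a photo feature into sketch-relevant / -irrelevant cues."""
    return W_rel @ photo_feat, W_irr @ photo_feat

def translate(sketch_feat, irrelevant_cue):
    """Synthesize a photo-style representation from the sketch feature and
    the sketch-irrelevant cue, restoring information symmetry."""
    return W_trans @ np.concatenate([sketch_feat, irrelevant_cue])

photo_feat = rng.standard_normal(D)
sketch_feat = rng.standard_normal(D)

relevant, irrelevant = disentangle(photo_feat)
photo_like = translate(sketch_feat, irrelevant)
# Cross-modal matching would then compare the sketch-relevant photo cue
# against sketch features, and the synthesized photo-like representation
# against photo features.
print(relevant.shape, irrelevant.shape, photo_like.shape)
```

The asymmetry in the paper refers to disentangling only the (information-richer) photo modality while leaving the sketch structure intact; the translation step then compensates the sketch side so both modalities carry comparable information.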