Computer science
Embedding
Image retrieval
Artificial intelligence
Point of interest
Clothing
Point (geometry)
Information retrieval
Similarity (geometry)
Pattern recognition (psychology)
Theoretical computer science
Image (mathematics)
Mathematics
Geometry
Archaeology
History
Authors
Antonio D’Innocente,Nikhil Garg,Yuan Zhang,Loris Bazzani,Michael Donoser
Identifier
DOI:10.1109/cvprw53098.2021.00435
Abstract
Fashion retrieval methods aim at learning a clothing-specific embedding space where images are ranked based on their global visual similarity with a given query. However, global embeddings struggle to capture localized fine-grained similarities between images because of aggregation operations. Our work deals with this problem by learning localized representations for fashion retrieval based on local interest points of prominent visual features specified by a user. We introduce a localized triplet loss function that compares samples based on corresponding patterns. We incorporate random local perturbation of the interest point as a key regularization technique to enforce local invariance of the visual representations. Due to the absence of fashion datasets suited for training localized representations, we introduce FashionLocalTriplets, a new high-quality dataset annotated by fashion specialists that contains triplets of women's dresses and interest points. The proposed model outperforms state-of-the-art global representations on FashionLocalTriplets.
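The two ingredients named in the abstract, a triplet loss over localized embeddings and a random perturbation of the user-specified interest point, can be sketched as follows. This is a minimal illustration with hypothetical function names and an assumed margin value; the paper's exact loss formulation and embedding network are not reproduced here.

```python
import numpy as np

def localized_triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet hinge loss applied to localized embeddings.

    anchor/positive/negative are embedding vectors extracted around
    corresponding interest points. The margin value 0.2 is an
    illustrative choice, not the paper's setting.
    """
    d_pos = np.sum((anchor - positive) ** 2)  # squared distance to positive
    d_neg = np.sum((anchor - negative) ** 2)  # squared distance to negative
    return max(0.0, d_pos - d_neg + margin)

def jitter_point(point, max_shift=4, rng=None):
    """Random local perturbation of an interest point (x, y).

    Shifting the point by a few pixels during training encourages the
    localized representation to be invariant to small localization noise.
    """
    if rng is None:
        rng = np.random.default_rng()
    return point + rng.integers(-max_shift, max_shift + 1, size=2)

# Example: a positive that matches the anchor locally yields zero loss.
a = np.zeros(8)
p = np.zeros(8)
n = np.ones(8)
print(localized_triplet_loss(a, p, n))  # 0.0: negative is far enough away
```

In practice the three embeddings would come from a CNN applied to image regions centered on the (jittered) interest points; the numpy vectors above simply stand in for those features.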