Computer science
Sequence (biology)
Benchmark (surveying)
Set (abstract data type)
Artificial intelligence
Coding (set theory)
Permutation (music)
Matching (statistics)
Optics (focusing)
Function (biology)
Similarity (geometry)
Natural language processing
Machine learning
Mathematics
Statistics
Image (mathematics)
Optics
Physics
Biology
Evolutionary biology
Genetics
Programming language
Geography
Acoustics
Geodesy
Source
Journal: Cornell University - arXiv
Date: 2022-10-26
Identifier
DOI: 10.48550/arxiv.2210.14523
Abstract
Extreme multi-label text classification (XMTC) is the task of finding the most relevant subset of labels from an extremely large-scale label collection. Recently, some deep learning models have achieved state-of-the-art results on XMTC tasks. These models commonly predict scores for all labels with a fully connected layer as the last layer of the model. However, such models cannot predict a relatively complete, variable-length label subset for each document, because they select positive labels with a fixed threshold or take the top-k labels in descending order of scores. A less popular family of deep learning models, sequence-to-sequence (Seq2Seq) models, focuses on predicting variable-length positive labels in sequence style. However, the labels in XMTC tasks are essentially an unordered set rather than an ordered sequence, so the default label order constrains Seq2Seq models during training. To address this limitation of Seq2Seq, we propose an autoregressive sequence-to-set model for XMTC tasks named OTSeq2Set. Our model generates predictions in a student-forcing scheme and is trained with a loss function based on bipartite matching, which enables permutation invariance. Meanwhile, we use the optimal transport distance as a measurement to force the model to focus on the closest labels in the semantic label space. Experiments show that OTSeq2Set outperforms other competitive baselines on four benchmark datasets. In particular, on the Wikipedia dataset with 31k labels, it outperforms the state-of-the-art Seq2Seq method by 16.34% in micro-F1 score. The code is available at https://github.com/caojie54/OTSeq2Set.
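The abstract's central technical idea is a permutation-invariant training loss obtained through bipartite matching between decoder steps and the unordered set of gold labels. The sketch below is a minimal, hypothetical illustration of that idea only, not the authors' OTSeq2Set implementation (see the linked repository for that): the function name `bipartite_matching_loss`, the tensor shapes, and the use of SciPy's Hungarian solver are assumptions, and the optimal-transport component mentioned in the abstract is omitted.

```python
# Hypothetical sketch of a permutation-invariant, bipartite-matching loss,
# in the spirit of the abstract. Not the authors' OTSeq2Set code.

import torch
import torch.nn.functional as F
from scipy.optimize import linear_sum_assignment


def bipartite_matching_loss(logits: torch.Tensor, target_labels: torch.Tensor) -> torch.Tensor:
    """Permutation-invariant negative log-likelihood between decoder steps and gold labels.

    logits:        (num_steps, vocab_size) scores produced by the decoder, one row per step.
    target_labels: (num_targets,) gold label ids, treated as an unordered set.
    Assumes num_steps >= num_targets for simplicity.
    """
    log_probs = F.log_softmax(logits, dim=-1)          # (num_steps, vocab_size)
    # Cost of assigning decoding step i to gold label j: negative log-likelihood of that label.
    cost = -log_probs[:, target_labels]                # (num_steps, num_targets)
    # The Hungarian algorithm finds the step-to-label assignment with minimal total cost,
    # so shuffling the gold labels leaves the loss unchanged (permutation invariance).
    rows, cols = linear_sum_assignment(cost.detach().cpu().numpy())
    matched = cost[torch.as_tensor(rows), torch.as_tensor(cols)]
    return matched.mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    num_steps, vocab_size = 5, 100
    fake_logits = torch.randn(num_steps, vocab_size, requires_grad=True)
    gold = torch.tensor([3, 17, 42])                   # unordered gold label set
    loss = bipartite_matching_loss(fake_logits, gold)
    loss.backward()
    print(float(loss))
```

Because the assignment is recomputed for whichever pairing of decoding steps and gold labels has the lowest total cost, the loss does not depend on the order in which the gold labels are listed, which is the permutation invariance the abstract attributes to the bipartite-matching objective.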