Keywords
Computer science, Natural language understanding, Social media, Leverage (statistics), Context (archaeology), Natural language processing, Benchmark (surveying), Language model, Artificial intelligence, Natural language, Information retrieval, World Wide Web, Paleontology, Geodesy, Biology, Geography
Authors
Hanzhuo Tan,Chunpu Xu,Jing Li,Yuqun Zhang,Zeyang Fang,Zeyu Chen,Baohua Lai
Identifier
DOI:10.1109/tnnls.2024.3384987
Abstract
Natural language understanding (NLU) is integral to various social media applications. However, existing NLU models rely heavily on context for semantic learning, so their performance suffers on short and noisy social media content. To address this issue, we leverage in-context learning (ICL), in which language models learn to make inferences by conditioning on a handful of demonstrations that enrich the context, and propose a novel hashtag-driven ICL (HICL) framework. Concretely, we pretrain a model that employs #hashtags (user-annotated topic labels) to drive BERT-based pretraining through contrastive learning. Our objective is to enable the model to incorporate topic-related semantic information, which allows it to retrieve topic-related posts that enrich the context and thus enhance social media NLU on noisy input. To further integrate the retrieved context with the source text, we employ a gradient-based method to identify trigger terms that are useful for fusing information from the two sources. For empirical studies, we collected 45M tweets to set up an in-context NLU benchmark, and experimental results on seven downstream tasks show that HICL substantially advances the previous state-of-the-art results. Furthermore, extensive analysis shows the following: 1) combining the source input with a top-retrieved post is more effective than using semantically similar posts, and 2) trigger words substantially help in merging context from the source and retrieved posts.
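The abstract describes two mechanisms: BERT-based contrastive pretraining driven by shared #hashtags, and retrieval of topic-related posts to enrich short inputs. Below is a minimal, hypothetical sketch of the first mechanism, not the authors' released code: tweets sharing a hashtag are treated as positive pairs for an in-batch InfoNCE loss over pooled BERT embeddings. The model name, batch construction, and temperature are illustrative assumptions.

```python
# Hypothetical sketch of hashtag-driven contrastive pretraining (not the
# authors' code): posts sharing a #hashtag form positive pairs; all other
# posts in the batch act as in-batch negatives (standard InfoNCE).
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # assumed backbone
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed(posts):
    # Mean-pool token embeddings into one vector per post.
    batch = tokenizer(posts, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state           # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)          # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)           # (B, H)

def hashtag_contrastive_loss(anchors, positives, temperature=0.05):
    # anchors[i] and positives[i] share a hashtag; the diagonal of the
    # similarity matrix holds the positive pairs.
    a = F.normalize(embed(anchors), dim=-1)
    p = F.normalize(embed(positives), dim=-1)
    logits = a @ p.T / temperature                        # (B, B) similarities
    labels = torch.arange(len(anchors))                   # diagonal = positives
    return F.cross_entropy(logits, labels)

# Toy batch: each pair of tweets shares a topic hashtag (hashtags are stripped
# from the text so the encoder must learn topical similarity from content).
anchors = ["new phone camera is unreal", "rain delayed the match again"]
positives = ["low-light shots on this phone wow", "pitch is soaked, start pushed back"]
loss = hashtag_contrastive_loss(anchors, positives)
loss.backward()  # an optimizer step would follow in a real pretraining loop
```

At inference time, the same encoder could embed a noisy source post and retrieve its nearest pretraining tweet as enriched context to concatenate with the input; the paper's gradient-based trigger terms then guide how the two sources are fused.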