Computer science
Event (particle physics)
Social media
Context (archaeology)
Bridge (graph theory)
Reliability (semiconductor)
Class (philosophy)
Semantics (computer science)
Artificial intelligence
Data science
Information retrieval
World Wide Web
Internal medicine
Medicine
Paleontology
Power (physics)
Physics
Quantum mechanics
Biology
Programming language
Authors
Shubham Gupta, Nandini Saini, Suman Kundu, Debasis Das
Source
Journal: Cornell University - arXiv
Date: 2024-01-01
Identifier
DOI: 10.48550/arxiv.2401.06194
Abstract
The pervasive use of social media has made it an emerging source of real-time information (images, text, or both) for identifying various events. Despite rapid progress in image- and text-based event classification, state-of-the-art (SOTA) models struggle to bridge the semantic gap between image and text features because the two modalities are encoded inconsistently. Moreover, the black-box nature of these models fails to explain their outcomes, hindering trust in high-stakes situations such as disasters and pandemics. Additionally, the word limit imposed on social media posts can bias models toward specific events. To address these issues, we propose CrisisKAN, a novel Knowledge-infused and Explainable Multimodal Attention Network that combines images and text with external knowledge from Wikipedia to classify crisis events. To enrich the context-specific understanding of textual information, we integrate Wikipedia knowledge using a proposed wiki extraction algorithm. In addition, a guided cross-attention module is implemented to fill the semantic gap when integrating visual and textual data. To ensure reliability, we employ a model-specific explanation method, Gradient-weighted Class Activation Mapping (Grad-CAM), which provides a robust explanation of the proposed model's predictions. Comprehensive experiments on the CrisisMMD dataset yield in-depth analyses across various crisis-specific tasks and settings. CrisisKAN outperforms existing SOTA methodologies and offers a novel perspective on explainable multimodal event classification.
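As a rough illustration of the guided cross-attention idea mentioned in the abstract, the following is a minimal PyTorch sketch in which text tokens attend to image patches and a learned gate modulates how much cross-modal signal is fused in. The class name, dimensions, and gating form are assumptions for illustration only; they are not taken from the CrisisKAN implementation.

```python
import torch
import torch.nn as nn

class GuidedCrossAttention(nn.Module):
    """Illustrative guided cross-attention block (assumed design, not the
    paper's code): text features attend to image features, and a learned
    sigmoid gate decides how much visual evidence to let through."""

    def __init__(self, dim: int = 768, num_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.norm = nn.LayerNorm(dim)

    def forward(self, text_feats: torch.Tensor, image_feats: torch.Tensor) -> torch.Tensor:
        # text_feats:  (B, T, dim) token embeddings from a text encoder
        # image_feats: (B, P, dim) patch/region embeddings from an image encoder
        attended, _ = self.cross_attn(query=text_feats,
                                      key=image_feats,
                                      value=image_feats)
        # Gate controls, per text token, how much visual signal is admitted.
        g = self.gate(torch.cat([text_feats, attended], dim=-1))
        return self.norm(text_feats + g * attended)

# Usage with dummy tensors: 16 text tokens attending over 49 image patches.
fused = GuidedCrossAttention()(torch.randn(2, 16, 768), torch.randn(2, 49, 768))
print(fused.shape)  # torch.Size([2, 16, 768])
```

The gate lets the model suppress visual evidence when it is uninformative, which is one plausible way to "guide" cross-attention and narrow the semantic gap the abstract describes; the paper's actual module may differ in structure and detail.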