Rationalization (economics)
Computer science
Artificial intelligence
Data mining
Epistemology
Philosophy
Authors
Linan Yue,Qi Liu,Yichao Du,Li Wang,Yanqing An,Enhong Chen
Identifier
DOI:10.1109/tpami.2025.3592313
Abstract
The pursuit of model explainability has prompted selective rationalization (a.k.a. rationale extraction), which identifies important features (i.e., rationales) from the original input to support prediction results. Existing methods typically involve a cascaded approach with a selector responsible for extracting rationales from the input, followed by a predictor that makes predictions based on the selected rationales. However, these approaches often neglect the information contained in the non-rationales, underutilizing the input. Therefore, in our prior work, we introduced the Disentanglement-Augmented Rationale Extraction (DARE) method, which disentangles the input into rationale and non-rationale components and enhances rationale representations by minimizing the mutual information between them. While DARE demonstrates strong performance in rationalization, it may still rely on shortcuts in the training distribution, leading to unfaithful rationales. To this end, in this paper, we propose Faith-DARE, an extension of DARE that aims to extract more reliable rationales by mitigating shortcut dependencies. Specifically, we treat the non-rationale features identified by DARE as environments that are decorrelated from the predictions. By shuffling and recombining these environments with rationales, we generate counterfactual samples and identify invariant rationales that remain predictive across shifted distributions. Extensive experiments on graph and textual datasets validate the effectiveness of Faith-DARE. Code is available at https://github.com/yuelinan/DARE.
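The environment-shuffling step described above can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the token-list representation, and the batch-permutation strategy are all assumptions made for illustration. The idea shown is that each example keeps its rationale tokens while its non-rationale ("environment") tokens are replaced with those of another example in the batch, producing counterfactual samples.

```python
import random

def recombine_environments(inputs, masks, seed=0):
    """Illustrative sketch (not the paper's code): build counterfactual
    samples by pairing each example's rationale tokens with the
    non-rationale ("environment") tokens of another batch example.

    inputs: list of token lists; masks: list of 0/1 lists (1 = rationale).
    Returns a list of counterfactual token lists of the same lengths.
    """
    rng = random.Random(seed)
    # Permute batch indices so each example borrows another's environment.
    perm = list(range(len(inputs)))
    rng.shuffle(perm)
    counterfactuals = []
    for i, (tokens, mask) in enumerate(zip(inputs, masks)):
        donor_tokens, donor_mask = inputs[perm[i]], masks[perm[i]]
        # Environment tokens of the donor (its non-rationale positions).
        env = [t for t, m in zip(donor_tokens, donor_mask) if m == 0]
        cf = []
        for t, m in zip(tokens, mask):
            if m == 1:          # keep the original rationale token
                cf.append(t)
            elif env:           # substitute a donor environment token
                cf.append(env.pop(0))
            else:               # donor environment exhausted; keep original
                cf.append(t)
        counterfactuals.append(cf)
    return counterfactuals
```

A rationale is "invariant" in this setting if a predictor trained on such recombined samples still predicts correctly from the rationale tokens alone, regardless of which environment they were paired with.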