Spurious relationship
Computer science
Cognitive psychology
Causal model
Generalization
Artificial intelligence
Confounding
Intervention (counseling)
Machine learning
Psychology
Statistics
Mathematics
Psychiatry
Mathematical analysis
Authors
Xu Yang,Hanwang Zhang,Guo-Jun Qi,Jianfei Cai
Source
Journal: arXiv (Cornell University)
Date: 2021-01-01
Cited by: 1
Identifier
DOI:10.48550/arxiv.2103.03493
Abstract
We present a novel attention mechanism, Causal Attention (CATT), to remove the ever-elusive confounding effect in existing attention-based vision-language models. This effect causes a harmful bias that misleads the attention module to focus on spurious correlations in the training data, damaging model generalization. As the confounder is unobserved in general, we use the front-door adjustment to realize the causal intervention, which does not require any knowledge of the confounder. Specifically, CATT is implemented as a combination of 1) In-Sample Attention (IS-ATT) and 2) Cross-Sample Attention (CS-ATT), where the latter forcibly brings other samples into every IS-ATT, mimicking the causal intervention. CATT abides by the Q-K-V convention and can hence replace any attention module, such as top-down attention and self-attention in Transformers. CATT improves various popular attention-based vision-language models by considerable margins. In particular, we show that CATT has great potential in large-scale pre-training, e.g., it can make the lighter LXMERT~\cite{tan2019lxmert}, which uses less data and computational power, comparable to the heavier UNITER~\cite{chen2020uniter}. Code is published at \url{https://github.com/yangxuntu/catt}.
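The abstract describes CATT as standard Q-K-V attention augmented with a cross-sample term. The following is a minimal NumPy sketch of that idea, not the paper's implementation: `K_global`/`V_global` stand in for the cross-sample dictionary built from other training samples, and the simple additive fusion of the two attention outputs is an assumption for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Standard scaled dot-product attention (the Q-K-V convention).
    d = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d)) @ V

def causal_attention(Q, K, V, K_global, V_global):
    # IS-ATT: attend within the current sample, as usual.
    in_sample = attention(Q, K, V)
    # CS-ATT: attend over a dictionary drawn from *other* samples,
    # which is how the sketch mimics the front-door intervention.
    cross_sample = attention(Q, K_global, V_global)
    # Fuse both estimates (a plain sum here; the paper's exact
    # fusion may differ).
    return in_sample + cross_sample
```

Because `causal_attention` keeps the Q-K-V interface, it can slot in wherever a standard attention module (top-down or self-attention) is used, which is the drop-in property the abstract claims.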