Causal inference
Spurious relationship
Confounding
Outcome (game theory)
Mediation
Proxy (statistics)
Causal structure
Causal model
Observational study
Econometrics
Computer science
Marginal structural model
Inference
Causal analysis
Causality (physics)
Artificial intelligence
Machine learning
Statistics
Mathematics
Physics
Mathematical economics
Quantum mechanics
Political science
Law
Authors
Lu Cheng,Ruocheng Guo,Huan Liu
Identifiers
DOI:10.1145/3488560.3498407
Abstract
An important problem in causal inference is to break down the total effect of a treatment on an outcome into different causal pathways and to quantify the causal effect in each pathway. For instance, in causal fairness, the total effect of being a male employee (i.e., treatment) comprises its direct effect on annual income (i.e., outcome) and the indirect effect via the employee's occupation (i.e., mediator). Causal mediation analysis (CMA) is a formal statistical framework commonly used to reveal such underlying causal mechanisms. One major challenge of CMA in observational studies is handling confounders, variables that cause spurious causal relationships among treatment, mediator, and outcome. Conventional methods assume sequential ignorability, which implies that all confounders can be measured, an assumption that is often unverifiable in practice. This work aims to circumvent the stringent sequential ignorability assumption and account for hidden confounders. Drawing upon proxy strategies and recent advances in deep learning, we propose to simultaneously uncover the latent variables that characterize hidden confounders and estimate the causal effects. Empirical evaluations using both synthetic and semi-synthetic datasets validate the effectiveness of the proposed method. We further show the potential of our approach for causal fairness analysis.
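The direct/indirect decomposition that the abstract describes can be illustrated with a minimal sketch. This is not the paper's method (which handles hidden confounders via proxies and deep latent-variable models); it is the classic product-of-coefficients mediation estimate on synthetic data, valid only under linearity and no unmeasured confounding. All variable names and effect sizes below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Synthetic linear data with no hidden confounding:
# treatment T affects mediator M, and both affect outcome Y.
T = rng.binomial(1, 0.5, n).astype(float)
M = 2.0 * T + rng.normal(0.0, 1.0, n)            # true T -> M slope: 2.0
Y = 1.5 * T + 0.5 * M + rng.normal(0.0, 1.0, n)  # direct effect 1.5, M -> Y slope 0.5

# Step 1: regress M on T to get the T -> M slope (a).
a = np.polyfit(T, M, 1)[0]

# Step 2: regress Y on T and M jointly; the T coefficient is the
# direct effect, and the M coefficient (b) carries the mediated path.
X = np.column_stack([np.ones(n), T, M])
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
direct, b = coef[1], coef[2]

indirect = a * b           # mediated (indirect) effect, ~ 2.0 * 0.5 = 1.0
total = direct + indirect  # total effect, ~ 1.5 + 1.0 = 2.5
print(direct, indirect, total)
```

If a hidden confounder also drove both M and Y, these regressions would be biased, which is exactly the gap the abstract's proxy-based latent-variable approach targets.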