Healthcare
Workflow
Software deployment
Context (archaeology)
Artificial intelligence
Computer science
Perspective (graphics)
Healthcare system
Data science
Inequality
Machine learning
Mathematics
Economics
Economic growth
Paleontology
Mathematical analysis
Operating system
Biology
Database
Authors
Richard J. Chen,Tiffany Chen,Jana Lipková,Judy J. Wang,Drew F. K. Williamson,Ming Lu,Sharifa Sahai,Faisal Mahmood
Source
Journal: Cornell University - arXiv
Date: 2021-01-01
Citations: 15
Identifiers
DOI:10.48550/arxiv.2110.00603
Abstract
In the current development and deployment of many artificial intelligence (AI) systems in healthcare, algorithm fairness is a challenging problem in delivering equitable care. Recent evaluations of AI models stratified across race sub-populations have revealed inequalities in how patients are diagnosed, given treatments, and billed for healthcare costs. In this perspective article, we summarize the intersectional field of fairness in machine learning in the context of current issues in healthcare, and outline how algorithmic biases (e.g., image acquisition, genetic variation, intra-observer labeling variability) arise in current clinical workflows and lead to healthcare disparities. Lastly, we review emerging technologies for mitigating bias, namely federated learning, disentanglement, and model explainability, and their role in the development of AI-based software as a medical device (AI-SaMD).
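The abstract refers to evaluating AI models stratified across sub-populations to surface inequalities. As a rough illustration only (not code from the paper), the sketch below computes a per-group true positive rate and the resulting equal-opportunity gap on synthetic data; the group labels, sample sizes, and simulated miss rate are all hypothetical.

```python
# Minimal sketch, assuming binary labels and a single sensitive attribute.
# All data below is synthetic and hypothetical; this is not the paper's method.
import numpy as np

def true_positive_rate(y_true, y_pred):
    """TPR = TP / (TP + FN); returns NaN if the group has no positive labels."""
    positives = y_true == 1
    if positives.sum() == 0:
        return float("nan")
    return float((y_pred[positives] == 1).mean())

def stratified_tpr_gap(y_true, y_pred, groups):
    """Per-group TPR and the largest pairwise gap (an equal-opportunity gap)."""
    per_group = {
        g: true_positive_rate(y_true[groups == g], y_pred[groups == g])
        for g in np.unique(groups)
    }
    rates = [r for r in per_group.values() if not np.isnan(r)]
    gap = (max(rates) - min(rates)) if rates else float("nan")
    return per_group, gap

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 1000
    groups = rng.choice(["A", "B"], size=n)      # hypothetical subgroups
    y_true = rng.integers(0, 2, size=n)          # hypothetical ground-truth labels
    # Simulate a model that misses 30% of positives in group B only.
    y_pred = y_true.copy()
    miss = (groups == "B") & (y_true == 1) & (rng.random(n) < 0.3)
    y_pred[miss] = 0
    per_group, gap = stratified_tpr_gap(y_true, y_pred, groups)
    print(per_group, f"equal-opportunity gap = {gap:.2f}")
```

In this toy setup the aggregate accuracy can look acceptable while the stratified view exposes a sizeable gap between groups, which is the kind of disparity the subgroup evaluations described above are designed to reveal.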