Keywords
Computer science
Modality (human–computer interaction)
Artificial intelligence
Workload
Semantics (computer science)
Coronavirus disease 2019 (COVID-19)
Test (biology)
Natural language processing
Machine learning
Language model
Convolutional neural network
Deep learning
Disease
Medicine
Pathology
Paleontology
Infectious disease (medical specialty)
Biology
Programming language
Operating system
Authors
Fenglin Liu, Tingting Zhu, Xian Wang, Bohan Yang, Chenyu You, Chenyang Wang, Lei Lu, Zhangdaihong Liu, Yefeng Zheng, Xingming Sun, Yang Yang, Lei Clifton, David A. Clifton
Identifier
DOI:10.1038/s41746-023-00952-2
Abstract
Deep neural networks have been integrated throughout the clinical decision-making process, where they can improve the efficiency of diagnosis and alleviate the heavy workload of physicians. Since most neural networks are supervised, their performance depends heavily on the volume and quality of available labels. However, few such labels exist for rare diseases (e.g., new pandemics). Here we report a medical multimodal large language model (Med-MLLM) for radiograph representation learning, which can learn broad medical knowledge (e.g., image understanding, text semantics, and clinical phenotypes) from unlabelled data. As a result, when encountering a rare disease, our Med-MLLM can be rapidly deployed and easily adapted to it with limited labels. Furthermore, our model supports medical data across the visual modality (e.g., chest X-ray and CT) and the textual modality (e.g., medical reports and free-text clinical notes); it can therefore be used for clinical tasks that involve both visual and textual data. We demonstrate the effectiveness of Med-MLLM by showing how it would have performed through an "in replay" of the COVID-19 pandemic. In the retrospective setting, we test the model on early COVID-19 datasets; in the prospective setting, we test it on the new COVID-19 Omicron variant. The experiments cover 1) three kinds of input data; 2) three kinds of downstream tasks, namely disease reporting, diagnosis, and prognosis; 3) five COVID-19 datasets; and 4) three languages: English, Chinese, and Spanish. All experiments show that our model can provide accurate and robust COVID-19 decision support with little labelled data.
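The abstract describes learning joint image–text representations from unlabelled radiographs and reports. It does not state the pretraining objective, so as a hedged illustration only, the sketch below shows one common way such alignment is done: a symmetric contrastive (InfoNCE/CLIP-style) loss over a batch of paired image and text embeddings. All names (`clip_style_loss`, the toy embeddings, the temperature value) are assumptions for the example, not details from the paper.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Unit-normalize embeddings so dot products become cosine similarities.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def clip_style_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over paired image/text embeddings.

    Illustrative stand-in for self-supervised image-text alignment;
    the abstract does not specify Med-MLLM's actual objective.
    """
    img = l2_normalize(img_emb)
    txt = l2_normalize(txt_emb)
    logits = img @ txt.T / temperature       # pairwise cosine similarities
    labels = np.arange(len(logits))          # matched pairs lie on the diagonal

    def xent(lg):
        # Numerically stable cross-entropy with the diagonal as targets.
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # Average the image->text and text->image directions.
    return 0.5 * (xent(logits) + xent(logits.T))

# Toy batch: 4 paired (radiograph, report) embeddings of dimension 8.
rng = np.random.default_rng(0)
img_emb = rng.normal(size=(4, 8))
txt_emb = img_emb + 0.1 * rng.normal(size=(4, 8))  # reports roughly aligned with images
loss = clip_style_loss(img_emb, txt_emb)
print(float(loss))
```

Because the toy text embeddings are small perturbations of the image embeddings, the matched pairs dominate the similarity matrix and the loss is close to zero; mismatched batches would yield a much larger loss.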