Computer Science
Artificial Intelligence
Transformer
Representation Learning
Engineering
Authors
Hong-Yu Zhou,Yizhou Yu,Chengdi Wang,Shu Zhang,Yuanxu Gao,Jia Pan,Jun Shao,Guangming Lu,Kang Zhang,Weimin Li
Identifier
DOI:10.1038/s41551-023-01045-x
Abstract
During the diagnostic process, clinicians leverage multimodal information, such as the chief complaint, medical images and laboratory test results. Deep-learning models for aiding diagnosis have yet to meet this requirement of leveraging multimodal information. Here we report a transformer-based representation-learning model as a clinical diagnostic aid that processes multimodal input in a unified manner. Rather than learning modality-specific features, the model leverages embedding layers to convert images and unstructured and structured text into visual tokens and text tokens, and uses bidirectional blocks with intramodal and intermodal attention to learn holistic representations of radiographs, the unstructured chief complaint and clinical history, and structured clinical information such as laboratory test results and patient demographic information. The unified model outperformed an image-only model and non-unified multimodal diagnosis models in the identification of pulmonary disease (by 12% and 9%, respectively) and in the prediction of adverse clinical outcomes in patients with COVID-19 (by 29% and 7%, respectively). Unified multimodal transformer-based models may help streamline the triaging of patients and facilitate the clinical decision-making process.
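The abstract's central idea — embedding images and text into a shared token space and letting attention mix information both within and across modalities — can be sketched minimally. This is an illustrative NumPy toy, not the paper's architecture: all dimensions, projection matrices, and the single-head attention step are assumptions for demonstration only.

```python
import numpy as np

# Hypothetical sketch of unified multimodal tokenization: image patches
# and text tokens are projected into one shared embedding space, then a
# single self-attention step lets every token attend to every other,
# covering both intramodal and intermodal interactions at once.
# Shapes and layer sizes are illustrative, not from the paper.

rng = np.random.default_rng(0)
d = 32  # shared embedding dimension (assumed)

# "Visual tokens": 4 flattened 16x16 image patches projected to d dims.
patches = rng.normal(size=(4, 16 * 16))
W_img = rng.normal(size=(16 * 16, d)) / np.sqrt(16 * 16)
visual_tokens = patches @ W_img

# "Text tokens": 6 token ids looked up in a (hypothetical) embedding table.
vocab = rng.normal(size=(100, d))
text_tokens = vocab[[5, 17, 3, 42, 8, 99]]

# Unified sequence: both modalities are treated identically downstream.
x = np.concatenate([visual_tokens, text_tokens], axis=0)  # shape (10, d)

def self_attention(x, d):
    """Scaled dot-product self-attention over the full token sequence,
    so each token attends within and across modalities."""
    Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

out = self_attention(x, d)
print(out.shape)  # one holistic representation per token
```

The key design point the abstract emphasizes is that, unlike models with modality-specific feature extractors, nothing downstream of the embedding layers distinguishes visual tokens from text tokens.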