Computer Science
Artificial Intelligence
Machine Learning
Natural Language Processing
Authors
Sheng Zhang, Yanbo Xu, Naoto Usuyama, Jaspreet Bagga, Robert Tinn, Sam Preston, Rajesh P. N. Rao, Wei Mu, Naveen Valluri, Cliff Wong, Matthew P. Lungren, Tristan Naumann, Hoifung Poon
Source
Venue: arXiv (Cornell University)
Date: 2023-03-02
Cited by: 89
Identifier
DOI:10.48550/arxiv.2303.00915
Abstract
Biomedical data is inherently multimodal, comprising physical measurements and natural language narratives. A generalist biomedical AI model needs to simultaneously process different modalities of data, including text and images. Therefore, training an effective generalist biomedical model requires high-quality multimodal data, such as parallel image-text pairs. Here, we present PMC-15M, a novel dataset that is two orders of magnitude larger than existing biomedical multimodal datasets such as MIMIC-CXR, and spans a diverse range of biomedical image types. PMC-15M contains 15 million biomedical image-text pairs collected from 4.4 million scientific articles. Based on PMC-15M, we have pretrained BiomedCLIP, a multimodal foundation model, with domain-specific adaptations tailored to biomedical vision-language processing. We conducted extensive experiments and ablation studies on standard biomedical imaging tasks ranging from retrieval to classification to visual question answering (VQA). BiomedCLIP achieved new state-of-the-art results on a wide range of standard datasets, substantially outperforming prior approaches. Intriguingly, by large-scale pretraining on diverse biomedical image types, BiomedCLIP even outperforms state-of-the-art radiology-specific models such as BioViL on radiology tasks such as RSNA pneumonia detection. In summary, BiomedCLIP is a fully open-access foundation model that achieves state-of-the-art performance on various biomedical tasks, paving the way for transformative multimodal biomedical discovery and applications. We release our models at https://aka.ms/biomedclip to facilitate future research in multimodal biomedical AI.