Primer
Medical imaging
Computer science
Natural language processing
Medical physics
Data science
Artificial intelligence
Medicine
Chemistry
Organic chemistry
Authors
Tyler Bradshaw,Xin Tie,Joshua Warner,Junjie Hu,Quanzheng Li,Xiang Li
Identifier
DOI:10.2967/jnumed.124.268072
Abstract
Large language models (LLMs) are poised to have a disruptive impact on health care. Numerous studies have demonstrated promising applications of LLMs in medical imaging, and this number will grow as LLMs further evolve into large multimodal models (LMMs) capable of processing both text and images. Given the substantial roles that LLMs and LMMs will have in health care, it is important for physicians to understand the underlying principles of these technologies so they can use them more effectively and responsibly and help guide their development. This article explains the key concepts behind the development and application of LLMs, including token embeddings, transformer networks, self-supervised pretraining, fine-tuning, and others. It also describes the technical process of creating LMMs and discusses use cases for both LLMs and LMMs in medical imaging.
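Illustrative note (not from the article): the abstract names token embeddings and transformer networks as core LLM building blocks. The short Python sketch below, using hypothetical sizes and standard PyTorch modules, shows how token IDs are mapped to embedding vectors and passed through a transformer encoder layer to produce contextualized representations; it is a minimal assumption-laden example, not the authors' implementation.

# Minimal sketch: token embeddings feeding one transformer encoder layer.
# Vocabulary size, model width, and sequence length are hypothetical.
import torch
import torch.nn as nn

vocab_size, d_model = 32000, 256            # assumed sizes for illustration
embed = nn.Embedding(vocab_size, d_model)   # maps token IDs to dense vectors
encoder = nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True)

token_ids = torch.randint(0, vocab_size, (1, 16))  # one sequence of 16 token IDs
hidden = encoder(embed(token_ids))                 # contextualized token embeddings
print(hidden.shape)                                # torch.Size([1, 16, 256])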