Medicine
Optometry services
Question answering
Ophthalmology
Information retrieval
World Wide Web
Computer science
Authors
Xiaolan Chen, Ruoyu Chen, Pusheng Xu, X. Wan, Weiyi Zhang, Bingjie Yan, Xianwen Shang, Mingguang He, Danli Shi
Identifier
DOI:10.1136/bjo-2024-326097
Abstract
Ophthalmic practice involves the integration of diverse clinical data and interactive decision-making, posing challenges for traditional artificial intelligence (AI) systems. Visual question answering (VQA) addresses this by combining computer vision and natural language processing to interpret medical images through user-driven queries. Evolving from VQA, multimodal AI agents enable continuous dialogue, tool use and context-aware clinical decision support. This review explores recent developments in ophthalmic conversational AI, spanning theoretical advances and practical implementations. We highlight the transformative role of large language models (LLMs) in improving reasoning, adaptability and task execution. However, key obstacles remain, including limited multimodal datasets, the absence of standardised evaluation protocols, and challenges in clinical integration. We outline these limitations and propose future research directions to support the development of robust, LLM-driven AI systems. Realising their full potential will depend on close collaboration between AI researchers and the ophthalmic community.