Topics
Diversity (politics)
Reinforcement learning
Reinforcement
Quality (philosophy)
Computer science
Medicine
Medical physics
Artificial intelligence
Psychology
Physics
Social psychology
Sociology
Anthropology
Quantum mechanics
Authors
Daniel Parres, Alberto Albiol, Roberto Paredes
Source
Journal: Bioengineering (Multidisciplinary Digital Publishing Institute)
Date: 2024-04-03
Volume/Issue: 11 (4): 351-351
Citations: 6
Identifier
DOI: 10.3390/bioengineering11040351
Abstract
Deep learning is revolutionizing radiology report generation (RRG) through the adoption of vision encoder–decoder (VED) frameworks, which transform radiographs into detailed medical reports. Traditional methods, however, often generate reports of limited diversity and struggle to generalize. Our research introduces reinforcement learning and text augmentation to tackle these issues, significantly improving report quality and variability. By employing RadGraph as a reward metric and introducing novel text augmentation strategies, we surpass existing benchmarks on metrics such as BLEU4, ROUGE-L, F1CheXbert, and RadGraph, setting new standards for report accuracy and diversity on the MIMIC-CXR and Open-i datasets. Our VED model achieves F1-scores of 66.2 for CheXbert and 37.8 for RadGraph on MIMIC-CXR, and 54.7 and 45.6, respectively, on Open-i. These results represent a significant advance in the RRG field. The findings and implementation of the proposed approach, aimed at enhancing diagnostic precision and radiological interpretation in clinical settings, are publicly available on GitHub to encourage further work in the field.
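The reinforcement-learning idea the abstract describes — using a clinically grounded score such as RadGraph F1 as the reward for the report generator — is commonly realized with self-critical sequence training (SCST), where the advantage of a sampled report is its reward minus the reward of the model's greedy decode. The sketch below illustrates only that reward/advantage computation; it is not the authors' implementation, and a simple token-set F1 stands in for the real RadGraph entity/relation F1, which requires the RadGraph model itself.

```python
def f1_reward(generated: str, reference: str) -> float:
    """Token-set F1 between a generated and a reference report.

    Hypothetical stand-in for the RadGraph F1 reward the paper uses;
    the real reward compares extracted clinical entities and relations,
    not surface tokens.
    """
    gen, ref = set(generated.split()), set(reference.split())
    if not gen or not ref:
        return 0.0
    tp = len(gen & ref)  # tokens shared by both reports
    if tp == 0:
        return 0.0
    precision, recall = tp / len(gen), tp / len(ref)
    return 2 * precision * recall / (precision + recall)


def scst_advantage(sampled: str, greedy: str, reference: str) -> float:
    """Self-critical advantage: reward(sample) - reward(greedy baseline).

    A positive advantage increases the log-likelihood of the sampled
    report during the policy-gradient update; a negative one suppresses it.
    """
    return f1_reward(sampled, reference) - f1_reward(greedy, reference)


# Toy illustration (made-up reports, not from the MIMIC-CXR dataset):
ref = "mild cardiomegaly no pleural effusion"
sample = "mild cardiomegaly with small pleural effusion"
greedy = "no acute findings"
adv = scst_advantage(sample, greedy, ref)  # > 0: reinforce the sample
```

In practice the advantage scales the negative log-likelihood of the sampled tokens, so the gradient pushes the decoder toward reports that score higher than its own greedy baseline under the chosen clinical metric.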