Report Generation on Chest X-rays using Deep Learning
Computer Science
Artificial Intelligence
Deep Learning
Authors
M. Ashok Kumar, Suresh Ganta, Gopiswara Rao Chinni
Identifier
DOI:10.1109/iciccs56967.2023.10142637
Abstract
Nowadays the chest X-ray is the most frequently used diagnostic procedure. A radiology specialist can interpret the detailed information a chest X-ray conveys about the heart and lungs, and radiology specialists are typically tasked with reviewing chest X-rays so that patients receive the right treatment. Because a doctor in a larger city may review more than 100 X-rays each day, and thorough inspection requires skilled doctors, obtaining a detailed medical diagnosis from such X-rays is frequently laborious and time-consuming. The goal of this project is to demonstrate deep learning techniques for autonomously extracting clinical information from X-ray images. Deep learning approaches have been combined with algorithms to tackle this difficult challenge, with promising performance. If such reports can be generated automatically by a trained model, a great deal of time and effort can be saved. In this project, deep learning techniques such as an encoder-decoder architecture and a pre-trained CheXNet model have been used. The CheXNet model extracts visual features from an X-ray; these features are passed to an encoder, which then sends its result to a decoder. Techniques of this kind are mainly used in image captioning, which aims to produce text from an image. Here, an LSTM is employed as the encoder, while a GRU and a Bi-GRU are used as the decoders; both are recurrent neural networks. The generated text report is evaluated by the BLEU score.
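The abstract evaluates generated reports with the BLEU score. As a minimal illustration of that metric (not the authors' implementation, whose tokenization and smoothing choices are not stated), the following pure-Python sketch computes sentence-level BLEU as the geometric mean of modified n-gram precisions multiplied by a brevity penalty; the function names and whitespace tokenization are assumptions for this example:

```python
import math
from collections import Counter

def ngram_counts(tokens, n):
    """Count all n-grams of length n in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU: geometric mean of modified n-gram precisions
    for n = 1..max_n, times a brevity penalty for short candidates.
    Tokenization is a simple whitespace split (an assumption here)."""
    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = ngram_counts(cand, n)
        ref_ngrams = ngram_counts(ref, n)
        # Clip each candidate n-gram count by its count in the reference.
        overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        total = max(sum(cand_ngrams.values()), 1)
        if overlap == 0:
            return 0.0  # no smoothing: any zero precision yields BLEU = 0
        log_precisions.append(math.log(overlap / total))
    # Brevity penalty: penalize candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(log_precisions) / max_n)
```

A report identical to its reference scores 1.0, a report sharing no n-grams scores 0.0, and partially matching reports fall in between; library implementations such as NLTK's `sentence_bleu` additionally offer smoothing for short sentences.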