Computer science
Sensor fusion
Artificial intelligence
Materials science
Polymer chemistry
Authors
Sachin Kumar,Pradeep Kumar Mallick,Olga Vorfolomeyeva
Identifier
DOI:10.1109/esic60604.2024.10481569
Abstract
High-quality X-rays are now available to diagnose lung diseases with the help of radiologists. However, the diagnostic process is time-consuming and depends on specialist availability in medical institutions. Patient information may include chest X-rays of varying quality, medical test results, doctors' notes and prescriptions, and medication details, among others. In this study, we present a model for classifying pulmonary diseases using multimodal data from patient clinical studies and radiographic images. During data preparation, various methods were used to generate artificial samples for both the images and the tabular laboratory results. We also proposed a method for establishing a correspondence between the generated modalities. The proposed multimodal model was implemented with a late fusion architecture. We conducted experiments on a pulmonary dataset with two modalities. The results show that multimodal data fusion with our model improved accuracy and other metrics compared with the image-only and clinical-data-only modalities. This strengthens the finding that multimodality gives the fusion model more insight to learn from and yields a more precise diagnosis than a single modality.
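The abstract describes a late fusion architecture, in which each modality (chest X-ray images and tabular clinical data) is classified by its own branch and the per-branch predictions are combined afterward. A minimal sketch of that idea is below, using a weighted average of per-branch softmax probabilities; the paper's actual branch networks, fusion rule, and weights are not given in the abstract, so the function names and the equal-weight default here are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax over class logits."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def late_fusion(image_logits, tabular_logits, w_image=0.5):
    """Late fusion: each branch predicts independently, then the
    class probabilities are combined (here, a weighted average)."""
    p_image = softmax(image_logits)
    p_tabular = softmax(tabular_logits)
    return w_image * p_image + (1.0 - w_image) * p_tabular

# Toy example: 2 patients, 3 hypothetical pulmonary disease classes.
# Logits stand in for the outputs of an image branch (e.g. a CNN over
# the X-ray) and a tabular branch (e.g. an MLP over lab results).
image_logits = np.array([[2.0, 0.5, 0.1],
                         [0.2, 1.5, 0.3]])
tabular_logits = np.array([[1.8, 0.2, 0.4],
                           [0.1, 0.9, 2.0]])

fused = late_fusion(image_logits, tabular_logits)
predictions = fused.argmax(axis=1)
```

The fused output remains a valid probability distribution per patient, and a disagreement between branches (as in the second patient) is resolved by the relative confidence of each branch rather than by either modality alone.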