Keywords: artificial intelligence; computer science; fusion; recurrent neural network; image (mathematics); image fusion; pattern recognition (psychology); computer vision; artificial neural network; philosophy; linguistics; identification
DOI:10.1177/18724981241299605
Abstract
Multimodal medical information fusion has emerged as a revolutionary method in intelligent healthcare, allowing comprehensive consideration of patient well-being and tailored treatment strategies. However, current approaches produce erroneous findings and struggle with early-stage brain tumour prediction in MRI images. In healthcare, accurate and reliable classification of brain images is essential for diagnosis and strategic decision-making. At present, semantic gaps are the main problem in brain tumour image classification: traditional machine learning models rely on handcrafted features, which are low-level relative to the high-level semantic concepts involved, and require labour-intensive feature extraction and classification pipelines. In recent years, substantial improvements have been made in deep learning for automated image classification. Recurrent Neural Networks (RNNs) and deep Convolutional Neural Networks (CNNs) have been particularly effective in multimodal image classification. Hence, this paper presents the Multimodal Fusion Model-assisted Convolutional Neural Network and Recurrent Neural Network (MFM-CNN-RNN) for automatic image classification in smart healthcare. The goal is to determine whether a fusion of CT and MRI brain scans is normal or abnormal. To enhance the accuracy of brain tumour image classification, the method exploits the multimodal information within CNNs and RNNs by extracting and fusing distinctive and complementary features from the different modalities. Within this framework, the CNN extracts the features, while the RNN models their dependencies and performs the classification. By design, the LSTM variant excels at such sequence analysis, processing data in sequential order.
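The pipeline sketched in the abstract — modality-specific feature extraction, feature-level fusion, then sequential classification by a recurrent unit — can be illustrated with a minimal pure-Python toy. This is not the authors' implementation: the `extract_features` function is a stand-in for a learned CNN branch, and the LSTM weights are fixed toy values rather than trained parameters.

```python
import math

def extract_features(image):
    """Stand-in for a CNN branch: summary statistics of pixel intensities."""
    n = len(image)
    mean = sum(image) / n
    var = sum((p - mean) ** 2 for p in image) / n
    return [mean, var]

def fuse(ct_feats, mri_feats):
    """Feature-level fusion by concatenating the two modality vectors."""
    return ct_feats + mri_feats

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, W):
    """One LSTM cell step over a scalar input (W holds toy scalar weights)."""
    i = sigmoid(W["wi"] * x + W["ui"] * h + W["bi"])    # input gate
    f = sigmoid(W["wf"] * x + W["uf"] * h + W["bf"])    # forget gate
    o = sigmoid(W["wo"] * x + W["uo"] * h + W["bo"])    # output gate
    g = math.tanh(W["wg"] * x + W["ug"] * h + W["bg"])  # candidate state
    c = f * c + i * g
    h = o * math.tanh(c)
    return h, c

def classify(fused, W, threshold=0.5):
    """Feed the fused feature vector through the LSTM element by element,
    then threshold the final hidden state: 1 = abnormal, 0 = normal."""
    h, c = 0.0, 0.0
    for x in fused:
        h, c = lstm_step(x, h, c, W)
    return 1 if sigmoid(h) >= threshold else 0

# Toy weights; in the paper's setting these would be learned end-to-end.
W = {k: 0.5 for k in ("wi", "ui", "bi", "wf", "uf", "bf",
                      "wo", "uo", "bo", "wg", "ug", "bg")}

ct = [0.1, 0.2, 0.15, 0.9]   # hypothetical CT intensity values
mri = [0.3, 0.35, 0.4, 0.2]  # hypothetical MRI intensity values
label = classify(fuse(extract_features(ct), extract_features(mri)), W)
print(label)
```

The design point the toy preserves is the division of labour: the convolutional branches reduce each modality to a feature vector, fusion happens at the feature level (here, simple concatenation), and only the recurrent unit sees the combined sequence and produces the normal/abnormal decision.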