Artificial intelligence
Breast cancer
Computer science
Feature (linguistics)
Pattern recognition (psychology)
Feature extraction
Cancer
Medicine
Internal medicine
Linguistics
Philosophy
Authors
Yiyao Liu,Jinyao Li,Cheng Zhao,Yongtao Zhang,Qian Chen,Jing Qin,Lei Dong,Tianfu Wang,Wei Jiang,Baiying Lei
Identifier
DOI:10.1109/tmi.2024.3485612
Abstract
Automatic and accurate classification of breast cancer in multimodal ultrasound images is crucial to improve diagnosis and treatment outcomes for patients and to save medical resources. Methodologically, the fusion of multimodal ultrasound images often encounters challenges such as misalignment, limited utilization of complementary information, poor interpretability in feature fusion, and imbalanced sample categories. To solve these problems, we propose a feature alignment mutual attention fusion method (FAMF-Net), which consists of a region awareness alignment (RAA) block, a mutual attention fusion (MAF) block, and a reinforcement learning-based dynamic optimization strategy (RDO). Specifically, RAA achieves region awareness through class activation mapping and performs a translation transformation to achieve feature alignment. MAF uses a mutual attention mechanism for interactive feature fusion, mining edge and color features separately from B-mode and shear wave elastography images, which enhances the complementarity of features and improves interpretability. Finally, RDO uses the distribution of samples and prediction probabilities during training as the state of reinforcement learning to dynamically optimize the weights of the loss function, thereby addressing class imbalance. Experimental results on our clinically obtained dataset demonstrate the effectiveness of the proposed method. Our code will be available at: https://github.com/Magnety/Multi_modal_Image.
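The abstract describes a mutual attention mechanism in which each modality's features attend to the other's before fusion. The paper's exact MAF block is not given here, so the following is only a minimal NumPy sketch of generic bidirectional cross-attention between two feature sets (standing in for B-mode and shear wave elastography features); all function names, shapes, and the concatenation-based fusion are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feat, kv_feat):
    # q_feat: (n_q, d) queries from one modality;
    # kv_feat: (n_kv, d) keys/values from the other modality.
    d = q_feat.shape[-1]
    scores = q_feat @ kv_feat.T / np.sqrt(d)          # (n_q, n_kv)
    return softmax(scores, axis=-1) @ kv_feat          # (n_q, d)

def mutual_attention_fusion(b_feat, swe_feat):
    # Each modality attends to the other (mutual attention),
    # then the attended features are fused by residual add + concat.
    # This fusion scheme is an assumption for illustration.
    b_from_swe = cross_attention(b_feat, swe_feat)     # B-mode queries SWE
    swe_from_b = cross_attention(swe_feat, b_feat)     # SWE queries B-mode
    return np.concatenate([b_feat + b_from_swe,
                           swe_feat + swe_from_b], axis=-1)

rng = np.random.default_rng(0)
b_feat = rng.standard_normal((16, 64))    # hypothetical B-mode tokens
swe_feat = rng.standard_normal((16, 64))  # hypothetical SWE tokens
fused = mutual_attention_fusion(b_feat, swe_feat)
print(fused.shape)  # → (16, 128)
```

In practice such a block would use learned query/key/value projections per modality; the sketch omits them to keep the bidirectional attention pattern itself visible.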