Interpretable Multi-Modal Image Registration Network Based on Disentangled Convolutional Sparse Coding

Keywords: Artificial intelligence · Computer science · Interpretability · Multi-modal · Feature extraction · Convolutional neural networks · Pattern recognition · Computer vision · RGB color model · Deep learning · Image registration · Sparse coding
Authors
Xin Deng, Enpeng Liu, Shengxi Li, Yiping Duan, Mai Xu
Source
Journal: IEEE Transactions on Image Processing (Institute of Electrical and Electronics Engineers)
Volume 32, pp. 1078-1091. Cited by: 58
Identifier
DOI:10.1109/tip.2023.3240024
Abstract

Multi-modal image registration aims to spatially align two images from different modalities so that their feature points match each other. Because they are captured by different sensors, images from different modalities often contain many distinct features, which makes it challenging to find accurate correspondences between them. With the success of deep learning, many deep networks have been proposed to align multi-modal images; however, most of them lack interpretability. In this paper, we first model the multi-modal image registration problem as a disentangled convolutional sparse coding (DCSC) model. In this model, the multi-modal features that are responsible for alignment (RA features) are well separated from the features that are not responsible for alignment (nRA features). By allowing only the RA features to participate in the deformation field prediction, we eliminate the interference of the nRA features and improve registration accuracy and efficiency. The optimization process of the DCSC model that separates the RA and nRA features is then unrolled into a deep network, namely the Interpretable Multi-modal Image Registration Network (InMIR-Net). To ensure accurate separation of RA and nRA features, we further design an accompanying guidance network (AG-Net) to supervise the extraction of RA features in InMIR-Net. The advantage of InMIR-Net is that it provides a universal framework for both rigid and non-rigid multi-modal image registration tasks. Extensive experimental results verify the effectiveness of our method on both rigid and non-rigid registration across various multi-modal image datasets, including RGB/depth images, RGB/near-infrared (NIR) images, RGB/multi-spectral images, T1/T2-weighted magnetic resonance (MR) images, and computed tomography (CT)/MR images. The code is available at https://github.com/lep990816/Interpretable-Multi-modal-Image-Registration.
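The abstract describes unrolling the optimization of a convolutional sparse coding model into network layers. As a rough illustration of the kind of iteration that gets unrolled, the sketch below runs plain ISTA (iterative soft-thresholding) for a standard L1-regularized sparse coding problem. It uses a dense random dictionary as a stand-in for the paper's convolutional dictionaries, and does not implement the RA/nRA disentanglement or the AG-Net supervision; all names here are illustrative, not from the paper's code.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the L1 norm: shrink each entry toward zero by t.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_sparse_code(D, y, lam=0.05, n_iter=500):
    """Solve min_z 0.5*||y - D z||^2 + lam*||z||_1 with ISTA.
    Unrolled networks replace the fixed D and threshold lam/L
    with learned per-layer parameters."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - y)           # gradient of the data term
        z = soft_threshold(z - grad / L, lam / L)
    return z

rng = np.random.default_rng(0)
D = rng.standard_normal((32, 64))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
z_true = np.zeros(64)
z_true[[3, 17, 42]] = [1.5, -2.0, 1.0]     # a 3-sparse ground-truth code
y = D @ z_true
z = ista_sparse_code(D, y)
```

In the DCSC setting, two such code groups (RA and nRA) would be estimated jointly, with only the RA codes feeding the deformation field predictor.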
