Computer science
Deep learning
Correctness
Artificial intelligence
Convergence (mathematics)
Deep neural network
Regularization (mathematics)
Black box
Machine learning
Iterative reconstruction
Stability (learning theory)
Artificial neural network
Management science
Data science
Algorithm
Economics
Economic growth
Authors
Subhadip Mukherjee, Andreas Hauptmann, Ozan Öktem, Marcelo Pereyra, Carola-Bibiane Schönlieb
Source
Journal: IEEE Signal Processing Magazine
[Institute of Electrical and Electronics Engineers]
Date: 2023-01-01
Volume/Issue: 40 (1): 164-182
Citations: 9
Identifier
DOI: 10.1109/msp.2022.3207451
Abstract
In recent years, deep learning has achieved remarkable empirical success for image reconstruction. This has catalyzed an ongoing quest for the precise characterization of the correctness and reliability of data-driven methods in critical use cases, for instance, in medical imaging. Notwithstanding the excellent performance and efficacy of deep learning-based methods, concerns have been raised regarding the approaches’ stability, or lack thereof, with serious practical implications. Significant advances have been made in recent years to unravel the inner workings of data-driven image recovery methods, challenging their widely perceived black-box nature. In this article, we specify relevant notions of convergence for data-driven image reconstruction, which forms the basis of a survey of learned methods with mathematically rigorous reconstruction guarantees. A highlighted example is the role of input-convex neural networks (ICNNs), which offer the possibility of combining the power of deep learning with classical convex regularization theory to devise provably convergent methods. This survey article is aimed both at methodological researchers seeking to advance the frontiers of our understanding of data-driven image reconstruction methods and at practitioners, by providing an accessible description of useful convergence concepts and by placing some of the existing empirical practices on a solid mathematical foundation.
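The abstract's central technical idea is that an input-convex neural network is convex as a function of its input, which is what lets it serve as a learned convex regularizer with classical convergence guarantees. The sketch below is a minimal, hypothetical illustration (layer sizes and initialization are illustrative, not from the paper): convexity in `x` follows from keeping the weights acting on previous-layer activations non-negative and using a convex, non-decreasing activation such as ReLU. It then checks the convexity inequality numerically along random segments.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

class ICNN:
    """Minimal input-convex network sketch (hypothetical sizes).

    Convexity of x -> f(x) rests on two constraints:
    - weights applied to previous activations (W2) are non-negative,
    - activations are convex and non-decreasing (ReLU).
    """
    def __init__(self, dim_x=4, hidden=8):
        # Direct paths from the input x: unconstrained weights.
        self.A1 = rng.standard_normal((hidden, dim_x))
        self.A2 = rng.standard_normal((1, dim_x))
        # Path from the previous layer: non-negative weights.
        self.W2 = np.abs(rng.standard_normal((1, hidden)))
        self.b1 = rng.standard_normal(hidden)
        self.b2 = rng.standard_normal(1)

    def __call__(self, x):
        z1 = relu(self.A1 @ x + self.b1)          # convex in x, componentwise
        out = self.W2 @ z1 + self.A2 @ x + self.b2  # non-neg. combo of convex terms + affine
        return float(out[0])

# Numerical check of f(lam*x + (1-lam)*y) <= lam*f(x) + (1-lam)*f(y).
f = ICNN()
ok = True
for _ in range(100):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    lam = rng.uniform()
    ok &= f(lam * x + (1 - lam) * y) <= lam * f(x) + (1 - lam) * f(y) + 1e-9
print(ok)
```

In a learned-regularization setting, such a network would play the role of the convex penalty term in a variational reconstruction objective; the non-negativity constraint is what the survey's convergence arguments hinge on, since an unconstrained network loses convexity and with it the classical guarantees.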