Keywords
Computer science; Convergence; Algorithm; Convolutional transform learning (CTL); Convex function; Convolutional neural network; Mathematical optimization; Artificial intelligence; Mathematics
Authors
Zhenni Li, Haoli Zhao, Yongcheng Guo, Zuyuan Yang, Shengli Xie
Source
Journal: IEEE Transactions on Cybernetics [Institute of Electrical and Electronics Engineers]
Date: 2022-10-01
Volume/Issue: 52 (10): 10785-10799
Citations: 6
Identifiers
DOI: 10.1109/tcyb.2021.3067352
Abstract
Convolutional transform learning (CTL), which learns filters by minimizing a data-fidelity loss function in an unsupervised way, is becoming pervasive because it keeps the best of both worlds: the benefit of unsupervised learning and the success of the convolutional neural network. There has been growing interest in developing efficient CTL algorithms. However, developing a convergent and accelerated CTL algorithm that simultaneously yields accurate representations with proper sparsity remains an open problem. This article presents a new CTL framework with a log regularizer that can not only obtain accurate representations but also yield strong sparsity. To efficiently address our nonconvex composite optimization, we propose to employ the proximal difference-of-convex algorithm (PDCA), which decomposes the nonconvex regularizer into the difference of two convex parts and then solves the resulting convex subproblems. Furthermore, we introduce an extrapolation technique to accelerate the algorithm, leading to a fast and efficient CTL algorithm. In particular, we provide a rigorous convergence analysis for the proposed algorithm under the accelerated PDCA. The experimental results demonstrate that the proposed algorithm converges more stably to desirable solutions with lower approximation error and stronger sparsity, and thus learns filters efficiently. Meanwhile, its convergence speed is faster than that of existing CTL algorithms.
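The abstract's core idea, a difference-of-convex (DC) split of a nonconvex log regularizer, optimized by a proximal step with FISTA-style extrapolation, can be sketched on a toy sparse least-squares problem. Everything below is an illustrative assumption: the penalty log(1 + |x|/eps), the DC split, and all parameter values are ours, and the paper learns convolutional filters rather than a plain sparse vector.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (soft thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def log_objective(A, b, x, lam, eps):
    # 0.5 * ||Ax - b||^2 + lam * sum(log(1 + |x|/eps))  (assumed model).
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.log1p(np.abs(x) / eps))

def pdca_e(A, b, lam=0.1, eps=0.5, n_iter=500):
    """Proximal DC algorithm with extrapolation (sketch).

    DC split of the regularizer, both parts convex:
        log(1 + |x|/eps) = |x|/eps - (|x|/eps - log(1 + |x|/eps)),
    where the subtracted part is differentiable with derivative
        sign(x) * (1/eps - 1/(eps + |x|)).
    """
    n = A.shape[1]
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    x = np.zeros(n)
    x_prev = x.copy()
    t, t_prev = 1.0, 1.0
    for _ in range(n_iter):
        beta = (t_prev - 1.0) / t          # FISTA-style extrapolation weight
        y = x + beta * (x - x_prev)        # extrapolated point
        grad = A.T @ (A @ y - b)           # gradient of the data-fidelity term at y
        # Subgradient of the subtracted convex part, evaluated at the current x.
        xi = lam * np.sign(x) * (1.0 / eps - 1.0 / (eps + np.abs(x)))
        x_prev = x
        # Convex subproblem: proximal step on the remaining l1-type part.
        x = soft_threshold(y - (grad - xi) / L, lam / (eps * L))
        t_prev, t = t, 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t ** 2))
    return x

# Tiny demo: recover a sparse vector from noisy random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80))
x_true = np.zeros(80)
x_true[[3, 17, 42, 60]] = [2.0, -1.5, 1.0, 3.0]
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = pdca_e(A, b, lam=0.2, eps=0.5)
```

The extrapolation weight `beta` follows the usual accelerated-gradient schedule; the key DC feature is that the concave part of the log penalty is linearized through the subgradient `xi`, so each iteration only needs a cheap soft-thresholding step.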