Keywords
Maximization
Algorithm
Expectation-maximization algorithm
Gaussian distribution
Mathematics
Least-squares function approximation
Mathematical optimization
Sparse approximation
Mixture model
Markov chain Monte Carlo
Computer science
Bayesian probability
Statistics
Maximum likelihood
Quantum mechanics
Physics
Estimator
Authors
Alican Nalci, Igor Fedorov, Maher Al-Shoukairi, Thomas T. Liu, Bhaskar D. Rao
Identifier
DOI: 10.1109/tsp.2018.2824286
Abstract
In this paper, we develop a Bayesian evidence maximization framework to solve the sparse non-negative least squares (S-NNLS) problem. We introduce a family of probability densities referred to as the rectified Gaussian scale mixture (R-GSM) to model the sparsity enforcing prior distribution for the solution. The R-GSM prior encompasses a variety of heavy-tailed densities such as the rectified Laplacian and rectified Student's t-distributions with a proper choice of the mixing density. We utilize the hierarchical representation induced by the R-GSM prior and develop an evidence maximization framework based on the expectation-maximization (EM) algorithm. Using the EM based method, we estimate the hyper-parameters and obtain a point estimate for the solution. We refer to the proposed method as rectified sparse Bayesian learning (R-SBL). We provide four R-SBL variants that offer a range of options for computational complexity and the quality of the E-step computation. These methods include the Markov chain Monte Carlo EM, linear minimum mean-square-error estimation, approximate message passing, and a diagonal approximation. Using numerical experiments, we show that the proposed R-SBL method outperforms existing S-NNLS solvers in terms of both signal and support recovery performance, and is also very robust against the structure of the design matrix.
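The following is a minimal sketch (not the authors' implementation) of the R-GSM construction the abstract describes: a non-negative variable is generated by first drawing a scale from a mixing density and then drawing a zero-mean rectified Gaussian with that scale. The parameterizations below are standard assumptions, not taken from the paper: a Gamma mixing density on the precision yields a rectified Student's t, and an exponential mixing density on the variance yields a rectified Laplacian. The zero-mean rectified Gaussian is simulated here as a half-normal, |sigma * Z|.

# Illustrative sketch of sampling from a rectified Gaussian scale mixture
# (R-GSM). Assumptions: Gamma-on-precision mixing -> rectified Student's t;
# exponential-on-variance mixing -> rectified Laplacian; the zero-mean
# rectified Gaussian is simulated as a half-normal via the absolute value.
import numpy as np

rng = np.random.default_rng(0)

def sample_rgsm(n, mixing="gamma", a=1.0, b=1.0):
    """Draw n samples from a rectified Gaussian scale mixture."""
    if mixing == "gamma":
        # Gamma(a, rate=b) on the precision: the mixture has a
        # Student's t tail with 2a degrees of freedom.
        precision = rng.gamma(shape=a, scale=1.0 / b, size=n)
        sigma = 1.0 / np.sqrt(precision)
    elif mixing == "exponential":
        # Exponential mixing on the variance: Laplacian tail.
        sigma = np.sqrt(rng.exponential(scale=a, size=n))
    else:
        raise ValueError("unknown mixing density")
    z = rng.standard_normal(n)
    return np.abs(sigma * z)  # rectify: support restricted to x >= 0

# The heavier tail of the Gamma-mixed prior shows up in the extreme samples.
x_t = sample_rgsm(100_000, mixing="gamma", a=0.5, b=0.5)
x_l = sample_rgsm(100_000, mixing="exponential", a=1.0)
print("rectified Student's t, 99.9th percentile:", np.quantile(x_t, 0.999))
print("rectified Laplacian,   99.9th percentile:", np.quantile(x_l, 0.999))

This heavy-tailed, sharply peaked shape near zero is what makes the R-GSM family a sparsity-enforcing prior: most draws are small, while occasional large draws accommodate the nonzero entries of a sparse non-negative solution.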