Mathematics
Gaussian distribution
Kernel (algebra)
Exponential function
Gaussian function
Algorithm
Random variable
Scaling
Constant (computer programming)
Applied mathematics
Upper and lower bounds
Hilbert space
Discrete mathematics
Pure mathematics
Mathematical analysis
Computer science
Geometry
Statistics
Physics
Quantum mechanics
Programming language
Authors
Toni Karvonen, Chris J. Oates, Mark Girolami
Abstract
The Gaussian kernel plays a central role in machine learning, uncertainty quantification and scattered data approximation, but has received relatively little attention from a numerical analysis standpoint. The basic problem of finding an algorithm for efficient numerical integration of functions reproduced by Gaussian kernels has not been fully solved. In this article we construct two classes of algorithms that use $N$ evaluations to integrate $d$-variate functions reproduced by Gaussian kernels and prove the exponential or super-algebraic decay of their worst-case errors. In contrast to earlier work, no constraints are placed on the length-scale parameter of the Gaussian kernel. The first class of algorithms is obtained via an appropriate scaling of the classical Gauss–Hermite rules. For these algorithms we derive lower and upper bounds on the worst-case error of the forms $\exp(-c_1 N^{1/d})\, N^{1/(4d)}$ and $\exp(-c_2 N^{1/d})\, N^{-1/(4d)}$, respectively, for positive constants $c_1 > c_2$. The second class of algorithms we construct is more flexible and uses worst-case optimal weights for points that may be taken as a nested sequence. For these algorithms we derive upper bounds of the form $\exp(-c_3 N^{1/(2d)})$ for a positive constant $c_3$.
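The first class of algorithms in the abstract rescales the classical Gauss–Hermite rules. As a rough one-dimensional illustration only (the `scale` parameter below is a hypothetical stand-in, not the specific scaling derived in the paper), a Gauss–Hermite rule from NumPy's `hermgauss` can be changed of variables to integrate a function against a Gaussian density:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def gauss_hermite_normal(f, n, scale=1.0):
    """Approximate the integral of f against the N(0, scale^2) density
    with an n-point Gauss-Hermite rule.

    hermgauss returns nodes t_i and weights w_i with
        sum_i w_i g(t_i)  ~=  int g(t) exp(-t^2) dt,
    so the substitution x = sqrt(2) * scale * t gives
        int f(x) N(x; 0, scale^2) dx
            ~=  (1/sqrt(pi)) * sum_i w_i f(sqrt(2) * scale * t_i).

    NOTE: `scale` is an illustrative free parameter; the paper derives
    a particular scaling to obtain the stated error rates.
    """
    t, w = hermgauss(n)
    return float(np.sum(w * f(np.sqrt(2.0) * scale * t)) / np.sqrt(np.pi))
```

As a sanity check, the rule with `n` points is exact for polynomials of degree up to `2n - 1`; for instance `gauss_hermite_normal(lambda x: x**2, 5)` recovers the variance of a standard normal, i.e. 1, up to floating-point error.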
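The second class of algorithms uses worst-case optimal weights for a given point set. For the Gaussian kernel with a standard Gaussian integration measure the kernel mean embedding has a closed form, so these weights and the associated worst-case error can be sketched directly. The point set and length-scale `ell` below are illustrative choices, not the nested point sequences constructed in the paper:

```python
import numpy as np

def optimal_weights_and_wce(points, ell=1.0):
    """Worst-case optimal quadrature weights for the Gaussian kernel
    k(x, y) = exp(-(x - y)^2 / (2 ell^2)) with respect to the N(0, 1)
    integration measure, plus the resulting worst-case error.

    Closed forms used (standard Gaussian convolution identities):
      kernel mean:   z(x) = ell / sqrt(ell^2 + 1) * exp(-x^2 / (2 (ell^2 + 1)))
      initial error: int int k dmu dmu = ell / sqrt(ell^2 + 2)
    """
    x = np.asarray(points, dtype=float)
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2.0 * ell**2))  # Gram matrix
    z = ell / np.sqrt(ell**2 + 1.0) * np.exp(-x**2 / (2.0 * (ell**2 + 1.0)))
    w = np.linalg.solve(K, z)            # optimal weights: w = K^{-1} z
    wce_sq = ell / np.sqrt(ell**2 + 2.0) - z @ w
    return w, float(np.sqrt(max(wce_sq, 0.0)))  # clip tiny negative round-off
```

Because the weights are worst-case optimal, the error is non-increasing under nested point sets: for example, refining 3 equispaced points on [-3, 3] to 9 (a superset) cannot increase the worst-case error, mirroring the abstract's use of nested sequences.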