Authors
Shuang Yu, Guolin Ke, Zhuoming Chen, Shuxin Zheng, Tie-Yan Liu
Source
Journal: Cornell University - arXiv
Date: 2022-07-20
Identifier
DOI: 10.48550/arxiv.2207.09682
Abstract
Recent years have witnessed significant success in Gradient Boosting Decision Trees (GBDT) for a wide range of machine learning applications. Generally, a consensus about GBDT's training algorithms is that gradients and statistics are computed with high-precision floating-point numbers. In this paper, we investigate an essential question that has been largely ignored by the previous literature: how many bits are needed to represent gradients in GBDT training? To answer it, we propose to quantize all the high-precision gradients in a very simple yet effective way within the GBDT training algorithm. Surprisingly, both our theoretical analysis and empirical studies show that gradients can be represented at quite low precision, e.g., 2 or 3 bits, without hurting performance. With low-precision gradients, most arithmetic operations in GBDT training can be replaced by integer operations of 8, 16, or 32 bits. Promisingly, these findings may pave the way for much more efficient GBDT training in several respects: (1) speeding up the computation of gradient statistics in histograms; (2) compressing the communication cost of high-precision statistical information during distributed training; (3) inspiring the use and development of hardware architectures that support low-precision computation well. Benchmarked on CPUs, GPUs, and distributed clusters, our simple quantization strategy achieves up to a 2$\times$ speedup over SOTA GBDT systems on extensive datasets, demonstrating the effectiveness and potential of low-precision GBDT training. The code will be released to the official repository of LightGBM.
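To make the idea concrete, here is a minimal sketch of low-bit gradient quantization followed by integer histogram accumulation. This is an illustrative assumption of how such a scheme can work (per-array scaling with stochastic rounding, int32 bin sums), not the paper's actual implementation; the function names `quantize_gradients` and `histogram_sums` are hypothetical.

```python
import numpy as np

def quantize_gradients(grad, bits=3, rng=None):
    """Quantize float gradients to low-bit signed integers.

    Uses one scale for the whole array and stochastic rounding,
    which keeps the quantized values unbiased in expectation.
    (Illustrative scheme, not the paper's exact method.)
    """
    rng = np.random.default_rng(0) if rng is None else rng
    levels = 2 ** (bits - 1) - 1           # e.g. bits=3 -> values in [-3, 3]
    scale = np.max(np.abs(grad)) / levels  # map largest |gradient| to `levels`
    scaled = grad / scale
    floor = np.floor(scaled)
    # round up with probability equal to the fractional part
    q = floor + (rng.random(grad.shape) < (scaled - floor))
    return q.astype(np.int8), scale

def histogram_sums(q_grad, bin_idx, n_bins):
    """Accumulate per-bin gradient sums in int32 arithmetic,
    which low-precision gradients make possible."""
    hist = np.zeros(n_bins, dtype=np.int32)
    np.add.at(hist, bin_idx, q_grad.astype(np.int32))  # unbuffered scatter-add
    return hist

rng = np.random.default_rng(42)
g = rng.normal(size=10_000)
q, scale = quantize_gradients(g, bits=3, rng=rng)
# dequantized mean stays close to the true mean thanks to stochastic rounding
bin_idx = rng.integers(0, 8, size=g.shape[0])  # stand-in for feature bin indices
hist = histogram_sums(q, bin_idx, n_bins=8)
```

Because each quantized gradient fits in a few bits, the per-bin sums stay within int32 range for realistic dataset sizes, and the scatter-add in `histogram_sums` needs no floating-point arithmetic at all; multiplying a bin sum by `scale` recovers an approximate floating-point gradient sum when needed.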