Keywords: Quantization (signal processing), Computer science, Artificial neural network, Convolutional neural network, Binary number, Algorithm, Computation, Deep neural network, Mean squared error, Artificial intelligence, Mathematics, Arithmetic, Statistics
Authors: Jinjie Zhang, Yong-Gui Zhou, Rayan Saab
Source
Journal: SIAM Journal on Mathematics of Data Science [Society for Industrial and Applied Mathematics]
Date: 2023-05-31
Volume/Issue: 5 (2): 373-399
Citations: 4
Abstract
While neural networks have been remarkably successful in a wide array of applications, implementing them in resource-constrained hardware remains an area of intense research. Replacing the weights of a neural network with quantized (e.g., 4-bit or binary) counterparts yields massive savings in computation cost, memory, and power consumption. To that end, we generalize a post-training neural network quantization method, GPFQ, that is based on a greedy path-following mechanism. Among other things, we propose modifications to promote sparsity of the weights, and rigorously analyze the associated error. Additionally, our error analysis expands the results of previous work on GPFQ to handle general quantization alphabets, showing that for quantizing a single-layer network, the relative square error essentially decays linearly in the number of weights, i.e., the level of overparametrization. Our result holds across a range of input distributions and for both fully connected and convolutional architectures, thereby also extending previous results. To empirically evaluate the method, we quantize several common architectures with few bits per weight, and test them on ImageNet, showing only minor loss of accuracy compared to unquantized models. We also demonstrate that standard modifications, such as bias correction and mixed precision quantization, further improve accuracy.
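The greedy path-following mechanism the abstract refers to quantizes a neuron's weights one at a time, choosing each quantized weight from the alphabet so as to best cancel the running residual between the analog and quantized pre-activations on the calibration data. The NumPy sketch below illustrates that idea for a single neuron under stated assumptions: the function name gpfq_quantize_neuron, the Gaussian toy data, and the uniform symmetric alphabet are illustrative choices, not the authors' reference implementation.

```python
import numpy as np

def gpfq_quantize_neuron(X, w, alphabet):
    """Quantize one neuron's weights with a greedy path-following pass.

    X        : (m, N) array, m calibration samples with N features
               (column X[:, t] is the data direction seen by weight t)
    w        : (N,) array of original weights
    alphabet : 1-D array of allowed quantized values
    Returns the quantized weights q and the final residual u = X @ (w - q).
    """
    m, N = X.shape
    q = np.zeros(N)
    u = np.zeros(m)  # running gap between analog and quantized pre-activations
    for t in range(N):
        x_t = X[:, t]
        norm2 = x_t @ x_t
        # The unconstrained minimizer of ||u + (w[t] - p) x_t||^2 over p;
        # if x_t is degenerate, fall back to rounding w[t] directly.
        target = w[t] + (x_t @ u) / norm2 if norm2 > 0 else w[t]
        q[t] = alphabet[np.argmin(np.abs(alphabet - target))]  # nearest element
        u += (w[t] - q[t]) * x_t  # update the residual along this path
    return q, u

# Toy check (hypothetical setup): the relative square error should shrink
# as N, the level of overparametrization, grows.
rng = np.random.default_rng(0)
m, N, bits = 256, 2048, 4
X = rng.standard_normal((m, N))
w = rng.standard_normal(N) / np.sqrt(N)
alphabet = np.linspace(-np.abs(w).max(), np.abs(w).max(), 2 ** bits)
q, u = gpfq_quantize_neuron(X, w, alphabet)
rel_err = (u @ u) / np.sum((X @ w) ** 2)
print(f"relative square error: {rel_err:.2e}")
```

In this toy setup, doubling N should roughly halve the relative square error, consistent with the linear decay in the number of weights stated in the abstract; the sparsity-promoting and bias-correction variants the paper analyzes are not reflected in this sketch.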