Convolution (computer science)
Bottleneck
Binary number
Computer science
Reduction (mathematics)
Computation
Convolutional neural network
Point (geometry)
State (computer science)
Algorithm
Artificial neural network
Artificial intelligence
Theoretical computer science
Mathematics
Arithmetic
Geometry
Embedded system
Authors
Chunlei Liu,Yuqi Han,Wenrui Ding,Baochang Zhang
Identifier
DOI:10.1109/tcsvt.2022.3166803
Abstract
In this paper, we find that the conventional convolution operation becomes the bottleneck for extremely efficient binary neural networks (BNNs). To address this issue, we open up a new direction by introducing a reshaped point-wise convolution (RPC) to replace the conventional one to build BNNs. Specifically, we conduct a point-wise convolution after rearranging the spatial information into depth, with which at least a $2.25\times$ computation reduction can be achieved. Such an efficient RPC allows us to explore a more powerful representational capacity of BNNs under a given computational complexity budget. Moreover, we propose a balanced activation (BA) to adjust the distribution of the scaled activations after binarization, which enables a significant performance improvement for BNNs. After integrating RPC and BA, the proposed network, dubbed RB-Net, strikes a good trade-off between accuracy and efficiency, achieving superior performance with lower computational cost than state-of-the-art BNN methods. Specifically, our RB-Net achieves 66.8% Top-1 accuracy with a ResNet-18 backbone on ImageNet, exceeding the state-of-the-art Real-to-Binary Net (65.4%) by 1.4% while achieving more than a $3\times$ reduction (52M vs. 165M) in computational complexity.
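The abstract's $2.25\times$ figure follows from replacing a $3\times3$ convolution with a $1\times1$ convolution applied after a space-to-depth rearrangement. The sketch below, a hedged illustration rather than the paper's implementation, shows a plain-Python space-to-depth and the multiply-accumulate (MAC) arithmetic; the tensor sizes (`H`, `W`, `C_in`, `C_out`) and the rearrangement factor `r = 2` are illustrative assumptions.

```python
def space_to_depth(x, r=2):
    """Rearrange an H x W x C tensor (nested lists) into an
    (H/r) x (W/r) x (C*r*r) tensor by folding each r x r spatial
    block into the channel dimension."""
    H, W = len(x), len(x[0])
    assert H % r == 0 and W % r == 0
    out = []
    for i in range(0, H, r):
        row = []
        for j in range(0, W, r):
            block = []
            for di in range(r):
                for dj in range(r):
                    block.extend(x[i + di][j + dj])
            row.append(block)
        out.append(row)
    return out

def conv_macs(in_ch, out_ch, h, w, k):
    """MAC count of a k x k convolution with stride 1 and 'same' padding."""
    return k * k * in_ch * out_ch * h * w

# Illustrative layer sizes (assumed, not from the paper).
H, W, C_in, C_out = 8, 8, 16, 32

# Conventional 3x3 convolution: C_in -> C_out feature maps at H x W.
baseline = conv_macs(C_in, C_out, H, W, 3)

# RPC-style replacement: space-to-depth (r=2) gives 4*C_in channels at
# (H/2) x (W/2); a 1x1 convolution to 4*C_out preserves total feature volume.
rpc = conv_macs(4 * C_in, 4 * C_out, H // 2, W // 2, 1)

print(baseline / rpc)  # -> 2.25
```

The ratio $9 C_{in} C_{out} H W \,/\, 4 C_{in} C_{out} H W = 2.25$ is independent of the layer sizes, which is consistent with the "at least $2.25\times$" claim.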