Computer science
Convolutional neural network
Redundancy (engineering)
Contextual image classification
Artificial intelligence
Quantization (signal processing)
Computation
Artificial neural network
Software deployment
Convolution (computer science)
Kernel (algebra)
Computer engineering
Machine learning
Data mining
Pattern recognition (psychology)
Image (mathematics)
Algorithm
Mathematics
Combinatorics
Operating system
Authors
Ying Liu, Peng Xiao, Jie Fang, Dengsheng Zhang
Identifier
DOI:10.1109/icnc-fskd59587.2023.10281072
Abstract
In recent years, deep neural networks have achieved tremendous success in image classification in both academic and industrial settings. However, the high hardware requirements imposed by their intensive and complex computations pose a challenge for deployment on low-storage devices. Lightweight networks offer a viable solution to this challenge. This paper provides a detailed review of recent lightweight image classification algorithms, which can be categorized into low-redundancy network model design and neural network compression algorithms. The former reduces network computation by replacing traditional convolution with efficient lightweight convolution, while the latter reduces redundancy in the network through methods such as network pruning, knowledge distillation, and parameter quantization. We summarize the experimental results of several classical models and algorithms on the ImageNet2012 and CIFAR-10 datasets, and analyze the characteristics, advantages, and disadvantages of each. Finally, future research directions for lightweight algorithms in the field of image classification are identified.
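To make the abstract's claim about "replacing traditional convolution with efficient lightweight convolution" concrete, the following is a minimal sketch of one representative lightweight operator, a depthwise separable convolution, compared against a standard convolution of the same input/output shape. The framework (PyTorch) and the layer sizes are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

# Illustrative channel and kernel sizes (assumed, not from the paper).
in_ch, out_ch, k = 64, 128, 3

# Standard convolution: in_ch * out_ch * k * k weights.
standard = nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=1, bias=False)

# Depthwise separable convolution: a per-channel (depthwise) k x k filter
# followed by a 1x1 (pointwise) convolution that mixes channels.
depthwise_separable = nn.Sequential(
    nn.Conv2d(in_ch, in_ch, kernel_size=k, padding=1, groups=in_ch, bias=False),
    nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
)

def param_count(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())

# Both variants map a (1, 64, 32, 32) input to a (1, 128, 32, 32) output.
x = torch.randn(1, in_ch, 32, 32)
assert standard(x).shape == depthwise_separable(x).shape

print("standard conv params:           ", param_count(standard))             # 73728
print("depthwise separable conv params:", param_count(depthwise_separable))  # 8768
```

Under these assumed sizes the separable variant uses roughly 8x fewer parameters (and proportionally fewer multiply-accumulates), which is the kind of computation reduction the low-redundancy design line of work relies on.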