Abstract
Artificial intelligence (AI), empowered by deep learning, has been profoundly transforming the world. However, the excessive size of these models remains a central obstacle that limits their broader utility. Modern neural networks commonly consist of millions of parameters, with foundation models extending to billions. The rapid expansion in model size introduces many challenges, including high training cost, sluggish inference speed, excessive energy consumption, and negative environmental implications such as increased CO2 emissions. Addressing these challenges necessitates the adoption of efficient deep learning (EDL). This dissertation focuses on two overarching approaches, network sparsity (a.k.a. pruning) and knowledge distillation, to enhance the efficiency of deep learning models in the context of computer vision. Network pruning eliminates redundant parameters in a model while preserving its performance. Knowledge distillation aims to enhance the performance of the target model, referred to as the "student", by leveraging guidance from a stronger model, known as the "teacher"; this improves the target model without reducing its size.

In this dissertation, I start with the background and motivation for more efficient deep learning models over the past several years in the context of the rise of foundation models. Then, the basic concepts, goals, and challenges of EDL are introduced along with the major sub-methods. After that, the major part of this dissertation is dedicated to elaborating on the proposed efficiency algorithms based on pruning and distillation in a variety of applications. For the pruning part, the dissertation first presents an effective pruning algorithm, GReg [27], for image classification, which taps into a growing regularization strategy. Then, in order to understand the real progress of network pruning, a fairness principle is introduced to fairly compare different pruning methods [32]. This investigation leads to the central role of network trainability in pruning, which has been largely overlooked by prior works. A trainability-preserving pruning approach, TPP [28], is then proposed to show the merits of maintaining trainability during pruning. A short survey [33] on an emerging pruning paradigm, pruning at initialization, follows to discuss its potential and its connections with conventional pruning after training. The GReg algorithm is further extended to a low-level vision task, single image super-resolution (SR), to explore the differences between applying pruning in low-level vision (SR) and in high-level vision (image classification). Three efficient SR approaches (ASSL [29], GASSL [30], SRP [34]) are introduced. For the distillation part, the dissertation first focuses on the interaction between knowledge distillation and data augmentation in image classification [35], with a proved proposition presented to rigorously characterize what defines the "goodness" of a data augmentation scheme in distillation. Next, the dissertation showcases how to employ distillation to significantly improve the inference efficiency of novel view synthesis in 3D vision; both static scenes [31] and dynamic scenes [36] are considered. Finally, SnapFusion [37] is presented to demonstrate a systematic efficiency optimization of deep models by jointly utilizing pruning and distillation, achieving an unprecedentedly fast text-to-image generation speed based on diffusion models.
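To make the two core techniques named above concrete, the sketch below shows a minimal PyTorch illustration of (1) one-shot L1-magnitude filter pruning and (2) the standard soft-label knowledge-distillation loss. This is a generic textbook-style sketch under assumed hyperparameters (keep ratio, temperature, loss weighting), not the dissertation's specific algorithms such as GReg, TPP, ASSL, or SnapFusion.

```python
# Illustrative sketch only (assumptions: PyTorch, a generic CNN layer;
# hyperparameters keep_ratio, T, alpha are placeholders, not values from the dissertation).
import torch
import torch.nn.functional as F

def magnitude_prune_conv(conv: torch.nn.Conv2d, keep_ratio: float = 0.5):
    """Zero out the output filters of `conv` with the smallest L1 norms (one-shot pruning)."""
    with torch.no_grad():
        l1 = conv.weight.abs().sum(dim=(1, 2, 3))      # one importance score per output filter
        n_keep = max(1, int(keep_ratio * l1.numel()))
        kept = torch.topk(l1, n_keep).indices
        mask = torch.zeros_like(l1)
        mask[kept] = 1.0
        conv.weight.mul_(mask.view(-1, 1, 1, 1))        # prune in place
    return mask                                          # reapply during fine-tuning to keep pruned weights at zero

def distillation_loss(student_logits, teacher_logits, labels, T: float = 4.0, alpha: float = 0.5):
    """Hinton-style KD: softened KL term (teacher -> student) plus hard cross-entropy term."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```

In practice, magnitude pruning is typically followed by fine-tuning to recover accuracy, and the temperature and loss weighting in distillation are tuned per task; the methods developed in this dissertation go beyond this baseline formulation.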
Finally, a comprehensive summary, along with takeaways and an outlook on future work, concludes the dissertation. Major takeaways include: (1) there is no panacea for efficient deep learning across all tasks; the solution is usually case-by-case; (2) there is a clear trend that efficiency solutions for future models (especially large models) will feature systematic optimization and co-design along many axes (e.g., hardware, system, and algorithm); (3) profiling is always a good starting point for understanding the problem so as to build the right efficiency portfolio.--Author's abstract