Computer science
Distributed computing
Elasticity
Workload
Resource allocation
Throughput
Idle
Context
Abstraction
Computer network
Operating system
Biology
Paleontology
Composite material
Philosophy
Materials science
Epistemology
Wireless
Authors
Mingzhen Li, Wencong Xiao, Hailong Yang, Biao Sun, Hanyu Zhao, Shiru Ren, Zhongzhi Luan, Xianyan Jia, Yi Liu, Yong Li, Wei Lin, Depei Qian
Identifier
DOI:10.1145/3581784.3607054
Abstract
Distributed synchronized GPU training is commonly used for deep learning. The resource constraint of using a fixed number of GPUs makes large-scale training jobs suffer from long queuing times for resource allocation and lowers cluster utilization. Adapting to resource elasticity can alleviate this, but it often introduces inconsistent model accuracy because the model training procedure cannot be decoupled from resource allocation. We propose EasyScale, an elastic training system that achieves consistent model accuracy under resource elasticity for both homogeneous and heterogeneous GPUs. EasyScale strictly preserves data-parallel training behaviors, carefully traces the consistency-relevant factors, and exploits deep learning characteristics for its EasyScaleThread abstraction and fast context switching. To utilize heterogeneous clusters, EasyScale dynamically assigns workers via intra-/inter-job schedulers, minimizing load imbalance and maximizing aggregated job throughput. Deployed in an online serving cluster, EasyScale enables training jobs to opportunistically utilize idle GPUs, improving overall cluster utilization by 62.1%.
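The abstract's central claim is that accuracy consistency comes from fixing the number of logical data-parallel workers while letting the number of physical GPUs vary. The sketch below is a minimal, CPU-only illustration of that idea, not the authors' implementation; all names (`NUM_LOGICAL_WORKERS`, `elastic_step`, `shard`) are hypothetical, and the real system additionally traces consistency-relevant factors such as RNG state and performs fast context switching between its EasyScaleThreads.

```python
# Minimal sketch (assumed, not EasyScale's actual code): the number of
# *logical* data-parallel workers is fixed, so the aggregated gradient each
# step is identical to fixed-resource training, while the number of
# *physical* devices hosting those workers may change between steps.
import torch

NUM_LOGICAL_WORKERS = 8   # fixed; defines the training semantics
GLOBAL_BATCH = 64         # global batch size, also fixed

def make_model():
    torch.manual_seed(0)  # identical initialization regardless of resources
    return torch.nn.Linear(16, 1)

def shard(batch, worker_id):
    """Deterministic per-worker data shard, independent of physical GPUs."""
    per_worker = GLOBAL_BATCH // NUM_LOGICAL_WORKERS
    return batch[worker_id * per_worker:(worker_id + 1) * per_worker]

def elastic_step(model, batch, num_physical_devices):
    """One synchronized step: logical workers are multiplexed (round-robin)
    onto however many physical devices are currently available."""
    grads = [torch.zeros_like(p) for p in model.parameters()]
    for worker_id in range(NUM_LOGICAL_WORKERS):
        device = worker_id % num_physical_devices  # notional placement only;
                                                   # this CPU sketch ignores it
        x = shard(batch, worker_id)                # the shard never changes
        loss = model(x).pow(2).mean()
        worker_grads = torch.autograd.grad(loss, tuple(model.parameters()))
        for g, wg in zip(grads, worker_grads):
            g += wg / NUM_LOGICAL_WORKERS          # same all-reduce average
    return grads

model = make_model()
torch.manual_seed(1)
batch = torch.randn(GLOBAL_BATCH, 16)

# Gradients are identical whether 2 or 4 "devices" are available this step.
g2 = elastic_step(model, batch, num_physical_devices=2)
g4 = elastic_step(model, batch, num_physical_devices=4)
print(all(torch.equal(a, b) for a, b in zip(g2, g4)))  # True
```

Because each logical worker always sees the same deterministic shard and the gradient average is taken over the fixed logical worker count, the resulting update does not depend on how workers are packed onto devices, which is the property the paper describes as consistent model accuracy under elasticity.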