Workload
Computer Science
Retraining
Machine Learning
Artificial Intelligence
Artificial Neural Network
Inference
Deep Learning
Performance Prediction
Simulation
Operating System
International Trade
Business
Authors
Kevin Assogba, Eduardo Lima, M. Mustafa Rafique, Minseok Kwon
Identifier
DOI: 10.1109/cluster52292.2023.00009
Abstract
Accurately predicting the training time of deep learning (DL) workloads is critical for optimizing the utilization of data centers and allocating the required cluster resources for completing critical model training tasks before a deadline. The state-of-the-art prediction models, e.g., Ernest and Cherrypick, treat DL workloads as black boxes, and require running the given DL job on a fraction of the dataset. Moreover, they require retraining their prediction models every time a change occurs in the given DL workload. This significantly limits the reusability of prediction models across DL workloads with different deep neural network (DNN) architectures. In this paper, we address this challenge and propose a novel approach where the prediction model is trained only once for a particular dataset type, e.g., ImageNet, thus completely avoiding tedious and costly retraining tasks for predicting the training time of new DL workloads. Our proposed approach, called PredictDDL, provides an end-to-end system for predicting the training time of DL models in distributed settings. PredictDDL leverages Graph HyperNetworks, a class of neural networks that takes computational graphs as input and produces vector representations of their DNNs. PredictDDL is the first prediction system that eliminates the need for retraining a performance prediction model for each new DL workload and maximizes the reuse of the prediction model by requiring a DL workload to be run only once to train the prediction model. Our extensive evaluation using representative workloads shows that PredictDDL achieves up to 9.8× lower average prediction error and 10.3× lower inference time compared to the state-of-the-art system, i.e., Ernest, on multiple DNN architectures.
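To make the pipeline described in the abstract concrete, the sketch below shows one way a computational-graph encoder and a training-time regressor could be wired together. This is a minimal illustrative sketch in plain PyTorch, not PredictDDL's actual Graph HyperNetwork: the module names, feature dimensions, message-passing scheme, and cluster-configuration features are all assumptions made for the example.

```python
# Illustrative sketch (assumptions, not the authors' implementation):
# encode a DNN's computational graph into a fixed-size embedding with a
# small graph neural network, then regress predicted training time from
# that embedding plus cluster-configuration features.
import torch
import torch.nn as nn


class GraphEncoder(nn.Module):
    """Encodes a computational graph (node features + adjacency) into a vector."""

    def __init__(self, node_feat_dim: int, hidden_dim: int, num_layers: int = 2):
        super().__init__()
        self.input_proj = nn.Linear(node_feat_dim, hidden_dim)
        self.layers = nn.ModuleList(
            [nn.Linear(hidden_dim, hidden_dim) for _ in range(num_layers)]
        )

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # node_feats: (num_nodes, node_feat_dim); adj: (num_nodes, num_nodes)
        h = torch.relu(self.input_proj(node_feats))
        for layer in self.layers:
            # Simple message passing: aggregate neighbor features via the adjacency matrix.
            h = torch.relu(layer(adj @ h))
        # Mean-pool node embeddings into a single graph-level embedding.
        return h.mean(dim=0)


class TrainingTimePredictor(nn.Module):
    """Regresses training time from a graph embedding and cluster features
    (e.g., number of workers, batch size); trained once per dataset type."""

    def __init__(self, embed_dim: int, cluster_feat_dim: int):
        super().__init__()
        self.regressor = nn.Sequential(
            nn.Linear(embed_dim + cluster_feat_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, graph_embed: torch.Tensor, cluster_feats: torch.Tensor) -> torch.Tensor:
        return self.regressor(torch.cat([graph_embed, cluster_feats], dim=-1))


if __name__ == "__main__":
    # Toy example: a 5-node computational graph with 8-dim node features and
    # 3 cluster-configuration features; all values are placeholders.
    encoder = GraphEncoder(node_feat_dim=8, hidden_dim=32)
    predictor = TrainingTimePredictor(embed_dim=32, cluster_feat_dim=3)

    node_feats = torch.randn(5, 8)
    adj = torch.eye(5)  # placeholder adjacency; a real graph encodes op dependencies
    cluster_feats = torch.tensor([4.0, 128.0, 1.0])  # e.g., workers, batch size, GPU type id

    embedding = encoder(node_feats, adj)
    predicted_time = predictor(embedding, cluster_feats)
    print(f"Predicted training time (arbitrary units): {predicted_time.item():.3f}")
```

Because the regressor consumes only a graph embedding and cluster features, a new DNN architecture only requires encoding its graph, which mirrors why the abstract's approach avoids retraining the prediction model for each new workload.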