Keywords
Computer science
Reinforcement learning
Adversarial system
Artificial intelligence
Adversary
Deep learning
Machine learning
Artificial neural network
Adversarial machine learning
Task (project management)
Authors
Shaojun Zhang, Chen Wang, Albert Y. Zomaya
Source
Venue: Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS)
Date: 2020-11-17
Pages: 1-8
Identifier
DOI:10.1109/mascots50786.2020.9285955
Abstract
A scheduler is essential for resource management in a shared computer cluster; in particular, scheduling algorithms play an important role in meeting the service level objectives of user applications in the large-scale clusters that underlie cloud computing. Traditional cluster schedulers are often based on empirical observations of the patterns of jobs running on them, and it is unclear how effective they are at capturing the patterns of the wide variety of jobs in clouds. Recent advances in Deep Reinforcement Learning (DRL) promise a new optimization framework for schedulers to address this problem systematically. A DRL-based scheduler can extract detailed patterns from job features and the dynamics of cloud resource utilization to make better scheduling decisions. However, the deep neural network models used by such a scheduler may be vulnerable to adversarial attacks, and there is limited research investigating this vulnerability in DRL-based schedulers. In this paper, we present a white-box attack method showing that malicious users can exploit the scheduling vulnerability to benefit certain jobs. The proposed attack requires only minor perturbations to job features to significantly change the scheduling priority of these jobs. We implement both greedy and critical-path-based algorithms to facilitate attacks on a state-of-the-art DRL-based scheduler called Decima. Our extensive experiments on TPC-H workloads show attack success rates of 62% and 66% with the two algorithms, and successful attacks achieve completion time reductions of 18.6% and 17.5%, respectively.
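The abstract describes a white-box setting in which the attacker has gradient access to the scheduler's neural network and perturbs a job's features within a small budget to raise its priority. Below is a minimal, hypothetical sketch of such a greedy perturbation loop, assuming a differentiable PyTorch model `scheduler_net` that maps a job-feature matrix to per-job priority scores; the function name, feature layout, and perturbation budget are illustrative assumptions, not the paper's actual algorithm or Decima's interface.

```python
import torch

def greedy_feature_attack(scheduler_net, job_features, target_idx,
                          epsilon=0.05, steps=10):
    """Greedily nudge one job's features so the (assumed) scheduler model
    assigns it a higher priority, while keeping perturbations minor."""
    x = job_features.clone().detach().requires_grad_(True)
    # Stay within +/- epsilon of each original feature value.
    lower = job_features - epsilon * job_features.abs()
    upper = job_features + epsilon * job_features.abs()
    step_size = epsilon * job_features[target_idx].abs() / steps

    for _ in range(steps):
        scores = scheduler_net(x)      # assumed per-job priority scores
        loss = -scores[target_idx]     # lower loss = higher target priority
        loss.backward()
        with torch.no_grad():
            # Move only the target job's features in the direction that
            # increases its score, then project back into the budget.
            x[target_idx] -= step_size * x.grad[target_idx].sign()
            x.clamp_(min=lower, max=upper)
        x.grad.zero_()
    return x.detach()
```

A critical-path-based variant, as mentioned in the abstract, would restrict which stages' features are perturbed rather than treating all features of the target job uniformly; the sketch above covers only the greedy case.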