Reinforcement learning
Computer science
Cloud computing
Resource allocation
Reinforcement
Resource (disambiguation)
Distributed computing
Energy (signal processing)
Artificial intelligence
Computer network
Engineering
Operating system
Statistics
Mathematics
Structural engineering
Authors
Haoran Li, Gaozhao Wang, Lin Li, Jiayi Wang
Identifier
DOI:10.60087/jaigs.v1i1.243
Abstract
This paper presents a new deep reinforcement learning (DRL) framework for resource allocation and optimization in cloud computing. The proposed method leverages a multi-agent DRL architecture to handle the complex decision-making required in large-scale cloud environments. We formulate the problem as a Markov decision process, with a state space that captures resource utilization, workload characteristics, and energy consumption. The action space comprises VM placement, VM migration, and physical machine power-state decisions. A carefully designed reward function balances energy consumption, efficiency, and resource-utilization objectives. We adapt the Proximal Policy Optimization (PPO) algorithm to handle the heterogeneous action space and incorporate advanced training techniques such as prioritized experience replay and curriculum learning. Simulations driven by real-world workload traces show that our method outperforms both conventional and single-agent DRL methods, achieving a 25% reduction in energy consumption while keeping the SLA violation rate at 2.5%. The framework adapts to different workload patterns and scales well to large data-center environments. A further comprehensive study validates the proposal, showing significant improvements in energy consumption and efficiency over existing commercial management systems.
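To make the MDP formulation in the abstract concrete, the sketch below shows a minimal, hypothetical environment with the same shape of state, action, and reward described above: per-host utilization and per-VM demand as the state, VM-to-host placement as the action, and a weighted reward trading off energy, SLA penalties, and utilization. This is not the authors' implementation; the host/VM counts, power model, reward weights, and the greedy placeholder policy are all illustrative assumptions.

```python
# Hypothetical sketch (not the paper's code): a minimal MDP-style environment
# for VM placement, illustrating the state/action/reward structure the
# abstract describes. All constants and dynamics are illustrative assumptions.
import numpy as np

N_HOSTS = 4   # physical machines (assumed small for illustration)
N_VMS = 8     # virtual machines to place


class CloudEnv:
    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        # CPU demand of each VM and its current placement (host index, -1 = unplaced)
        self.vm_demand = self.rng.uniform(0.05, 0.30, size=N_VMS)
        self.placement = -np.ones(N_VMS, dtype=int)
        return self._state()

    def _host_load(self):
        load = np.zeros(N_HOSTS)
        for vm, host in enumerate(self.placement):
            if host >= 0:
                load[host] += self.vm_demand[vm]
        return load

    def _state(self):
        # State: per-host utilization + per-VM demand + placement flags
        load = self._host_load()
        placed = (self.placement >= 0).astype(float)
        return np.concatenate([load, self.vm_demand, placed])

    def step(self, action):
        # Action: flat discrete index encoding (vm index, target host)
        vm, host = divmod(action, N_HOSTS)
        self.placement[vm] = host
        load = self._host_load()

        # Energy term: active hosts draw idle power plus load-proportional power
        active = load > 0
        energy = active.sum() * 0.5 + load.sum() * 0.5
        # SLA term: penalize hosts whose demand exceeds capacity (1.0)
        sla_penalty = np.clip(load - 1.0, 0.0, None).sum()
        # Utilization term: reward consolidating work on fewer hosts
        utilization = load[active].mean() if active.any() else 0.0

        # Weighted reward balancing energy, SLA, and utilization (weights assumed)
        reward = -1.0 * energy - 10.0 * sla_penalty + 2.0 * utilization
        done = bool((self.placement >= 0).all())
        return self._state(), reward, done


if __name__ == "__main__":
    env = CloudEnv()
    state = env.reset()
    total = 0.0
    for vm in range(N_VMS):
        # Greedy placeholder policy: put each VM on the least-loaded host
        host = int(np.argmin(env._host_load()))
        state, reward, done = env.step(vm * N_HOSTS + host)
        total += reward
    print("episode reward:", round(total, 3))
```

In the multi-agent PPO setup the paper describes, a learned policy would replace the greedy placeholder in the `__main__` block, and the reward weights would encode the energy/SLA/utilization trade-off reported in the results.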