Reinforcement learning
Computer science
Distributed computing
Edge computing
Task (project management)
Joint (building)
Resource allocation
Resource management (computing)
Enhanced Data Rates for GSM Evolution (EDGE)
Artificial intelligence
Computer network
Engineering
Systems engineering
Architectural engineering
Authors
Yan Chen, Yanjing Sun, Hao Yu, Tarik Taleb
Identifier
DOI: 10.1109/tnse.2024.3375374
Abstract
Edge servers can collaborate to enhance service capability. However, cloud servers may be unable to execute centralized management due to unpredictable communications. In such systems, distributed task and resource management are vital but challenging due to heterogeneity and various restrictions. Therefore, this paper studies such edge systems and formulates the distributed joint task and computing resource allocation problem for maximizing the quality of experience (QoE). Given the restrictions on real-time state observations and resource management involving other facilities, we decompose it into sub-problems of distributed task allocation and computing resource allocation. After formulating the problem as a partially observed Markov decision process, we propose a two-step approach based on multi-agent (MA) deep reinforcement learning. First, each edge server performs a policy to allocate tasks for its associated users according to a partial observation. We employ the MA deep deterministic policy gradient to tackle the vast discrete action space. In addition, we incorporate the action entropy of massive users' task allocation to enhance exploration. Then, we prove that the QoE-maximized computing resource allocation is a problem of maximizing a sum of sigmoids, and we address it by sigmoidal programming. Simulation results reveal that the proposed approach dramatically improves the system QoE and reduces the average service latency. Moreover, the proposed solution outperforms benchmarks in training and convergence.
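The second step described in the abstract casts computing resource allocation as maximizing a sum of sigmoid-shaped QoE functions under a capacity constraint. The paper solves this via sigmoidal programming; as an illustrative sketch only, the toy example below finds a local optimum of the same kind of objective with SciPy's general-purpose SLSQP solver. All parameter values (`a`, `b`, `capacity`) are hypothetical and do not come from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def qoe(x, a, b):
    # Sigmoid-shaped QoE of each user as a function of its allocated compute x_i
    return 1.0 / (1.0 + np.exp(-(a * x - b)))

def allocate(a, b, capacity):
    """Locally maximize sum_i sigmoid(a_i * x_i - b_i) s.t. sum_i x_i <= capacity, x_i >= 0."""
    n = len(a)
    x0 = np.full(n, capacity / n)  # start from an equal split (feasible)
    cons = {"type": "ineq", "fun": lambda x: capacity - x.sum()}  # capacity constraint
    bounds = [(0.0, capacity)] * n
    res = minimize(lambda x: -qoe(x, a, b).sum(), x0,
                   method="SLSQP", bounds=bounds, constraints=cons)
    return res.x

# Hypothetical sigmoid steepness / offset per user and a total compute budget
a = np.array([1.0, 2.0, 0.5])
b = np.array([2.0, 3.0, 1.0])
capacity = 10.0
x = allocate(a, b, capacity)
```

Because each sigmoid is non-concave, SLSQP only guarantees a local optimum; the sigmoidal-programming formulation in the paper is what provides stronger guarantees for this objective class.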