Topics
Computer science
Chaining
Reinforcement learning
Software deployment
Distributed computing
Computer networks
Provisioning
Service (business)
Key (lock)
Artificial intelligence
Software engineering
Computer security
Psychology
Psychotherapist
Economics
Authors
Jiuyun Xu, Xuemei Cao, Qiang Duan, Shibao Li
Source
Journal: IEEE Internet of Things Journal
[Institute of Electrical and Electronics Engineers]
Date: 2024-02-01
Volume/Issue: 11 (3): 5401-5416
Identifier
DOI: 10.1109/jiot.2023.3306737
Abstract
With the rapid development of SDN/NFV technologies, Service Function Chaining (SFC) has become a key enabler for end-to-end service provisioning in future networks. In the Internet of Things (IoT), the highly dynamic nature of the network environment demands flexible and adaptive mechanisms for dynamic SFC deployment to fully utilize network resources while meeting service requirements. Although reinforcement learning (RL) techniques offer a promising approach to dynamic SFC deployment, the learning delay of RL may limit its prompt response to sudden changes in network state and/or service demand. To address this challenge, in this paper we propose to employ a deep Q-learning network (DQN) method for dynamic SFC deployment combined with a tidal virtual machine (TVM) control mechanism for adaptive VM auto-scaling. We present a tidal DQN framework (TDQNF) that integrates the DQN method and TVM control in the ETSI NFV architecture and develop the algorithms for implementing DQN-based decisions for SFC deployment and TVM control for VM scaling. The performance of the TDQNF framework with the proposed algorithms has been evaluated through extensive simulation experiments. The obtained experimental results verify the effectiveness of the proposed scheme and indicate better performance in terms of system delay, packet loss, and load balancing in large-scale networks compared to existing methods.
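The RL decision loop behind DQN-based SFC deployment can be sketched with a toy Q-learning agent that learns to place each virtual network function (VNF) on the least-loaded node. This is a minimal stdlib-only illustration, not the authors' algorithm: the paper uses a deep Q-network with TVM auto-scaling, whereas here a lookup table stands in for the network, and the state encoding, transition dynamics, and reward model are all illustrative assumptions.

```python
# Toy Q-learning sketch of RL-based VNF placement for an SFC.
# Assumption: state = discretized per-node load levels; action = index
# of the node chosen to host the next VNF; reward penalizes placing a
# VNF on a loaded node (a crude proxy for queueing delay / packet loss).
import random

random.seed(0)

NODES = 3          # candidate hosting nodes
LOAD_LEVELS = 4    # load buckets per node, 0 = idle

Q = {}             # Q[state] -> list of action values

def q_row(state):
    return Q.setdefault(state, [0.0] * NODES)

def step(state, action):
    """Place a VNF on node `action`; return (next_state, reward)."""
    reward = -float(state[action])               # busier node -> worse
    loads = list(state)
    loads[action] = min(loads[action] + 1, LOAD_LEVELS - 1)
    drain = random.randrange(NODES)              # background churn
    loads[drain] = max(loads[drain] - 1, 0)
    return tuple(loads), reward

def train(episodes=3000, alpha=0.1, gamma=0.9, eps=0.2):
    for _ in range(episodes):
        # random start states so the whole state space is covered
        state = tuple(random.randrange(LOAD_LEVELS) for _ in range(NODES))
        for _ in range(10):                      # 10 placements/episode
            if random.random() < eps:            # epsilon-greedy explore
                action = random.randrange(NODES)
            else:
                row = q_row(state)
                action = row.index(max(row))
            nxt, reward = step(state, action)
            # Q-learning update: Q(s,a) += a*(r + g*max_a' Q(s',a') - Q(s,a))
            old = q_row(state)[action]
            q_row(state)[action] = old + alpha * (
                reward + gamma * max(q_row(nxt)) - old)
            state = nxt

train()
# Greedy policy for a state where node 1 is idle and the others are loaded.
best = max(range(NODES), key=lambda a: q_row((3, 0, 2))[a])
```

In a DQN, the table lookup `q_row(state)` is replaced by a neural network evaluated on the state vector, which is what makes the approach scale to the large, continuous network states the paper targets; the update rule and epsilon-greedy placement loop are conceptually the same.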