Authors
Hrishikesh Dutta,Amit Kumar Bhuyan,Subir Biswas
Identifier
DOI:10.1109/tgcn.2024.3358230
Abstract
Efficient slot allocation and transmit-sleep scheduling are effective access control mechanisms for improving communication performance and network lifetime in resource-constrained wireless networks. In this paper, a decentralized, multi-tier framework is presented for joint slot allocation and transmit-sleep scheduling in wireless network nodes with thin energy budgets. The key learning objectives of this architecture are collision-free transmission scheduling, reduced energy consumption, and improved network performance. These are achieved through the cooperative, decentralized learning behavior of multiple Reinforcement Learning (RL) agents. The resulting architecture provides throughput-sustainable support for data flows while minimizing energy expenditure and sleep-induced packet losses. To achieve this, a concept of Context is introduced into the RL framework in order to capture network traffic dynamics. The resulting Contextual Deep Q-Learning (CDQL) model makes the system adaptive to dynamic and heterogeneous network load, and it improves energy efficiency compared with traditional tabular Q-learning-based approaches. The results demonstrate how this framework can be used to prioritize application-specific requirements, namely energy saving and communication reliability. The trade-offs among packet drop, energy expenditure, and learning convergence are studied, and an application-specific solution is proposed for managing them. The performance is compared against an existing state-of-the-art scheduling approach. Moreover, an analytical model of the system dynamics is developed and validated via simulation for arbitrary mesh topologies and traffic patterns.
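The core idea of conditioning a Q-learning agent's transmit/sleep policy on a traffic Context can be illustrated with a minimal tabular sketch. This is not the paper's CDQL model (which uses a deep Q-network and decentralized multi-agent cooperation); the context labels, state, reward values, and environment below are illustrative assumptions only, showing how a context-indexed Q-table lets one agent learn different schedules under different offered loads.

```python
import random
from collections import defaultdict

random.seed(0)
ACTIONS = ["transmit", "sleep"]

class ContextualQAgent:
    """Tabular Q-learning with the Q-table additionally indexed by a
    traffic-context label (hypothetical simplification of CDQL)."""

    def __init__(self, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = defaultdict(float)  # (context, state, action) -> value
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, context, state):
        # epsilon-greedy exploration during training
        if random.random() < self.eps:
            return random.choice(ACTIONS)
        return self.greedy(context, state)

    def greedy(self, context, state):
        return max(ACTIONS, key=lambda a: self.q[(context, state, a)])

    def update(self, context, state, action, reward, next_state):
        best_next = max(self.q[(context, next_state, a)] for a in ACTIONS)
        td_target = reward + self.gamma * best_next
        self.q[(context, state, action)] += self.alpha * (
            td_target - self.q[(context, state, action)]
        )

def toy_reward(context, action):
    """Illustrative rewards: transmitting under high load risks collision;
    sleeping saves energy but can cause sleep-induced packet loss."""
    if action == "sleep":
        return 0.1 if context == "low" else -0.5
    return 1.0 if context == "low" else -1.0

agent = ContextualQAgent()
for _ in range(2000):
    ctx = random.choice(["low", "high"])
    a = agent.act(ctx, "idle")
    agent.update(ctx, "idle", a, toy_reward(ctx, a), "idle")

print(agent.greedy("low", "idle"), agent.greedy("high", "idle"))
```

Because the Q-table is keyed on the context, the same agent learns to transmit when offered load is low and to sleep when it is high, which is the adaptivity the Context mechanism provides; the deep variant in the paper replaces the table with a neural approximator so it generalizes across many load levels.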