Reinforcement learning
Computer science
Relay
Wireless sensor network
Resource allocation
Computer network
Energy harvesting
Wireless
Resource management (computing)
Key distribution in wireless sensor networks
Resource (disambiguation)
Wireless network
Energy (signal processing)
Distributed computing
Telecommunications
Artificial intelligence
Statistics
Physics
Power (physics)
Quantum mechanics
Mathematics
Identifier
DOI:10.1109/jiot.2021.3094465
Abstract
Green wireless communication has been studied extensively in wireless sensor networks (WSNs) for years, including the use of new and renewable energy sources as well as low-power and energy-saving technologies. In such networks, channel fading, insufficient and random energy arrivals, and possibly poor sensor deployment inevitably degrade or even interrupt communication among sensor nodes, which may lead to unacceptable performance of the entire network. To address this problem, we propose a WSN composed of several local subnetworks, each equipped with an amplify-and-forward relay and a specially designed working time cycle. Within this network, we study resource allocation policies that manage both power and time to maximize throughput. We formulate the optimization problem in each subnetwork as a Markov decision process and use deep reinforcement learning (DRL) to develop the resource allocation policies. An actor-critic strategy is applied to find the optimal solution over continuous state and action spaces and to adaptively maximize the network throughput based on energy harvesting and causal information about battery states and channel gains. Simulation results demonstrate that the proposed transmission policies achieve higher throughput in the local network and ultimately improve overall system performance compared with greedy, random, and conservative policies.
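To make the actor-critic idea in the abstract concrete, the following is a minimal sketch of a DDPG-style agent that maps a continuous state (battery level, channel gain, harvested energy) to a continuous power/time allocation. The state layout, action bounds, reward (an achievable-rate proxy), and environment dynamics here are illustrative assumptions, not the paper's model.

```python
# Hedged sketch: actor-critic over continuous state/action space for an
# energy-harvesting resource-allocation MDP. Environment details are assumed.
import numpy as np
import torch
import torch.nn as nn

STATE_DIM = 3    # assumed state: [battery level, channel gain, harvested energy]
ACTION_DIM = 2   # assumed action: [power fraction, time-slot share], each in (0, 1)

class Actor(nn.Module):
    """Maps the state to a continuous power/time allocation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Sigmoid())  # bound actions to (0, 1)
    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    """Estimates Q(s, a) for the throughput objective."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1))
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

def toy_step(state, action, rng):
    """Hypothetical one-slot dynamics: reward is a rate proxy limited by the
    energy causally available in the battery; energy arrivals are random."""
    battery, gain, harvest = state
    power_frac, time_share = action
    energy_used = min(battery, power_frac * battery)
    rate = time_share * np.log2(1.0 + gain * energy_used / max(time_share, 1e-6))
    next_state = np.array([
        battery - energy_used + harvest,   # battery evolves causally
        rng.exponential(1.0),              # new channel gain
        rng.uniform(0.0, 0.5)])            # new random energy arrival
    return next_state, float(rate)

actor, critic = Actor(), Critic()
a_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
c_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
rng = np.random.default_rng(0)
state, gamma = np.array([1.0, 1.0, 0.2]), 0.95

for step in range(200):
    s = torch.tensor(state, dtype=torch.float32)
    a = actor(s)
    next_state, reward = toy_step(state, a.detach().numpy(), rng)
    s2 = torch.tensor(next_state, dtype=torch.float32)
    with torch.no_grad():
        target = reward + gamma * critic(s2, actor(s2))
    # Critic: regress Q(s, a) toward the one-step bootstrapped target.
    c_loss = (critic(s, a.detach()) - target).pow(2).mean()
    c_opt.zero_grad(); c_loss.backward(); c_opt.step()
    # Actor: ascend the critic's value estimate (deterministic policy gradient).
    a_loss = -critic(s, actor(s)).mean()
    a_opt.zero_grad(); a_loss.backward(); a_opt.step()
    state = next_state
```

In practice the paper's policies would be trained with replay buffers, target networks, and the actual subnetwork model; this single-sample loop only illustrates how continuous power and time-share actions can be optimized jointly by an actor-critic pair.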