Authors
Honghai Wu,Baibing Wang,Huahong Ma,Xiaohui Zhang,Ling Xing
Identifier
DOI:10.1109/jiot.2024.3392329
Abstract
With the rapid advancement of in-vehicle communication technology, vehicular edge caching has garnered considerable attention as a pivotal technology for improving the efficiency of data transmission. However, existing studies often overlook the increased average content access latency and decreased caching hit rate that stem from the conflict between the limited storage space of in-vehicle edge servers and vehicle mobility. To address these issues, this paper proposes a Multi-agent Federated Deep Reinforcement Learning based Collaborative Caching Strategy (MFDRL-CCS) that leverages Vehicle-to-Vehicle (V2V) communications. Specifically, we first perform vehicle connectivity prediction with a Recurrent Neural Network (RNN), taking into account the characteristics of vehicle nodes and their interrelations. The optimal caching vehicle is then selected based on the connectivity between vehicle nodes and the density of vehicle nodes. Meanwhile, a Multi-Head Attention Popularity Prediction (MHAPP) model is constructed, which fuses multi-dimensional features, including historical popularity, social relationships, and geographic location, to predict content popularity. Finally, the edge collaborative caching model is formulated as a Markov Decision Process (MDP). Under a multi-agent competitive deep Q-learning framework, each vehicle learns the optimal caching strategy through an independent Q-network to maximize long-term rewards, and federated learning is used to train the cache replacement algorithm in a distributed manner. Compared with existing caching policies, the policy proposed in this paper improves the caching hit rate by approximately 19.8% and reduces content access latency by about 12.5%.
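The abstract's MHAPP model fuses historical-popularity, social, and geographic features with multi-head attention before scoring content popularity. The paper's actual architecture is not given here, so the sketch below only illustrates the core mechanism it names: scaled dot-product self-attention over per-content feature vectors, split across heads, with a simple read-out; all dimensions, projections, and the read-out are illustrative assumptions, and random matrices stand in for learned weights.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, n_heads=2, seed=0):
    """Fuse per-content feature rows with multi-head self-attention.

    X: (n_contents, d_model); each row would concatenate historical-popularity,
    social-relationship, and geographic features (hypothetical layout).
    Returns a fused (n_contents, d_model) representation.
    """
    n, d = X.shape
    assert d % n_heads == 0
    d_h = d // n_heads
    rng = np.random.default_rng(seed)
    # Random projections stand in for learned Q/K/V weights in this sketch.
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    heads = []
    for h in range(n_heads):
        s = slice(h * d_h, (h + 1) * d_h)
        scores = Q[:, s] @ K[:, s].T / np.sqrt(d_h)  # scaled dot-product
        heads.append(softmax(scores) @ V[:, s])      # per-head attention output
    return np.concatenate(heads, axis=1)

def predict_popularity(X):
    """Score contents via a trivial mean read-out over the fused features."""
    fused = multi_head_attention(X)
    return softmax(fused.mean(axis=1))  # popularity distribution over contents

X = np.random.default_rng(1).standard_normal((5, 8))  # 5 contents, 8 features
p = predict_popularity(X)  # one probability per content
```

In a trained model the projections and read-out would be learned end to end; the point of the sketch is only how multiple heads let different feature subspaces attend independently before fusion.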
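The caching formulation above combines per-vehicle reinforcement learning with federated aggregation of the learned models. As a rough, self-contained analogue (the paper uses independent deep Q-networks; here each agent keeps only a tabular per-content value, and the reward, eviction rule, and request distribution are all assumptions of this sketch), the following shows the two moving parts: agents learning cache replacement from hit/miss feedback, and a FedAvg-style step that averages their value tables instead of sharing raw requests.

```python
import random

class CacheAgent:
    """One vehicle's cache-replacement learner (illustrative stand-in
    for the paper's independent per-vehicle Q-network)."""
    def __init__(self, capacity, n_contents, alpha=0.3, eps=0.1):
        self.capacity, self.alpha, self.eps = capacity, alpha, eps
        self.q = [0.0] * n_contents  # learned value of keeping each content
        self.cache = set()

    def step(self, request):
        hit = request in self.cache
        reward = 1.0 if hit else 0.0
        # TD-style update of the requested content's value toward the reward
        self.q[request] += self.alpha * (reward - self.q[request])
        if not hit:
            if len(self.cache) >= self.capacity:
                if random.random() < self.eps:          # epsilon-greedy exploration
                    victim = random.choice(sorted(self.cache))
                else:                                   # evict lowest-valued content
                    victim = min(self.cache, key=lambda c: self.q[c])
                self.cache.discard(victim)
            self.cache.add(request)
        return hit

def federated_average(agents):
    """FedAvg-style aggregation: replace each agent's table by the mean,
    so models are exchanged without sharing local request traces."""
    mean = [sum(a.q[c] for a in agents) / len(agents)
            for c in range(len(agents[0].q))]
    for a in agents:
        a.q = list(mean)

random.seed(0)
agents = [CacheAgent(capacity=2, n_contents=6) for _ in range(3)]
hits = total = 0
for rnd in range(300):
    for a in agents:
        req = min(int(random.expovariate(0.8)), 5)  # skewed: low ids popular
        hits += a.step(req)
        total += 1
    if rnd % 25 == 24:
        federated_average(agents)  # periodic distributed model exchange
hit_rate = hits / total
```

Because requests are skewed, agents converge on keeping the popular contents cached, and the periodic averaging lets each vehicle benefit from popularity observed by the others.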