Computer science
Scalability
Software deployment
Federated learning
Distributed computing
Authentication (law)
Generative grammar
Generative model
Wireless
Physical layer
Information privacy
Artificial intelligence
Computer network
Distributed learning
Channel (broadcasting)
Data modeling
Artificial noise
Layer (electronics)
Secure communication
Machine learning
Internet of Things
Computer security
Data anonymization
Training set
Transmission mode
Server
Authors
Z. Tong, Jingjing Wang, Xin Zhang, Chunxiao Jiang, Jianwei Liu, Mérouane Debbah
Identifiers
DOI: 10.1109/mwc.2025.3625127
Abstract
Generative artificial intelligence (AI) facilitates secure communications by modeling signal and channel characteristics. However, its reliance on centralized training raises privacy and scalability challenges. Federated learning (FL) is a decentralized machine learning paradigm that enables collaborative model training across distributed devices while preserving data privacy by keeping training data local. In this paper, we outline the limitations of existing standalone and federated generative models. We then propose a federated diffusion model (FDM) that jointly considers both the training and sampling phases for IoT scenarios, and explore applications of the proposed framework across various wireless security tasks. To demonstrate its effectiveness, we provide a case study in a multi-user physical layer authentication scenario. Experimental results show that the proposed FDM substantially matches the performance of a centralized diffusion model while ensuring secure deployment in distributed IoT environments.
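The collaborative-training pattern the abstract describes can be illustrated with federated averaging (FedAvg), the standard FL aggregation scheme: each client trains on its private data, and the server averages the resulting model parameters, so raw data never leaves the devices. The sketch below is a minimal toy illustration, not the paper's FDM; the one-parameter linear model, synthetic client data, and hyperparameters are all hypothetical choices for demonstration.

```python
# Minimal FedAvg sketch: clients fit y = w*x on private data; the server
# averages their weights each round. Only model parameters are exchanged.
import random

def local_sgd(w, data, lr=0.05, epochs=20):
    """Local gradient steps on one client's private samples."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x   # d/dw of the squared error
            w -= lr * grad
    return w

def fed_avg(global_w, client_datasets, rounds=10):
    """Each round: local training on every client, then server-side
    (unweighted) averaging of the returned weights."""
    for _ in range(rounds):
        local_ws = [local_sgd(global_w, d) for d in client_datasets]
        global_w = sum(local_ws) / len(local_ws)
    return global_w

random.seed(0)
true_w = 3.0
# Three clients, each holding private noisy samples of y = 3x
clients = [[(x, true_w * x + random.gauss(0, 0.1))
            for x in [random.uniform(-1, 1) for _ in range(20)]]
           for _ in range(3)]
w = fed_avg(0.0, clients)
print(w)  # converges near the true weight 3.0
```

In practice the average is weighted by each client's dataset size, and the paper's FDM additionally has to coordinate the diffusion sampling phase across devices, which this toy regression does not capture.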