Computer science
Domain (mathematical analysis)
Knowledge transfer
Transmission (computing)
Natural language processing
Data science
Artificial intelligence
Knowledge management
Mathematics
Parallel computing
Mathematical analysis
Authors
Haoran Li, Xinyuan Zhao, Dadi Guo, Hanlin Gu, Ziqian Zeng, Yuxing Han, Yangqiu Song, Lixin Fan, Qiang Yang
Source
Journal: Cornell University - arXiv
Date: 2024-05-23
Citations: 3
Identifier
DOI: 10.48550/arxiv.2405.14212
Abstract
As large language models (LLMs) demonstrate unparalleled performance and generalization ability, they are widely used and integrated into various applications. In sensitive domains, as commonly described in federated learning scenarios, directly using external LLMs on private data is strictly prohibited by stringent data security and privacy regulations. For local clients, which typically hold limited computational resources and domain-specific data, using LLMs to improve their domain-specific small language models (SLMs) has attracted considerable research attention. Observing that LLMs can empower domain-specific SLMs, existing methods predominantly concentrate on leveraging public data or LLMs to generate additional data and thereby transfer knowledge from LLMs to SLMs. However, due to the discrepancies between LLM-generated data and clients' domain-specific data, these methods cannot yield substantial improvements on domain-specific tasks. In this paper, we introduce a Federated Domain-specific Knowledge Transfer (FDKT) framework, which enables domain-specific knowledge transfer from LLMs to SLMs while preserving clients' data privacy. The core insight is to leverage the LLM to augment data based on domain-specific few-shot demonstrations, which are synthesized from private domain data using differential privacy. Such synthetic samples share a similar data distribution with clients' private data and allow the server LLM to generate targeted knowledge to improve clients' SLMs. Extensive experimental results demonstrate that the proposed FDKT framework consistently improves SLMs' task performance by around 5% with a privacy budget of less than 10, compared to local training on private data.
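To make the described pipeline concrete, the following is a minimal Python sketch of the FDKT flow as summarized in the abstract, not the authors' implementation: the client releases only a differentially private keyword histogram computed from its private data (a toy Laplace-mechanism release; the paper's actual DP synthesis mechanism, prompts, and models are not reproduced here), builds synthetic few-shot demonstrations from that release by post-processing, asks the server-side LLM (stubbed here as `server_llm_augment`) for domain-style training samples, and fine-tunes its local SLM (stubbed as `finetune_slm`) on the augmented data. All function names, the epsilon value, and the example texts are illustrative assumptions.

```python
# Hypothetical sketch of the FDKT flow: DP statistics -> synthetic few-shot
# demos -> server LLM augmentation -> local SLM fine-tuning.
import random
from collections import Counter
from typing import Dict, List


def dp_keyword_histogram(private_texts: List[str], epsilon: float) -> Dict[str, float]:
    """Toy Laplace-mechanism release of keyword counts.

    Each document contributes its distinct tokens once; a real system would
    also bound the number of tokens per document for proper sensitivity accounting.
    """
    counts: Counter = Counter()
    for text in private_texts:
        counts.update(set(text.lower().split()))
    # Laplace(1/epsilon) noise, drawn as the difference of two Exp(epsilon) samples.
    return {w: c + random.expovariate(epsilon) - random.expovariate(epsilon)
            for w, c in counts.items()}


def synthesize_demos(noisy_hist: Dict[str, float], k: int) -> List[str]:
    """Post-processing of the DP release: assemble k keyword-bag 'demonstrations'
    from the highest-scoring noisy keywords (no additional privacy cost)."""
    top = [w for w, _ in sorted(noisy_hist.items(), key=lambda kv: -kv[1])[:20]]
    return [" ".join(random.sample(top, min(5, len(top)))) for _ in range(k)]


def server_llm_augment(demos: List[str], n_new: int) -> List[str]:
    """Placeholder for the server-side LLM call that generates domain-style
    training samples conditioned on the synthetic demonstrations."""
    return [f"synthetic sample {i} in the style of: {random.choice(demos)}"
            for i in range(n_new)]


def finetune_slm(train_texts: List[str]) -> None:
    """Placeholder for fine-tuning the client's small language model."""
    print(f"fine-tuning SLM on {len(train_texts)} augmented samples")


if __name__ == "__main__":
    private_data = [f"patient note {i} reports mild fever and cough" for i in range(50)]
    noisy_hist = dp_keyword_histogram(private_data, epsilon=8.0)  # budget < 10, per the abstract
    demos = synthesize_demos(noisy_hist, k=4)
    augmented = server_llm_augment(demos, n_new=16)
    finetune_slm(augmented)
```

Because the demonstrations are derived only from the noisy histogram, the subsequent LLM augmentation and SLM fine-tuning incur no additional privacy cost (post-processing), which is consistent with the abstract's total privacy budget of less than 10.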