Adaptability
Perspective (graphics)
Adaptation (eye)
Data science
Representation (politics)
Computer science
Scale (ratio)
Field (mathematics)
Artificial intelligence
Political science
Psychology
Geography
Ecology
Mathematics
Cartography
Pure mathematics
Law
Biology
Neuroscience
Politics
Authors
Yiheng Liu,Tianle Han,Siyuan Ma,Jiayue Zhang,Yuanyuan Yang,Jiaming Tian,Hao He,Antong Li,Mengshen He,Zhengliang Liu,Zihao Wu,Lin Zhao,Dajiang Zhu,Xiang Li,Qiang Ning,Dingang Shen,Tianming Liu,Bao Ge
Source
Journal: Cornell University - arXiv
Date: 2023-01-01
Citations: 68
Identifier
DOI:10.48550/arxiv.2304.01852
Abstract
This paper presents a comprehensive survey of ChatGPT-related (GPT-3.5 and GPT-4) research, state-of-the-art large language models (LLMs) from the GPT series, and their prospective applications across diverse domains. Key innovations such as large-scale pre-training that captures knowledge across the entire World Wide Web, instruction fine-tuning, and Reinforcement Learning from Human Feedback (RLHF) have played significant roles in enhancing LLMs' adaptability and performance. We performed an in-depth analysis of 194 relevant papers on arXiv, encompassing trend analysis, word-cloud representation, and distribution analysis across various application domains. The findings reveal significant and increasing interest in ChatGPT-related research, predominantly centered on direct natural language processing applications, while also demonstrating considerable potential in areas ranging from education and history to mathematics, medicine, and physics. This study endeavors to furnish insights into ChatGPT's capabilities, potential implications, and ethical concerns, and to offer direction for future advancements in this field.