Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond

Computer Science · Data Science
Authors
Jingfeng Yang, Hongye Jin, Ruixiang Tang, Xiaotian Han, Qizhang Feng, Haoming Jiang, Bing Yin, Xia Hu
Source
Journal: Cornell University - arXiv
Identifier
DOI: 10.48550/arxiv.2304.13712
Abstract

This paper presents a comprehensive and practical guide for practitioners and end-users working with Large Language Models (LLMs) in their downstream natural language processing (NLP) tasks. We provide discussions and insights into the usage of LLMs from the perspectives of models, data, and downstream tasks. First, we offer an introduction and brief summary of current GPT- and BERT-style LLMs. Then, we discuss the influence of pre-training data, training data, and test data. Most importantly, we provide a detailed discussion about the use and non-use cases of large language models for various natural language processing tasks, such as knowledge-intensive tasks, traditional natural language understanding tasks, natural language generation tasks, emergent abilities, and considerations for specific tasks. We present various use cases and non-use cases to illustrate the practical applications and limitations of LLMs in real-world scenarios. We also try to understand the importance of data and the specific challenges associated with each NLP task. Furthermore, we explore the impact of spurious biases on LLMs and delve into other essential considerations, such as efficiency, cost, and latency, to ensure a comprehensive understanding of deploying LLMs in practice. This comprehensive guide aims to provide researchers and practitioners with valuable insights and best practices for working with LLMs, thereby enabling the successful implementation of these models in a wide range of NLP tasks. A curated list of practical guide resources for LLMs, regularly updated, can be found at \url{https://github.com/Mooler0410/LLMsPracticalGuide}.