Topics: Computer science · Task (project management) · Artificial intelligence · Meaning (existential) · Natural language processing · Cognitive psychology · Data science · Psychology · Psychotherapist · Economics · Management
Authors
Sebastian Farquhar, Jannik Kossen, Lorenz Kuhn, Yarin Gal
Source
Journal: Nature [Nature Portfolio]
Date: 2024-06-19
Volume/issue: 630 (8017): 625-630
Citations: 84
Identifiers
DOI: 10.1038/s41586-024-07421-0
Abstract
Large language model (LLM) systems, such as ChatGPT [1] or Gemini [2], can show impressive reasoning and question-answering capabilities but often 'hallucinate' false outputs and unsubstantiated answers [3,4]. Answering unreliably or without the necessary information prevents adoption in diverse fields, with problems including fabrication of legal precedents [5] or untrue facts in news articles [6] and even posing a risk to human life in medical domains such as radiology [7]. Encouraging truthfulness through supervision or reinforcement has been only partially successful [8]. Researchers need a general method for detecting hallucinations in LLMs that works even with new and unseen questions to which humans might not know the answer. Here we develop new methods grounded in statistics, proposing entropy-based uncertainty estimators for LLMs to detect a subset of hallucinations, termed confabulations, which are arbitrary and incorrect generations. Our method addresses the fact that one idea can be expressed in many ways by computing uncertainty at the level of meaning rather than specific sequences of words. Our method works across datasets and tasks without a priori knowledge of the task, requires no task-specific data and robustly generalizes to new tasks not seen before. By detecting when a prompt is likely to produce a confabulation, our method helps users understand when they must take extra care with LLMs and opens up new possibilities for using LLMs that are otherwise prevented by their unreliability.
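The abstract's central move, computing entropy over clusters of meaning rather than over token sequences, can be illustrated with a short sketch. The code below is not the authors' released implementation; it assumes answers have already been sampled from an LLM at non-zero temperature, and the `entails(a, b)` predicate is a hypothetical stand-in for the bidirectional-entailment check (in the paper, a natural-language-inference model). It computes the discrete, frequency-based semantic-entropy estimate.

```python
import math

def semantic_entropy(answers, entails):
    """Discrete semantic-entropy estimate for one prompt.

    answers: strings sampled from the LLM at non-zero temperature.
    entails: hypothetical predicate entails(a, b) -> bool, assumed here
             to be backed by a natural-language-inference model.
    """
    clusters = []  # each cluster holds answers that share one meaning
    for ans in answers:
        for cluster in clusters:
            rep = cluster[0]
            # Same meaning cluster iff entailment holds in both directions.
            if entails(ans, rep) and entails(rep, ans):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])
    n = len(answers)
    # High entropy over meaning clusters flags a likely confabulation.
    return -sum((len(c) / n) * math.log(len(c) / n) for c in clusters)

# Toy usage with a stand-in entailment check (exact string match):
if __name__ == "__main__":
    samples = ["Paris", "Paris", "Paris, France", "Lyon", "Paris"]
    exact_match = lambda a, b: a == b
    print(f"semantic entropy: {semantic_entropy(samples, exact_match):.3f}")
```

Note that the paper also uses a probability-weighted, length-normalized variant of this entropy when sequence likelihoods are available; the frequency-based estimate above corresponds to the discrete approximation, which needs only the sampled text.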