Consciousness
Sociality
Falsifiability
Dialogic
Cognitive science
Social intelligence
Human intelligence
Epistemology
Psychology
Sociology
Computer science
Artificial intelligence
Social psychology
Philosophy
Ecology
Biology
Source
Journal: Daedalus
[American Academy of Arts and Sciences]
Date: 2022-01-01
Volume/Issue: 151 (2): 183-197
Citations: 126
Abstract
Large language models (LLMs) represent a major advance in artificial intelligence and, in particular, toward the goal of human-like artificial general intelligence. It is sometimes claimed, though, that machine learning is “just statistics,” hence that, in this grander ambition, progress in AI is illusory. Here I take the contrary view that LLMs have a great deal to teach us about the nature of language, understanding, intelligence, sociality, and personhood. Specifically: statistics do amount to understanding, in any falsifiable sense. Furthermore, much of what we consider intelligence is inherently dialogic, hence social; it requires a theory of mind. Complex sequence learning and social interaction may be a sufficient basis for general intelligence, including theory of mind and consciousness. Since the interior state of another being can only be understood through interaction, no objective answer is possible to the question of when an “it” becomes a “who,” but for many people, neural nets running on computers are likely to cross this threshold in the very near future.