Modality (human–computer interaction)
Gesture
Computer science
Sign language
Expression (computer science)
Silence
Sign (mathematics)
Linguistics
Natural language processing
Task (project management)
Speech recognition
Segmentation
Manual communication
Communication
Artificial intelligence
Psychology
Mathematics
Aesthetics
Mathematical analysis
Philosophy
Economics
Management
Programming language
Authors
Susan Goldin-Meadow, David McNeill, Jenny L. Singleton
Source
Journal: Psychological Review (American Psychological Association)
Date: 1996-01-01
Volume/Issue: 103 (1): 34-55
Citations: 177
Identifier
DOI: 10.1037/0033-295x.103.1.34
Abstract
Grammatical properties are found in conventional sign languages of the deaf and in unconventional gesture systems created by deaf children lacking language models. However, they do not arise in spontaneous gestures produced along with speech. The authors propose a model explaining when the manual modality will assume grammatical properties and when it will not. The model argues that two grammatical features, segmentation and hierarchical combination, appear in all settings in which one human communicates symbolically with another. These properties are preferentially assumed by speech whenever words are spoken, constraining the manual modality to a global form. However, when the manual modality must carry the full burden of communication, it is freed from the global form it assumes when integrated with speech--only to be constrained by the task of symbolic communication to take on the grammatical properties of segmentation and hierarchical combination.