Crowdsourcing
Computer science
Task (project management)
Quality (philosophy)
Human–computer interaction
Variety (cybernetics)
Interface (matter)
Handwriting
Perception
Affect (linguistics)
Relation (database)
Test (biology)
Artificial intelligence
Psychology
World Wide Web
Data mining
Engineering
Philosophy
Systems engineering
Epistemology
Bubble
Communication
Maximum bubble pressure method
Neuroscience
Parallel computing
Paleontology
Biology
Authors
Ailbhe N. Finnerty, Pavel Kucherbaev, Stefano Tranquillini, Gregorio Convertino
Identifier
DOI:10.1145/2499149.2499168
Abstract
Crowdsourcing is emerging as an effective method for performing tasks that require human abilities, such as tagging photos, transcribing handwriting and categorising data. Crowd workers perform small chunks of larger tasks in return for a reward, which is generally monetary. Reward can be one factor for motivating workers to produce higher quality results. Yet, as highlighted by previous research, the task design, in terms of its instructions and user interface, can also affect the workers' perception of the task, thus affecting the quality of results. In this study we investigate both factors, reward and task design, to better understand their role in relation to the quality of work in crowdsourcing. In Experiment 1 we test a variety of reward schemas while in Experiment 2 we measure the effects of the complexity of tasks and interface on attention. The long-term goal is to establish guidelines for designing tasks with the aim to maximize workers' performance.