Identification (biology)
Modal verb
Gesture
Gaze
Artificial intelligence
Computer science
Machine learning
Mutual information
Convolutional neural network
Plant
Biology
Chemistry
Polymer chemistry
Authors
Jing Chen,Lu Zhang,Quan Lu,Hui Liu,Shuaipu Chen
Identifier
DOI:10.1016/j.ipm.2022.103220
Abstract
Finding useful information should be the highest priority in health information identification. Predicting information usefulness can significantly improve the effectiveness and efficiency of health information identification, which plays a vital role in fighting misinformation. Modal behaviors, such as gesture and gaze, are promising indicators of usefulness because they reflect users' cognitive processing in a reliable, thorough, natural, and direct way. This study therefore used gesture and gaze behaviors to predict whether information is useful for health information identification. Twenty-four college students were recruited to search freely for information on a smartphone and judge the truthfulness of four propositions (two true and two false) about public health epidemics. The participants' gesture behavior, gaze behavior, and self-reported information usefulness were collected. Based on user cognition, the process of judging information usefulness was divided into two phases: skimming and reading. Thirty-one features derived from modal behaviors in each phase were extracted, and feature optimization was performed using the Mann-Whitney U test and random forest. Five common algorithms were used to construct information usefulness prediction models, which were compared by F1 score. Dwell time and gaze entropy in the reading phase proved to be the most important gesture and gaze features, respectively. A BP neural network was selected to build a unimodal model based on gesture, and gradient boosting decision trees were selected to build a unimodal model based on gaze and a multimodal model combining both. All three models achieved F1 scores above 77% and are applicable to different scenarios in health information identification.
The gesture-based model can operate under strict technological or legal constraints, the gaze-based model is well suited to AR, MR, or metaverse applications, and the combined model offers an alternative for multimodal human-computer interaction.
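The pipeline described in the abstract (Mann-Whitney U filtering, random-forest feature ranking, then model comparison by F1 score) can be sketched roughly as follows. This is a minimal illustration on synthetic data, not the authors' implementation: the significance threshold of 0.05, the data shapes, and the single gradient-boosting model are assumptions for the example.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for the 31 behavioral features per phase;
# labels: 1 = information judged useful, 0 = not useful (hypothetical data).
X = rng.normal(size=(200, 31))
y = (X[:, 0] + X[:, 5] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Step 1: keep features whose distributions differ between the two classes
# (Mann-Whitney U test; alpha = 0.05 is an assumed threshold).
keep = [j for j in range(X.shape[1])
        if mannwhitneyu(X[y == 1, j], X[y == 0, j]).pvalue < 0.05]

# Step 2: rank the surviving features by random-forest importance.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:, keep], y)
ranked = [keep[i] for i in np.argsort(rf.feature_importances_)[::-1]]

# Step 3: evaluate a candidate model by F1 score on a held-out split
# (the study compared five algorithms this way; one is shown here).
Xtr, Xte, ytr, yte = train_test_split(X[:, ranked], y, random_state=0)
gbdt = GradientBoostingClassifier(random_state=0).fit(Xtr, ytr)
f1 = f1_score(yte, gbdt.predict(Xte))
print(f"selected {len(keep)} features, F1 = {f1:.3f}")
```

In the study itself this comparison was run per modality (gesture features, gaze features, and both combined), with the best algorithm retained for each setting.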