Psychology
Mental health
Applied psychology
Computer science
Psychotherapist
Authors
Mohammad Rahimi, Antino Kim, Sezgin Ayabakan, Alan R. Dennis
Identifier
DOI: 10.25300/misq/2025/18555
Abstract
Only a fraction of people with mental health issues seek medical care, in part because of fear of judgment, so deploying text-based conversational agents (i.e., chatbots) for mental health screening is often viewed as a way to lower barriers to mental health care. We conducted four experiments and a qualitative study and, contrary to common assumptions, consistently found that participants perceived a text-based chatbot as more judgmental than a human mental health care professional, even though the interactions were identical. This greater judgmentalness reduced the willingness to use the service, disclose information, and follow the agent’s recommendations. Participants described judgmentalness as a rush to judgment without fully grasping the issues. The chatbot was perceived as more judgmental because it was less capable of deeply understanding the issues (e.g., emotionally and socially) and conveying a sense of being heard and validated. It has long been assumed that chatbots can address the real or imagined fear of being judged by others for stigmatized conditions like mental health. Our study shows that perceptions of judgmentalness are actually the opposite of what has been assumed and that these perceptions significantly influence patients’ acceptance of chatbots for mental health screening.