Abstract
Purpose
This study aims to explain the privacy paradox, wherein individuals, despite privacy concerns, willingly share personal information while using AI chatbots. Departing from previous research that has largely taken a non-anthropomorphic view of AI chatbots, this paper contends that AI chatbots increasingly carry an emotional dimension for their users. The study therefore examines the phenomenon from both rational and non-rational perspectives, providing a more comprehensive understanding of user behavior in digital environments.

Design/methodology/approach
Employing a questionnaire survey (N = 480), this research focuses on young users who regularly engage with AI chatbots. Drawing upon parasocial interaction theory and privacy calculus theory, the study elucidates the mechanisms governing users' willingness to disclose information.

Findings
The cognitive, emotional and behavioral dimensions of parasocial interaction all positively influence the perceived benefits of using ChatGPT, which in turn enhance privacy disclosure. Although all three dimensions were expected to reduce perceived risk, only the emotional and behavioral dimensions have a significant negative effect on it; perceived risk, in turn, negatively influences privacy disclosure. Notably, the absence of a significant mediating effect for the cognitive dimension suggests that users' awareness of privacy risks does not deter disclosure. Instead, emotional factors drive privacy decisions: users are more likely to disclose personal information on the basis of positive experiences and engagement with ChatGPT. This confirms the existence of the privacy paradox.

Research limitations/implications
This study acknowledges several limitations. While the sample was adequately stratified, the focus was primarily on young users in China. Future research should explore broader demographic groups, including older users, to understand how different age groups engage with AI chatbots. Additionally, although the study was conducted in the Chinese context, the findings have broader applicability and highlight the potential for cross-cultural comparisons. User attitudes toward AI chatbots may vary across cultures, with East Asian cultures typically exhibiting a more positive attitude toward social AI systems than Western cultures. This distinction, rooted in Eastern philosophies such as the animism found in Shinto and Buddhist traditions, suggests that East Asians are more likely than their Western counterparts to anthropomorphize technology (Yam et al., 2023; Folk et al., 2023).

Practical implications
The findings offer valuable insights for developers, policymakers and educators navigating the rapidly evolving landscape of intelligent technologies. First, regarding technology design, AI chatbot developers should not focus solely on functional aspects but should also consider the emotional and social dimensions of user interactions. By enhancing emotional connection and ensuring transparent privacy communication, developers can significantly improve user experiences (Meng and Dai, 2021). Second, there is a pressing need for comprehensive user education programs. Because users tend to prioritize perceived benefits over risks, it is essential to raise awareness of privacy risks while also emphasizing the positive outcomes of responsible information sharing, thereby fostering a more informed and balanced approach to user engagement (Vimalkumar et al., 2021).
Third, cultural and ethical considerations must be incorporated into AI chatbot design. In collectivist societies such as China, users may prioritize emotional satisfaction and social harmony over privacy concerns (Trepte, 2017; Johnston, 2009). Developers and policymakers should account for these cultural factors when designing AI systems. Furthermore, AI systems should communicate privacy policies clearly, addressing potential vulnerabilities and ensuring that users understand the extent to which their data may be exposed (Wu et al., 2024). Lastly, as AI chatbots become deeply integrated into daily life, there is a growing need for societal discussion of privacy norms and trust in AI systems. This research prompts reflection on the evolving relationship between technology and personal privacy, especially in societies where trust is shaped by cultural and emotional factors. Developing frameworks that ensure responsible AI practices while fostering user trust is crucial for the long-term societal integration of AI technologies (Nah et al., 2023).

Originality/value
The findings deepen theoretical insight into the role of emotions in engagement with generative artificial intelligence (gAI) chatbots, enriching the emotion-oriented research agenda and framework concerning chatbots. They also contribute to the literature on human–computer interaction and technology acceptance within the privacy calculus framework, offering practical guidance for developers, policymakers and educators.