Users need to disclose personal information to obtain accurate services when interacting with artificial intelligence-generated content (AIGC) platforms. This may heighten their privacy concerns and lower their disclosure intention, which in turn may undermine the sustained development of these platforms. Drawing on privacy calculus theory, this study explores AIGC users' information disclosure intention. We conducted an online survey that yielded 446 valid responses and analyzed the data with a mixed method combining structural equation modeling (SEM) and fuzzy-set qualitative comparative analysis (fsQCA). The results show that perceived anonymity and privacy statements negatively affect privacy concerns, whereas perceived anthropomorphism positively affects privacy concerns. Perceived anthropomorphism, perceived anonymity, and privacy statements all positively affect privacy benefits. Both privacy concerns and privacy benefits predict disclosure intention. In addition, AI literacy strengthens the effects of both privacy concerns and privacy benefits on disclosure intention. These findings imply that AIGC platforms should mitigate users' privacy concerns and increase privacy benefits in order to promote disclosure and ensure their continuous development.