Purpose
Generative AI tools such as ChatGPT use large language models that process human language and learn from patterns identified in large data sets. Despite the great benefits offered by ChatGPT, its privacy risks cannot be ignored. The aim of this study is twofold: first, to empirically examine users’ awareness of the different dimensions of ChatGPT’s privacy policy, namely, data collection, data usage, data disclosure and privacy-enhancing features; second, to investigate the impact of users’ awareness of these dimensions on their continuous usage intention, directly or indirectly through the mediating roles of privacy risks and privacy concerns.

Design/methodology/approach
Data were collected from university students through an online questionnaire and analysed using SPSS 24.0 and AMOS 29.0.

Findings
The analysis of 300 responses from ChatGPT users shows that while users are aware of the various data practices associated with ChatGPT, the dimensions of the privacy policy have varying effects on continuous usage intention.

Originality/value
Although the current literature has highlighted the privacy challenges of ChatGPT, existing studies are mainly descriptive and lack empirical investigation of privacy. This research addresses that gap by empirically examining the impact of awareness of the dimensions of OpenAI’s privacy policy on users’ continuous intention to use ChatGPT, directly or indirectly through the mediating roles of privacy risks and privacy concerns. Furthermore, unlike prior literature that treats awareness of the privacy policy as a single general construct, the authors adopt a detailed and comprehensive approach that acknowledges its various dimensions.