Abstract

Background: Chat Generative Pre-trained Transformer (ChatGPT) has the potential to offer personalized, effective learning experiences for students by creating realistic virtual simulations for hands-on learning.

Objectives: To assess the performance of ChatGPT against subject experts in assessments within the Bachelor of Dental Surgery (BDS) curriculum.

Methods: A descriptive cross-sectional study was conducted among students of a dental college, in accordance with the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines. A self-administered, validated questionnaire was used for short-answer questions (SAQs) and critical-reasoning questions. Groups were compared using an independent-samples t-test, and statistical significance was assessed with the Pearson Chi-square test. A P value of <0.05 was considered statistically significant.

Results: The mean SAQ score was 4.61 ± 0.28 for Group 1 and 4.37 ± 0.26 for Group 2; although Group 1 scored higher, the difference was not statistically significant (P > 0.05). For critical reasoning, the mean score was 4.68 ± 0.24 for Group 1 and 2.09 ± 1.10 for Group 2, and this difference was statistically significant (P < 0.05).

Conclusion: Rather than treating artificial intelligence as a threat, dental educators should adapt teaching and assessment in dental education to benefit learners while mitigating its dishonest use.
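As a methodological aside, the group comparison reported above can be reproduced from summary statistics alone. The sketch below computes Welch's t-statistic (an unequal-variance variant of the independent-samples t-test) from the critical-reasoning means and standard deviations given in the abstract; the group size n = 5 is a purely illustrative assumption, since the abstract does not report sample sizes, so the resulting t and df values are not the study's actual results.

```python
import math

def welch_t_from_stats(m1, s1, n1, m2, s2, n2):
    """Welch's t-statistic and approximate degrees of freedom
    (Welch-Satterthwaite) from group summaries (mean, SD, n)."""
    v1, v2 = s1 ** 2 / n1, s2 ** 2 / n2          # per-group variance of the mean
    t = (m1 - m2) / math.sqrt(v1 + v2)           # standardized mean difference
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# Critical-reasoning scores from the abstract (4.68 +/- 0.24 vs 2.09 +/- 1.10);
# n = 5 per group is a hypothetical value for illustration only.
t, df = welch_t_from_stats(4.68, 0.24, 5, 2.09, 1.10, 5)
print(f"t = {t:.2f}, df = {df:.2f}")
```

Even under this small assumed sample, the t-statistic comfortably exceeds the two-sided 5% critical value (about 2.78 at df ≈ 4), consistent with the abstract's finding that the critical-reasoning difference is significant while the much smaller SAQ difference is not.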