Faces created by artificial intelligence (AI) are now considered indistinguishable from real faces. Still, humans vary in their ability to detect these faces, a skill so novel that it would have been useless only a few years ago. We show that some individuals are consistently better than others at discriminating real from AI-generated faces. We used latent variable modeling to test whether this ability can be predicted by a domain-general ability, called o, which is measured as the shared variance between perceptual and memory judgments of both novel and familiar objects. We show that o predicts detection of AI-generated faces better than face recognition ability, intelligence, or experience with AI. An analysis of the relation between performance and cues in the images reveals that people are more likely to be misled by cues in AI faces than by cues in real faces, and it suggests that observers with high o are less cue-dependent than those with low o. The o advantage on our task likely reflects robust visual processing under challenging conditions rather than superior artifact detection. Our results add to a growing literature suggesting that o predicts a wide range of perceptual decisions, including one that lacks evolutionary precedent, providing insight into the cognitive architecture underlying complex perceptual judgments. An understanding of individual differences in AI detection may facilitate interactions between humans and AI, for instance, by helping to optimize training data for generative models.