AI-generated news is becoming increasingly prevalent on social media, raising questions about how audiences perceive, evaluate, and engage with this content. This study examines how AI authorship labeling (vs. human labeling) and content type (factual, conspiratorial, partisan) jointly shape readers’ cognitive, affective, and behavioral responses. Across three experiments (N = 1,600), results show that readers could not reliably detect AI authorship without disclosure, yet their evaluations diverged depending on content type. When content was labeled as “AI-generated,” it consistently received lower cognitive and affective evaluations, regardless of factual accuracy, indicating a robust AI aversion effect. Crucially, emotionally charged content (conspiratorial or partisan) attenuated this effect: such content elicited stronger engagement and sharing intentions even when attributed to AI, suggesting that emotional resonance can override source skepticism. These findings support a dual-processing account: while authorship cues shape cognitive judgments, emotional content drives affective and behavioral responses. The results challenge the effectiveness of binary AI transparency labels and underscore the need for context-sensitive disclosure practices in journalism. They also reveal new challenges for platform governance and digital literacy, especially in curbing emotionally manipulative AI-generated content on social media.