Rapid progress in Artificial Intelligence (AI) over the past few years has raised important questions, and triggered anxiety, about the potential (mis)use of AI-generated content in information warfare and disinformation campaigns. A crucial question remains unanswered, however: whether, and to what extent, this technology can be credibly harnessed by extremists, whose language is usually characterized by highly specific subcultural registers that AI models are unlikely to mimic easily. This paper answers that question, offering the first rigorous evaluation of the credibility of synthetic (AI-generated) extremist prose. Using an expert survey experiment that measures the credibility of fake incel forum posts and ISIS magazine paragraphs generated through an original workflow for producing synthetic extremist text, we show that these texts achieve high credibility scores, confounding even world-leading experts. These findings, discussed in light of the emerging literature on the nefarious "dual-use" of synthetic content, not only define and evaluate the threat of extremists harnessing language models for propaganda, but also strengthen our broader understanding of the role language models are set to play in hostile political communication, and ultimately revive the debate on the human costs of technological progress.