The widespread proliferation of fake news on the Internet, especially in multi-modal formats, poses a substantial threat to society. Most deep learning-based approaches to fake news detection yield accurate predictions but lack explainability. Existing explainability-oriented models visualize salient components of the prediction or generate surface-level causes via Large Language Models; however, they can hardly provide the deeper rationale behind the fabrication of fake news, which is indispensable for misinformation mitigation. We therefore approach explainability from a different perspective, explaining at its very source how fake news is fabricated, in terms of what we term deceptive patterns. First, four types of deceptive patterns are pre-established: Image Manipulation, Cross-modal Inconsistency, Image Repurposing, and Others. Building on these, we propose GE-NSLM, a General Explainable Neuro-Symbolic Latent Model that harnesses Large Vision-Language Models and not only delivers accurate judgments but also offers insights into deceptive patterns. Specifically, each deceptive pattern is represented as a binary learnable latent variable, inferred through amortized variational inference under weak supervision guided by logical rules. Experiments show that GE-NSLM achieves competitive detection performance and, more importantly, provides interpretable insights into why specific news items are fake. Our code is available at https://github.com/hedongxiao-tju/GE-NSLM.
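As a rough illustration of the latent-variable formulation sketched above, the following is a minimal PyTorch sketch, not the authors' released implementation: the Gumbel-sigmoid relaxation, the Bernoulli prior, the specific logical rule, all module names, and the loss weighting are assumptions made here for illustration. It shows one plausible way to realize binary latent variables (one per deceptive pattern) with amortized variational inference and a soft logic-rule penalty as weak supervision.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# One latent per pre-established pattern: Image Manipulation,
# Cross-modal Inconsistency, Image Repurposing, Others.
NUM_PATTERNS = 4

class PatternEncoder(nn.Module):
    """Amortized inference network q(z|x): maps fused news features
    to one Bernoulli logit per deceptive pattern (hypothetical)."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, NUM_PATTERNS),
        )

    def forward(self, x):
        return self.net(x)  # logits of q(z_k = 1 | x)

def gumbel_sigmoid(logits, tau=0.5):
    """Relaxed Bernoulli (binary-Concrete) sample, so gradients can
    flow through the otherwise discrete latent variables."""
    u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
    g = torch.log(u) - torch.log1p(-u)  # Logistic(0, 1) noise
    return torch.sigmoid((logits + g) / tau)

class VeracityHead(nn.Module):
    """p(y|z): predicts the fake/real logit from the latent patterns."""
    def __init__(self):
        super().__init__()
        self.head = nn.Linear(NUM_PATTERNS, 1)

    def forward(self, z):
        return self.head(z).squeeze(-1)

def loss_fn(logits_z, z, y_logit, y, prior_p=0.3, lam=1.0):
    q = torch.sigmoid(logits_z)
    # ELBO-style terms: classification likelihood plus
    # KL(q(z|x) || Bernoulli(prior_p)), with prior_p assumed here.
    cls = F.binary_cross_entropy_with_logits(y_logit, y)
    kl = (q * torch.log(q / prior_p + 1e-8)
          + (1 - q) * torch.log((1 - q) / (1 - prior_p) + 1e-8)).sum(-1).mean()
    # Weak supervision from an illustrative logical rule,
    # "some deceptive pattern active -> item is fake", encoded as a
    # soft disjunction over the latents matched to the label.
    rule = 1 - torch.prod(1 - z, dim=-1)
    rule_loss = F.binary_cross_entropy(rule.clamp(1e-6, 1 - 1e-6), y)
    return cls + kl + lam * rule_loss

# Usage, with random features standing in for fused image-text embeddings.
x = torch.randn(8, 256)
y = torch.randint(0, 2, (8,)).float()
enc, clf = PatternEncoder(256), VeracityHead()
logits_z = enc(x)
z = gumbel_sigmoid(logits_z)
loss = loss_fn(logits_z, z, clf(z), y)
loss.backward()
```

At inference time, thresholding each q(z_k = 1 | x) would yield the binary pattern indicators that serve as the explanation alongside the fake/real judgment; how GE-NSLM combines these with the Vision-Language Model outputs is detailed in the paper and repository, not in this sketch.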