Brain-inspired spiking neural networks (SNNs) have garnered significant research attention in algorithm design and perception applications. However, their potential in the decision-making domain, particularly in model-based reinforcement learning, remains underexplored. In reinforcement learning, a world model is a predictive model that learns the environment's dynamics and enables agents to simulate future trajectories in a latent space, thereby improving sample efficiency and long-horizon planning. The difficulty lies in the need for spiking neurons with long-term temporal memory, as well as network optimization that can integrate and learn information for accurate predictions. The dynamic dendritic information-integration mechanism of biological neurons offers valuable insights for addressing these challenges. In this study, we propose a multicompartment neuron model capable of nonlinearly integrating information from multiple dendritic sources to dynamically process long sequential inputs. Based on this model, we construct a spiking world model (Spiking-WM) that integrates a spiking state-space model, a spiking convolutional encoder, and a fully connected spiking network for policy learning, enabling model-based deep reinforcement learning with SNNs. We evaluate our model on the DeepMind Control Suite, demonstrating that Spiking-WM outperforms existing SNN-based models and achieves performance comparable to artificial neural network-based world models employing gated recurrent units (GRUs). Furthermore, we assess the long-term memory capabilities of the proposed model on speech datasets, including the Spiking Heidelberg Digits (SHD) dataset, the TIMIT (Texas Instruments/Massachusetts Institute of Technology) Acoustic-Phonetic Continuous Speech Corpus, and LibriSpeech 100h, showing that our multicompartment neuron model surpasses other SNN-based architectures in processing long sequences.
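The dendritic mechanism summarized above can be illustrated with a minimal sketch. Everything below is an assumption for illustration only, not the paper's actual model: each dendritic branch leaky-integrates its own input, and the soma applies an assumed sigmoid-gated nonlinearity to the summed branch potentials before standard leaky-integrate-and-fire dynamics with a hard reset.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def multicompartment_lif(dendrite_inputs, tau_d=10.0, tau_s=5.0,
                         v_th=1.0, dt=1.0):
    """Toy multicompartment spiking neuron (illustrative sketch only).

    dendrite_inputs: list of T time steps, each a list of D per-branch
    input currents. Each dendritic compartment leaky-integrates its own
    input; the soma combines branch potentials nonlinearly (here a
    sigmoid gate on their sum -- an assumed form of dendritic
    integration) and fires with a hard reset at threshold.
    Returns a length-T binary spike train.
    """
    num_branches = len(dendrite_inputs[0])
    u = [0.0] * num_branches  # dendritic compartment potentials
    v = 0.0                   # somatic membrane potential
    spikes = []
    for x_t in dendrite_inputs:
        # leaky integration within each dendritic compartment
        u = [ui + dt / tau_d * (-ui + xi) for ui, xi in zip(u, x_t)]
        # nonlinear integration of all branches at the soma
        s = sum(u)
        v += dt / tau_s * (-v + sigmoid(s) * s)
        if v >= v_th:
            spikes.append(1)
            v = 0.0  # hard reset after a spike
        else:
            spikes.append(0)
    return spikes

# Usage: 4 dendritic branches driven by random input for 100 steps
random.seed(0)
inputs = [[random.uniform(0.0, 0.5) for _ in range(4)] for _ in range(100)]
out = multicompartment_lif(inputs)
```

Because the branch potentials are mixed through a nonlinearity before reaching the soma, the neuron's response to joint input across branches differs from a plain sum of independent inputs, which is the qualitative property the multicompartment design relies on.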