Temporal difference (TD) learning is a fundamental technique in reinforcement learning that updates value function estimates for states or state-action pairs using a TD target. This target provides an improved estimate of the true value by combining the immediate reward with the estimated value of subsequent states. We propose an enhanced multistate TD (MSTD) target that uses multiple subsequent states to estimate the value function more accurately than traditional TD learning, which relies on a single subsequent state. Building on this MSTD concept, we develop actor-critic algorithms that incorporate replay buffer management in two modes and integrate with deep deterministic policy gradient (DDPG) and soft actor-critic (SAC). Numerical experiments demonstrate that algorithms employing the MSTD target outperform traditional methods in learning performance. In addition, we analyze the convergence of Q-learning with the MSTD target.
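For concreteness, the standard one-step TD target for a transition $(s_t, a_t, r_t, s_{t+1})$ can be written as below; the multistate variant shown alongside it is only an illustrative sketch, assuming a simple average over targets built from the first $N$ subsequent states, and is not necessarily the exact MSTD definition developed in the paper.

% Standard one-step TD target (known baseline), followed by an
% illustrative multistate construction: average the n-step targets
% over the first N subsequent states. The weighting is an assumption;
% the precise MSTD target is specified in the body of the paper.
\begin{align}
  y_t &= r_t + \gamma\,\hat{Q}\big(s_{t+1}, a_{t+1}\big),
      && \text{(one-step TD target)} \\
  y_t^{(n)} &= \sum_{k=0}^{n-1} \gamma^{k} r_{t+k}
      + \gamma^{n}\,\hat{Q}\big(s_{t+n}, a_{t+n}\big),
      && \text{($n$-step target, illustrative)} \\
  y_t^{\mathrm{MS}} &= \frac{1}{N} \sum_{n=1}^{N} y_t^{(n)},
      && \text{(illustrative multistate average)}
\end{align}

Here $\gamma$ is the discount factor and $\hat{Q}$ is the current critic estimate; how the $N$ subsequent states are actually weighted in the MSTD target is detailed in the paper.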