Facial beauty prediction (FBP) is a cutting-edge task in deep learning that aims to equip machines with the ability to assess facial attractiveness in a human-like manner. In human perception, facial beauty is strongly associated with facial symmetry: balanced facial structures are often perceived as more aesthetically appealing. Leveraging symmetry therefore provides an interpretable prior for FBP and supplies geometric constraints that enhance feature learning. However, existing multi-task FBP models still face challenges such as limited annotated data, insufficient frequency–temporal modeling, and feature conflicts arising from task heterogeneity. The Mamba model excels at feature extraction and long-range dependency modeling but struggles with parameter sharing and computational efficiency in multi-task settings. In contrast, mixture-of-experts (MoE) enables adaptive expert selection, reducing redundancy while enhancing task specialization. This paper proposes MoMamba, a multi-task decoder that combines Mamba’s state-space modeling with MoE’s dynamic routing to improve multi-scale feature fusion and adaptability. A detail enhancement module fuses high- and low-frequency components from the discrete cosine transform (DCT) with temporal features from Mamba, and a state-aware MoE module incorporates low-rank expert modeling and task-specific decoding. Experiments on the SCUT-FBP and SCUT-FBP5500 datasets demonstrate superior performance in both classification and regression, particularly in symmetry-related perception modeling.
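To make the "state-aware MoE with low-rank experts" idea concrete, the following is a minimal PyTorch sketch of that kind of layer: a router, conditioned on each token together with a pooled "state" summary, selects a sparse top-k subset of low-rank experts and combines their outputs with the routing weights. All module names, dimensions, the pooling-based state summary, and the top-k routing rule are illustrative assumptions, not the authors' implementation; the DCT-based detail enhancement and the Mamba backbone are omitted here.

```python
# Hypothetical sketch of a state-aware, low-rank mixture-of-experts layer.
# Not the paper's implementation; names and routing details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LowRankExpert(nn.Module):
    """Expert parameterized as a low-rank bottleneck (dim -> rank -> dim)."""

    def __init__(self, dim: int, rank: int):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)
        self.up = nn.Linear(rank, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(F.gelu(self.down(x)))


class StateAwareMoE(nn.Module):
    """Routes each token to its top-k low-rank experts.

    The router sees the token feature concatenated with a pooled sequence
    summary, which is how "state-aware" routing is interpreted here (an
    assumption, not the paper's definition).
    """

    def __init__(self, dim: int, num_experts: int = 4, rank: int = 16, top_k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList([LowRankExpert(dim, rank) for _ in range(num_experts)])
        self.router = nn.Linear(2 * dim, num_experts)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        state = x.mean(dim=1, keepdim=True).expand_as(x)      # pooled state summary
        logits = self.router(torch.cat([x, state], dim=-1))   # (B, L, num_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)        # sparse expert selection
        weights = F.softmax(weights, dim=-1)

        # Dense evaluation for clarity; real MoE layers dispatch only routed tokens.
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = (idx[..., slot] == e).unsqueeze(-1)     # tokens routed to expert e
                out = out + mask * weights[..., slot:slot + 1] * expert(x)
        return x + out                                          # residual combination


if __name__ == "__main__":
    layer = StateAwareMoE(dim=64)
    feats = torch.randn(2, 49, 64)   # e.g., a flattened 7x7 feature map per image
    print(layer(feats).shape)        # torch.Size([2, 49, 64])
```

In a multi-task decoder of the kind the abstract describes, task-specific heads (classification and regression) would sit on top of such a layer, with the low-rank experts keeping the added parameter count small relative to full-width feed-forward experts.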