Traditional CNNs and Transformers both perform well in medical image segmentation, but each has limitations. CNNs capture local features through convolution operations yet struggle to model long-range dependencies, whereas Transformers use self-attention to capture global context at a high computational cost. In medical images, lesions vary widely in shape and size, so a model must represent both overall structure and fine local boundaries, which makes effective fusion of local and global features essential. To address these issues, we propose the Multi-Feature Fusion Mamba model (MF-Mamba). The model uses a Multi-Scale Channel Fusion Network (MCFN) to extract object features at multiple scales and capture local context, helping it segment lesions of different sizes, and adds a Direction Perception Attention (DPA) module to capture long-range context, strengthening the network's ability to model long-range interdependencies. Experiments on the ISIC2017, ISIC2018, and Synapse public datasets show that MF-Mamba performs markedly better on skin lesion segmentation tasks, demonstrating a clear advantage.
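
To make the described design concrete, the sketch below outlines how a multi-scale channel fusion module and a direction-aware attention module could be combined in one block. The internals of MCFN and DPA here are illustrative assumptions only (the kernel sizes, the height/width pooling, and the MFMambaBlock wrapper are hypothetical), not the authors' published implementation.

    # Illustrative sketch only: MCFN/DPA internals below are assumptions,
    # not the paper's actual implementation.
    import torch
    import torch.nn as nn


    class MCFN(nn.Module):
        """Hypothetical Multi-Scale Channel Fusion Network: parallel
        convolutions at several kernel sizes, fused along channels."""

        def __init__(self, channels: int, scales=(1, 3, 5)):
            super().__init__()
            self.branches = nn.ModuleList(
                nn.Conv2d(channels, channels, k, padding=k // 2) for k in scales
            )
            self.fuse = nn.Conv2d(channels * len(scales), channels, 1)

        def forward(self, x):
            # Concatenate multi-scale responses, then project back to the
            # input channel width.
            return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))


    class DPA(nn.Module):
        """Hypothetical Direction Perception Attention: context pooled along
        the height and width axes to capture long-range row/column cues."""

        def __init__(self, channels: int):
            super().__init__()
            self.gate_h = nn.Conv2d(channels, channels, 1)
            self.gate_w = nn.Conv2d(channels, channels, 1)

        def forward(self, x):
            # Row-wise and column-wise global context, broadcast back over
            # the full feature map as a gating mask.
            ctx_h = x.mean(dim=3, keepdim=True)   # (B, C, H, 1)
            ctx_w = x.mean(dim=2, keepdim=True)   # (B, C, 1, W)
            attn = torch.sigmoid(self.gate_h(ctx_h) + self.gate_w(ctx_w))
            return x * attn


    class MFMambaBlock(nn.Module):
        """Sketch of one block: local multi-scale features (MCFN) refined by
        directional long-range attention (DPA), with a residual connection."""

        def __init__(self, channels: int):
            super().__init__()
            self.mcfn = MCFN(channels)
            self.dpa = DPA(channels)

        def forward(self, x):
            return x + self.dpa(self.mcfn(x))


    if __name__ == "__main__":
        block = MFMambaBlock(channels=32)
        out = block(torch.randn(1, 32, 64, 64))
        print(out.shape)  # torch.Size([1, 32, 64, 64])

The sketch shows the intended division of labor: the convolutional branches supply local, scale-varied detail, while the directional gating contributes inexpensive long-range context, matching the local/global fusion goal stated above.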