Diffusion models excel at generating high-quality images and scale readily, which has made them widely adopted. In particular, diffusion-based text-to-image models have demonstrated significant potential for transferring the style of reference images. Recent research has largely focused on decoupling the overall style and semantics of a reference image, but little work has addressed balancing style weights across one or multiple reference images. We propose a method that extracts one or more styles from one or more reference images and fuses them to generate images with diverse styles. We use the SAM model to perform semantic segmentation on the reference images and extract the desired style images, and we design a parallel decoupling adapter, built on an image adapter, to decouple multiple styles simultaneously. Additionally, we optimize the encoder to extract style more precisely from the style reference images while ensuring that style information is not lost. Our method enables multi-visual style prompting without any fine-tuning, and the intensity of each style is controllable. Furthermore, our work demonstrates outstanding visual stylization results, achieving the best balance between style intensity and text controllability.
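As a rough illustration of the weighted multi-style fusion idea described above (a sketch, not the paper's actual implementation), the PyTorch snippet below shows a decoupled cross-attention layer in the spirit of image-adapter designs: each reference style is encoded into its own set of image tokens, attended to through a separate key/value projection, and mixed into the text-conditioned features with a per-style weight. All class, parameter, and tensor names here are hypothetical; the SAM-based segmentation that produces the style images is assumed to happen upstream.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiStyleCrossAttention(nn.Module):
    """Sketch of decoupled cross-attention: text tokens plus N weighted style-token streams."""

    def __init__(self, dim: int, num_styles: int):
        super().__init__()
        self.to_q = nn.Linear(dim, dim, bias=False)
        # K/V projections for the text-prompt tokens.
        self.to_k_text = nn.Linear(dim, dim, bias=False)
        self.to_v_text = nn.Linear(dim, dim, bias=False)
        # One parallel K/V projection per style stream ("parallel decoupling" of styles).
        self.to_k_style = nn.ModuleList(nn.Linear(dim, dim, bias=False) for _ in range(num_styles))
        self.to_v_style = nn.ModuleList(nn.Linear(dim, dim, bias=False) for _ in range(num_styles))

    def forward(self, hidden, text_tokens, style_tokens, style_weights):
        # hidden:        (B, L, D) diffusion-model features
        # text_tokens:   (B, T, D) text-encoder output
        # style_tokens:  list of N tensors of shape (B, S, D), one per reference style
        # style_weights: list of N floats controlling each style's intensity
        q = self.to_q(hidden)
        out = F.scaled_dot_product_attention(
            q, self.to_k_text(text_tokens), self.to_v_text(text_tokens)
        )
        for k_proj, v_proj, tokens, weight in zip(
            self.to_k_style, self.to_v_style, style_tokens, style_weights
        ):
            out = out + weight * F.scaled_dot_product_attention(
                q, k_proj(tokens), v_proj(tokens)
            )
        return out
```

In this sketch, setting a style's weight to zero removes its contribution entirely, while intermediate weights blend the styles, which is one simple way to realize the controllable per-style intensity mentioned in the abstract.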