Unsupervised semantic textual similarity is an important task in natural language processing that aims to learn robust sentence representations without labeled data. A major challenge in this area, particularly for contrastive learning methods, is generating a sufficient number of high-quality positive and negative samples efficiently. Existing approaches such as SimCSE and PromptBERT are limited in both the efficiency and the volume of sample generation. To address these limitations, we propose a novel dual-mask prompt template strategy that greatly increases the quantity and efficiency of positive and negative sample generation: each template simultaneously produces one positive and one negative sample. Furthermore, to remove the noise introduced by the prompt templates themselves, we present a simple difference-based noise separation technique. In addition, we extend the InfoNCE loss function to better shape the learned feature-space distribution. We evaluate our method with the SentEval toolkit on seven standard semantic textual similarity datasets. Without any external data augmentation, it achieves an average Spearman correlation coefficient of 78.52, comparable to or better than several classical methods. Comprehensive ablation studies further validate the individual contributions of the dual-mask prompting, the noise separation strategy, and the extended loss function.
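The abstract does not give the exact form of the difference-based denoising or the extended objective; the following PyTorch sketch is only an illustration of how such components are commonly realized, not the authors' implementation. It subtracts a bare-template representation to separate template noise and augments a SimCSE-style InfoNCE loss with the template-generated negatives as extra hard-negative terms. The function names, tensor shapes, and temperature value are assumptions introduced here for illustration.

```python
import torch
import torch.nn.functional as F


def denoise(h_prompted, h_template_only):
    """Difference-based noise separation (illustrative): subtract the
    representation produced by the bare prompt template so that only the
    sentence-specific signal remains."""
    return h_prompted - h_template_only


def dual_mask_info_nce(anchor, positive, negative, tau=0.05):
    """InfoNCE over a batch of (batch, dim) embeddings, extended with the
    template-generated negatives as additional denominator terms (one common
    way to add hard negatives; the paper's exact formulation may differ)."""
    anchor, positive, negative = (F.normalize(x, dim=-1)
                                  for x in (anchor, positive, negative))
    pos_sim = anchor @ positive.t() / tau   # in-batch positives on the diagonal
    neg_sim = anchor @ negative.t() / tau   # explicitly generated hard negatives
    logits = torch.cat([pos_sim, neg_sim], dim=1)          # (batch, 2*batch)
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, labels)
```

In this sketch, `anchor`, `positive`, and `negative` would be the [MASK]-position representations taken from the original and dual-mask prompted inputs after denoising; the cross-entropy over the concatenated similarity matrix pulls each anchor toward its own positive while pushing it away from in-batch and template-generated negatives.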