The accuracy and stability of pathology image segmentation have become critical factors in clinical applications such as cancer screening and tumor grading. However, complex local structures, uncertain regions, and subtle morphological variations in pathological images continue to pose significant challenges. Most existing feature fusion approaches rely on simple aggregation of extracted features, neglecting the unique characteristics and relative importance of distinct feature representations, which ultimately limits their ability to improve model performance. To address these issues, we propose the Gestalt-Inspired Feature Integration Network (GeNet), a novel architecture inspired by Gestalt theory that mirrors the human visual system's ability to derive holistic understanding from partial information. Embracing the principle that 'the whole is greater than the sum of its parts,' GeNet introduces a fusion mechanism that synergistically leverages multi-scale information, assessing the similarity between features to achieve a more meaningful integration of global context and local detail. Given the variability of target appearance in pathological images, we use information entropy to quantify feature uncertainty, allowing the model to prioritize uncertain regions and reduce ambiguous predictions. To explicitly eliminate redundancy and misalignment across multiple features, we further design a refinement block that applies parallel convolutional recalibration to fully exploit the complementary strengths of the fused features. Extensive experiments on multiple pathological image segmentation datasets, including GlaS, GCaSeg, and EBHI-Seg, demonstrate that GeNet achieves high accuracy and strong robustness, offering a new perspective on the joint modeling of global and local features in medical image analysis.
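
To make the two central mechanisms concrete, the sketch below gives a minimal PyTorch interpretation of entropy-based uncertainty weighting and similarity-guided fusion of global and local features. It is an illustration under assumptions, not the paper's implementation: the function names, the cosine-similarity weighting scheme, and the multiplicative reweighting by the uncertainty map are hypothetical choices consistent with the abstract's description.

```python
import math

import torch
import torch.nn.functional as F


def entropy_uncertainty_map(logits: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Per-pixel Shannon entropy of the class posterior, normalized to [0, 1].

    logits: (B, C, H, W) raw class scores. High entropy marks ambiguous
    regions, which the abstract says the model should prioritize.
    """
    p = F.softmax(logits, dim=1)
    entropy = -(p * torch.log(p + eps)).sum(dim=1, keepdim=True)  # (B, 1, H, W)
    return entropy / math.log(logits.shape[1])  # divide by max entropy log(C)


def similarity_guided_fusion(global_feat: torch.Tensor,
                             local_feat: torch.Tensor) -> torch.Tensor:
    """Blend global-context and local-detail features per position.

    Both inputs: (B, C, H, W) at the same resolution. Cosine similarity
    between the two feature vectors at each location sets the blend weight:
    where the branches agree the mix is even, where they disagree the local
    branch (assumed to carry fine boundary detail) dominates.
    """
    sim = F.cosine_similarity(global_feat, local_feat, dim=1).unsqueeze(1)
    w = (sim + 1.0) / 2.0                      # map [-1, 1] -> [0, 1]
    return w * global_feat + (1.0 - w) * local_feat


if __name__ == "__main__":
    b, c, h, w = 2, 64, 32, 32
    g = torch.randn(b, c, h, w)                # deep-stage (global) features
    l = torch.randn(b, c, h, w)                # shallow-stage (local) features
    fused = similarity_guided_fusion(g, l)

    logits = torch.randn(b, 4, h, w)           # hypothetical 4-class head
    u = entropy_uncertainty_map(logits)        # (B, 1, H, W) in [0, 1]
    reweighted = fused * (1.0 + u)             # emphasize uncertain regions
    print(fused.shape, reweighted.shape)
```

In the full design described in the abstract, the refinement block with parallel convolutional recalibration would further process the fused features to remove residual redundancy and misalignment; that component is omitted from this sketch.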