Three-dimensional reconstruction entails the development of mathematical models of three-dimensional objects that are suitable for computational representation and processing. This technique constructs realistic 3D models from images and has significant practical applications across various fields. This study proposes a rapid and precise multi-view 3D reconstruction method to address the low reconstruction efficiency and sparse, poor-quality point clouds produced by incremental structure-from-motion (SfM) algorithms in multi-view geometry. The methodology involves capturing a series of overlapping images of a campus scene. We employed the scale-invariant feature transform (SIFT) algorithm to extract feature points from each image, applied a KD-tree algorithm for inter-image matching, and used the random sample consensus (RANSAC) algorithm with adaptive threshold adjustment to eliminate mismatches, thereby improving feature-matching accuracy and the number of matched point pairs. Additionally, we developed a similarity-based feature-matching strategy that optimizes the pairwise matching process within the incremental SfM algorithm; this approach reduces the number of required matches and improves both algorithmic efficiency and model reconstruction accuracy. For dense reconstruction, we utilized the patch-based multi-view stereo (PMVS) algorithm. The results indicate that the proposed method reconstructs more feature points and improves algorithmic efficiency by approximately a factor of ten compared with the original incremental reconstruction algorithm. Consequently, the generated point cloud data are more detailed and the textures are clearer, demonstrating that our method is an effective solution for three-dimensional reconstruction.
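To make the feature-matching stage concrete, the sketch below shows one common way to combine SIFT extraction, KD-tree (FLANN) matching, and RANSAC-based mismatch rejection using OpenCV. It is a minimal illustration under assumed parameters: the function name `match_pair`, the ratio-test threshold, and the RANSAC settings are illustrative choices, not the authors' implementation, which additionally uses adaptive threshold adjustment and a similarity-based pair-selection strategy not shown here.

```python
import cv2
import numpy as np

def match_pair(img1, img2, ratio=0.75, ransac_thresh=1.0):
    """Match two grayscale images with SIFT + KD-tree (FLANN) + RANSAC filtering."""
    # Detect SIFT keypoints and compute descriptors in both images
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # FLANN matcher backed by a KD-tree index (algorithm=1 selects KD-tree)
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
    knn = flann.knnMatch(des1, des2, k=2)

    # Lowe's ratio test discards ambiguous nearest-neighbour matches
    good = [m for m, n in knn if m.distance < ratio * n.distance]
    if len(good) < 8:
        return []

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # RANSAC on the fundamental matrix rejects geometrically inconsistent pairs
    _, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, ransac_thresh, 0.999)
    if mask is None:
        return []
    return [m for m, keep in zip(good, mask.ravel()) if keep]
```

In practice, the surviving inlier matches for each image pair would then be passed to the incremental SfM stage for pose estimation and triangulation, with PMVS used afterwards for dense reconstruction.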