Abstract
With the continuous growth of the online shopping industry, determining how a particular garment would look on a given person is challenging. To overcome this, this project presents a web-based application that combines Machine Learning and Deep Learning techniques: DeepLabV3+ (to segment the user's body from the background and identify key regions such as the torso and arms), MediaPipe (for pose estimation, detecting body keypoints so that the clothing aligns correctly with the user's posture), Thin Plate Spline (TPS) warping (to deform the virtual clothing to fit the user's body shape and pose accurately), Neural Style Transfer (to apply texture details to the fitted garments), StyleGAN (for high-quality, realistic outputs), and real-time rendering and deployment. By combining these techniques, the project aims to ease users' online shopping experience by offering the convenience of offline shopping. This integration leverages these technologies to provide a seamless experience for users.

Introduction: This virtual clothes try-on web application lets users virtually try on the clothes they are planning to purchase. It enhances the online shopping experience and addresses several of its drawbacks. Users no longer have to roam from shop to shop to find their desired outfits, and they can see how an outfit would look on them in a hygienic manner: in offline stores, many people try on the same garments, which compromises hygiene.
Also, if a shop has a high rush of customers, one needs to stand in a queue at the billing counter to purchase clothing items. Sellers on online platforms also benefit, as their return rates decrease.

Objectives: The objective of this project is to create a virtual try-on system that improves the overall user experience of online shopping and fashion retail. It helps users understand precisely how a particular garment would look on them without physically trying it on. The main focus is on critical areas such as image segmentation, pose estimation, and garment warping, so as to provide a seamless experience. The research also improves the image synthesis pipeline to deliver realistic, high-quality results. Together with a user-friendly and interactive interface, the system provides an engaging experience for online consumers.

Methods: The virtual try-on system uses a combination of deep learning algorithms to produce realistic results. The pipeline comprises image segmentation, pose estimation, garment fitting and warping, texture mapping, and image synthesis. Image segmentation is performed with DeepLabV3+, which separates the person from the background and detects the body regions where clothing will be fitted. Pose estimation is then performed with MediaPipe, which identifies 33 keypoints on the human body; this structural representation of posture ensures the virtual clothing is properly aligned with the body. The third step, garment fitting and warping, uses the Thin Plate Spline (TPS) algorithm, which maps control points derived from pose estimation onto the clothing image.
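The TPS warping step described above can be sketched as follows. This is a minimal NumPy illustration of 2-D thin plate spline fitting and evaluation, not the project's actual implementation; the function names `tps_fit` and `tps_apply` are ours.

```python
import numpy as np

def tps_fit(src, dst):
    # Fit a 2-D thin plate spline that maps src control points onto dst.
    # src, dst: (n, 2) arrays; returns an (n + 3, 2) parameter matrix.
    n = src.shape[0]
    d = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(d > 0, d ** 2 * np.log(d), 0.0)  # kernel U(r) = r^2 log r
    P = np.hstack([np.ones((n, 1)), src])             # affine part [1, x, y]
    L = np.zeros((n + 3, n + 3))
    L[:n, :n] = K
    L[:n, n:] = P
    L[n:, :n] = P.T
    rhs = np.vstack([dst, np.zeros((3, 2))])
    return np.linalg.solve(L, rhs)  # n RBF weights + 3 affine coeffs per axis

def tps_apply(params, src, pts):
    # Apply the fitted spline to arbitrary 2-D points pts of shape (m, 2).
    n = src.shape[0]
    d = np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        U = np.where(d > 0, d ** 2 * np.log(d), 0.0)
    P = np.hstack([np.ones((pts.shape[0], 1)), pts])
    return U @ params[:n] + P @ params[n:]
```

In a try-on pipeline, `src` would be control points on the flat garment image and `dst` the matching body keypoints from pose estimation; applying the fitted spline to every garment pixel coordinate deforms the clothing onto the body. The spline interpolates the control points exactly while bending the rest of the image as smoothly as possible.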
This allows the garment to be stretched and deformed to the shape of the human body. Next, texture mapping is applied using Neural Style Transfer, which reproduces the texture, fabric, and material of the garment on the individual's body so that the result remains faithful to the original clothing item and looks more realistic.

Results: The virtual try-on system performed well on several evaluation criteria. DeepLabV3+ image segmentation achieved an accuracy, recall, F1-score, and Mean IoU of 1.0000, yielding accurate segmentation of clothing and limbs. Pose estimation recorded an accuracy of 1.0000, a PCK score of 0.91, and an MPJPE of 44.42 pixels, reflecting accurate landmark localisation. Thin Plate Spline (TPS) warping attained a Structural Similarity Index (SSIM) of 0.8329 and a Mean Squared Error (MSE) of 32.1695, verifying successful garment fitting. Texture mapping exhibited a style loss of 0.0013 and a perceptual loss of 0.2236, preserving fabric detail. Image synthesis, evaluated with the Inception Score (1.00 ± 0.00) and FID score (0.0069), produced photorealistic outcomes. These results confirm the system's ability to provide realistic and accurate virtual clothing visualization.

Conclusions: This research created a high-accuracy virtual try-on system based on deep learning and computer vision methods. The outcomes show strong performance in segmentation, pose estimation, garment fitting, texture mapping, and image synthesis. The system improves user experience through realistic clothing visualization and enhanced garment alignment. High accuracy in segmentation and pose estimation guarantees correct garment placement, while warping and texture mapping preserve a natural fabric look. Neural rendering methods further enhance the final output, yielding a smooth virtual try-on.
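For reference, the Mean IoU, PCK, and MPJPE figures reported in the results follow standard definitions; a minimal NumPy sketch (function names are illustrative, not the project's code):

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    # Mean intersection-over-union across classes present in pred or gt.
    ious = []
    for c in range(num_classes):
        p, g = pred == c, gt == c
        union = np.logical_or(p, g).sum()
        if union == 0:
            continue  # class absent from both masks; skip it
        ious.append(np.logical_and(p, g).sum() / union)
    return float(np.mean(ious))

def pck(pred_kpts, gt_kpts, threshold):
    # Percentage of Correct Keypoints: fraction of predictions within
    # `threshold` pixels of the ground-truth landmarks.
    dists = np.linalg.norm(pred_kpts - gt_kpts, axis=-1)
    return float((dists <= threshold).mean())

def mpjpe(pred_kpts, gt_kpts):
    # Mean Per-Joint Position Error, in pixels.
    return float(np.linalg.norm(pred_kpts - gt_kpts, axis=-1).mean())
```

Here `pred`/`gt` are integer class-label masks and `pred_kpts`/`gt_kpts` are (num_joints, 2) pixel-coordinate arrays; the PCK threshold is typically chosen relative to a body-scale reference such as torso size.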
The technology also contributes to lowering online fashion return rates and supports sustainable shopping behaviors. Real-time rendering and support for multi-layer garments are areas for future improvement that would allow greater flexibility. In conclusion, this work advances AI-powered fashion technologies that transform online purchasing and digital fashion experiences.