This paper studies Wasserstein Generative Adversarial Networks (WGANs) and their enhancements, with a particular focus on the gradient penalty. Generative Adversarial Networks (GANs), introduced by Goodfellow et al. in 2014, have revolutionized image generation, but their training is notoriously unstable. The WGAN was proposed to address these limitations; however, it relies on weight clipping to constrain the critic, which introduces its own issues, such as slow convergence and vanishing gradients, making training inefficient and unstable. WGAN with Gradient Penalty (WGAN-GP) was developed to overcome these problems: it replaces weight clipping with a gradient penalty that enforces the 1-Lipschitz constraint on the critic, yielding more stable gradients and reducing the risk of mode collapse. In this paper, the author implements both WGAN and WGAN-GP and evaluates them on the CIFAR-10 and MNIST datasets. The results show that WGAN-GP produces more stable outputs and converges more efficiently in the early stages of training, confirming the effectiveness of the gradient penalty when training on image datasets.
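To make the gradient penalty concrete, the sketch below shows how the penalty term is commonly computed in a PyTorch-style implementation: the critic is evaluated on random interpolations between real and generated samples, and deviations of the input-gradient norm from 1 are penalized. This is a minimal illustration, not the paper's own code; the names `gradient_penalty`, `critic`, `real`, and `fake` are assumed for exposition.

```python
import torch

def gradient_penalty(critic, real, fake, device="cpu"):
    """Illustrative WGAN-GP penalty: penalize ||grad D(x_hat)||_2 deviating from 1."""
    batch_size = real.size(0)

    # Random interpolation coefficients, one per sample
    eps = torch.rand(batch_size, 1, 1, 1, device=device)
    interpolated = eps * real + (1 - eps) * fake
    interpolated.requires_grad_(True)

    # Critic scores on the interpolated samples
    scores = critic(interpolated)

    # Gradients of the scores with respect to the interpolated inputs
    gradients = torch.autograd.grad(
        outputs=scores,
        inputs=interpolated,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,
        retain_graph=True,
    )[0]

    # Squared deviation of the per-sample gradient norm from 1 (1-Lipschitz constraint)
    gradients = gradients.view(batch_size, -1)
    penalty = ((gradients.norm(2, dim=1) - 1) ** 2).mean()
    return penalty
```

In practice this penalty is multiplied by a coefficient lambda (lambda = 10 in the original WGAN-GP formulation by Gulrajani et al.) and added to the critic's loss, in place of the weight clipping used by the original WGAN.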