WGANs: A stable alternative to traditional GANs || Wasserstein GAN
In this video, we'll explore the Wasserstein GAN with Gradient Penalty (WGAN-GP), which addresses the training instability of traditional GANs. Unlike traditional GANs, WGANs use the Wasserstein distance as the loss function to measure the difference between the real and generated data distributions. The gradient penalty enforces the Lipschitz constraint on the critic (WGAN's discriminator) by keeping its gradient norms close to 1, which prevents the gradients from exploding or vanishing. We'll implement WGAN-GP from scratch and train it on the anime faces dataset. Watch the video to learn how to build this type of GAN and improve its performance. A minimal sketch of the losses is included below the chapter list.

Link to dataset: https://rb.gy/iyolm
Link to code: https://github.com/henry32144/wgan-gp-tensorflow/blob/master/WGAN-GP-celeb64.ipynb
Instagram: https://www.instagram.com/developershutt/

And as always, thanks for watching ❤️

Chapters:
0:00 Intro
0:34 Wasserstein distance
1:15 Wasserstein as loss function
2:43 Gradient Penalty (Lipschitz continuity)
4:38 Code from scratch
11:45 Things to remember
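For reference, here's a minimal sketch of the WGAN-GP losses in TensorFlow (the same framework as the linked notebook). The names critic, generator, and the gp_weight default of 10 are illustrative assumptions, not copied from the notebook; see the video and the linked code for the full training loop.

import tensorflow as tf

def gradient_penalty(critic, real, fake):
    # Interpolate between real and fake samples: x_hat = eps*real + (1-eps)*fake.
    # Assumes 4-D image batches of shape [batch, height, width, channels].
    batch_size = tf.shape(real)[0]
    eps = tf.random.uniform([batch_size, 1, 1, 1], 0.0, 1.0)
    x_hat = eps * real + (1.0 - eps) * fake
    with tf.GradientTape() as tape:
        tape.watch(x_hat)
        scores = critic(x_hat, training=True)
    grads = tape.gradient(scores, x_hat)
    # Per-sample L2 norm of the critic's gradient at the interpolated points.
    norms = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=[1, 2, 3]) + 1e-12)
    # Penalize deviation from unit norm, the 1-Lipschitz target.
    return tf.reduce_mean(tf.square(norms - 1.0))

def critic_loss(critic, real, fake, gp_weight=10.0):
    # Wasserstein loss: the critic maximizes E[D(real)] - E[D(fake)],
    # so we minimize the negation, plus the weighted gradient penalty.
    wasserstein = tf.reduce_mean(critic(fake, training=True)) - tf.reduce_mean(critic(real, training=True))
    return wasserstein + gp_weight * gradient_penalty(critic, real, fake)

def generator_loss(critic, fake):
    # The generator tries to maximize the critic's score on its samples.
    return -tf.reduce_mean(critic(fake, training=True))

Note there is no sigmoid or log in these losses: the critic outputs an unbounded score rather than a probability, which is what makes the Wasserstein objective well-behaved even when the two distributions barely overlap.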