WGAN implementation from scratch (with gradient penalty)

In this video we implement WGAN and WGAN-GP in PyTorch. Both of these improvements modify the GAN loss function and focus specifically on improving the stability of training (minimal sketches of both critic updates follow the outline below).

Resources and papers:
https://www.alexirpan.com/2017/02/22/wasserstein-gan.html
https://arxiv.org/abs/1701.07875
https://arxiv.org/abs/1704.00028

❤️ Support the channel ❤️
https://www.youtube.com/channel/UCkzW5JSFwvKRjXABI-UTAkQ/join

Paid Courses I recommend for learning (affiliate links, no extra cost for you):
⭐ Machine Learning Specialization https://bit.ly/3hjTBBt
⭐ Deep Learning Specialization https://bit.ly/3YcUkoI
📘 MLOps Specialization http://bit.ly/3wibaWy
📘 GAN Specialization https://bit.ly/3FmnZDl
📘 NLP Specialization http://bit.ly/3GXoQuP

✨ Free Resources that are great:
NLP: https://web.stanford.edu/class/cs224n/
CV: http://cs231n.stanford.edu/
Deployment: https://fullstackdeeplearning.com/
FastAI: https://www.fast.ai/

💻 My Deep Learning Setup and Recording Setup:
https://www.amazon.com/shop/aladdinpersson

GitHub Repository:
https://github.com/aladdinpersson/Machine-Learning-Collection

✅ One-Time Donations:
Paypal: https://bit.ly/3buoRYH

▶️ You Can Connect with me on:
Twitter - https://twitter.com/aladdinpersson
LinkedIn - https://www.linkedin.com/in/aladdin-persson-a95384153/
Github - https://github.com/aladdinpersson

OUTLINE:
0:00 - Introduction
0:27 - Understanding WGAN
6:53 - WGAN Implementation details
9:15 - Coding WGAN
15:50 - Understanding WGAN-GP
18:48 - Coding WGAN-GP
25:29 - Ending
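The key WGAN idea covered in the video is that the critic approximates the Wasserstein distance: it outputs an unbounded score rather than a probability, and its weights are clipped to enforce the Lipschitz constraint (Arjovsky et al., 2017). Below is a minimal sketch of one critic update; the function name, defaults, and the assumption that `critic`, `gen`, and `opt_critic` are defined elsewhere are illustrative, not the video's exact code (see the GitHub repository for that).

```python
import torch

# Minimal sketch of one WGAN critic step with weight clipping.
# Assumes `critic`, `gen`, and `opt_critic` exist; names are illustrative.
def train_critic_wgan(critic, gen, opt_critic, real, z_dim=100,
                      weight_clip=0.01, device="cpu"):
    noise = torch.randn(real.size(0), z_dim, 1, 1, device=device)
    fake = gen(noise).detach()  # no generator gradients in the critic step
    # The critic maximizes E[critic(real)] - E[critic(fake)],
    # so we minimize the negation.
    loss_critic = -(critic(real).mean() - critic(fake).mean())
    opt_critic.zero_grad()
    loss_critic.backward()
    opt_critic.step()
    # Clip all critic weights to enforce the Lipschitz constraint.
    for p in critic.parameters():
        p.data.clamp_(-weight_clip, weight_clip)
    return loss_critic.item()
```

In the WGAN paper this step is run several times per generator update (typically 5), and RMSprop is the recommended optimizer.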
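WGAN-GP (Gulrajani et al., 2017) replaces weight clipping with a gradient penalty: the norm of the critic's gradient, evaluated at points interpolated between real and fake samples, is pushed toward 1. A sketch of that penalty term, assuming image-shaped inputs of shape (N, C, H, W); the helper name is again illustrative:

```python
import torch

# Sketch of the WGAN-GP gradient penalty.
def gradient_penalty(critic, real, fake, device="cpu"):
    n = real.size(0)
    eps = torch.rand(n, 1, 1, 1, device=device)  # one epsilon per sample
    # Detach so the interpolation is a leaf we can differentiate w.r.t.
    interp = (eps * real + (1 - eps) * fake).detach().requires_grad_(True)
    scores = critic(interp)
    # Gradient of the critic's scores w.r.t. the interpolated images.
    grads = torch.autograd.grad(
        outputs=scores,
        inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,  # the penalty itself must be differentiable
    )[0].view(n, -1)
    # Penalize deviation of the gradient L2 norm from 1.
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()
```

The penalty is added to the critic loss, e.g. `loss_critic = -(critic(real).mean() - critic(fake).mean()) + lambda_gp * gradient_penalty(critic, real, fake)`, with `lambda_gp = 10` in the paper; weight clipping is dropped, and the paper also recommends avoiding BatchNorm in the critic.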