The Wasserstein GAN paper made enough of a splash that it’s worth considering separately from the other GAN stuff. Is it even “adversarial”? That looks marginal to me.
Today I’m leading a reading group on the theme: what even is the Wasserstein GAN?
GANs are famous for generating images, but I am interested in their use in simulating from difficult distributions in general.
I won’t summarize WGANs better than the following handy sources, so they form the basis of the tutorial until such time as I find myself actually using this stuff in my own work.
- Alexi Pan reads the WGAN paper.
- Mindcodec discusses Wasserstein-type metrics, i.e. optimal transport ones, with an eye to WGAN.
- Here is a deep learning course that culminates in WGAN, with some involvement by the authors of the WGAN paper (ArCB17).
- Vincent Hermann presents the Kantorovich-Rubinstein duality trick intuitively.
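To make the metric in question concrete: the Kantorovich–Rubinstein duality rewrites the 1-Wasserstein distance as a supremum of mean discrepancies over 1-Lipschitz critics, which is what the WGAN critic approximates. In one dimension there is also a closed form — for two empirical distributions with equal sample counts, it is the mean absolute difference of the sorted samples. A minimal sketch (the function name `w1_empirical` is my own, not from any of the sources above):

```python
def w1_empirical(xs, ys):
    """1-Wasserstein distance between two equal-size 1-D empirical samples.

    In 1-D the optimal transport plan simply matches sorted samples,
    so W1 reduces to the mean absolute difference after sorting.
    """
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

# Shifting a distribution by a constant c moves it a Wasserstein distance of c:
print(w1_empirical([0.0, 1.0, 2.0], [3.0, 4.0, 5.0]))  # → 3.0
```

This shift behaviour is exactly the property ArCB17 exploit: unlike the Jensen–Shannon divergence, which saturates when supports are disjoint, the Wasserstein distance still reports *how far apart* the distributions are, giving usable gradients.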
For more ongoing notes, see my WGAN page.
- GPMX14: Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, … Yoshua Bengio (2014) Generative Adversarial Nets. In Advances in Neural Information Processing Systems 27 (pp. 2672–2680). Cambridge, MA, USA: Curran Associates, Inc.
- GAAD17: Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, Aaron Courville (2017) Improved Training of Wasserstein GANs. arXiv:1704.00028 [cs, stat].
- PaZe19: Victor M. Panaretos, Yoav Zemel (2019) Statistical Aspects of Wasserstein Distances. Annual Review of Statistics and Its Application, 6(1), 405–431. DOI
- ArCB17: Martin Arjovsky, Soumith Chintala, Léon Bottou (2017) Wasserstein Generative Adversarial Networks. In International Conference on Machine Learning (pp. 214–223).