The Living Thing / Notebooks :

Wasserstein GANs

Usefulness: 🔧
Novelty: 💡
Uncertainty: 🤪 🤪 🤪
Incompleteness: 🚧 🚧 🚧

Spun off from the reading group.

The Wasserstein GAN paper (Arjovsky, Chintala, and Bottou 2017) made enough of a splash that it’s worth considering separately from the other GAN material. Is it even “adversarial”? That looks marginal to me.
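For orientation, the quantity that gives the method its name: via Kantorovich–Rubinstein duality, the 1-Wasserstein distance between the data distribution and the generator’s distribution can be written as a supremum over 1-Lipschitz “critic” functions, which is what the WGAN critic network approximates:

```latex
W(\mathbb{P}_r, \mathbb{P}_\theta)
  = \sup_{\|f\|_{L} \le 1}
    \; \mathbb{E}_{x \sim \mathbb{P}_r}[f(x)]
  - \mathbb{E}_{x \sim \mathbb{P}_\theta}[f(x)]
```

In practice the supremum is approximated by a neural critic constrained to be (roughly) Lipschitz, by weight clipping in the original paper or by a gradient penalty in Gulrajani et al. (2017).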

Today I’m leading a reading group on the theme: what even is the Wasserstein GAN?

GANs are famous for generating images, but I am interested in their use in simulating from difficult distributions in general.
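As a toy illustration of the distance itself (not of GAN training): in one dimension, the 1-Wasserstein distance between two equal-size empirical samples reduces to the mean absolute difference of the sorted samples, because the monotone coupling is the optimal transport plan. A minimal stdlib-only sketch (function name is mine, not from any library):

```python
import random

def wasserstein1_empirical(xs, ys):
    """Empirical 1-Wasserstein distance between two equal-size 1-D samples.

    In one dimension the optimal transport plan is monotone, so W1 is just
    the mean absolute difference between the sorted samples.
    """
    assert len(xs) == len(ys), "samples must be the same size"
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)

# Sanity check: shifting a sample by a constant c moves it W1-distance c away.
xs = [random.gauss(0.0, 1.0) for _ in range(1000)]
ys = [x + 2.0 for x in xs]
print(wasserstein1_empirical(xs, ys))  # ≈ 2.0 (up to float error)
```

This is the metric the WGAN critic estimates in high dimensions, where no such closed form exists and the dual formulation becomes necessary.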

I will not summarize WGANs better than the following handy sources, so they form the basis of the tutorial until such time as I find myself actually using this material in my own work.

For more ongoing notes, see my generative adversarial learning page.


Arjovsky, Martin, Soumith Chintala, and Léon Bottou. 2017. “Wasserstein Generative Adversarial Networks.” In International Conference on Machine Learning, 214–23.

Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. “Generative Adversarial Nets.” In Advances in Neural Information Processing Systems 27, edited by Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, 2672–80. NIPS’14. Cambridge, MA, USA: Curran Associates, Inc.

Gulrajani, Ishaan, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. 2017. “Improved Training of Wasserstein GANs,” March.

Panaretos, Victor M., and Yoav Zemel. 2019. “Statistical Aspects of Wasserstein Distances.” Annual Review of Statistics and Its Application 6 (1): 405–31.