The Wasserstein GAN paper made enough of a splash that it’s worth considering separately from the other GAN stuff. Whether it is even “adversarial” in the usual sense looks marginal to me.
Today I’m leading a reading group on the theme: what even is the Wasserstein GAN?
GANs are famous for generating images, but I am interested in their use in simulating from difficult distributions in general.
I will not summarize WGANs better than the following handy sources, so they form the basis of the tutorial until such time as I find myself actually using this stuff in my own work.
- Alex Irpan reads the WGAN paper.
- Mindcodec discusses Wasserstein-type metrics, i.e. optimal transport ones, with an eye to WGAN.
- Here is a deep learning course that culminates in WGAN, with some involvement by the authors of the WGAN paper (ArCB17).
- Vincent Hermann presents the Kantorovich-Rubinstein duality trick intuitively.
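
For reference, the duality that last link walks through: the Wasserstein-1 distance between distributions $\mathbb{P}$ and $\mathbb{Q}$ admits the dual form

$$
W_1(\mathbb{P}, \mathbb{Q}) = \sup_{\|f\|_L \le 1} \; \mathbb{E}_{x \sim \mathbb{P}}[f(x)] - \mathbb{E}_{x \sim \mathbb{Q}}[f(x)],
$$

where the supremum is over 1-Lipschitz functions $f$. This is what makes WGAN trainable: the critic network plays the role of $f$, and the Lipschitz constraint is enforced (crudely, in the original paper) by weight clipping.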
For more ongoing notes, see my WGAN page.
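
To make the critic objective concrete, here is a toy sketch of my own (not from the paper, and stripped of neural networks entirely): a linear critic with weight clipping, estimating $W_1$ between two 1-d Gaussians. All the variable names here are my inventions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-d setup: "real" data ~ N(2, 1), a frozen "generator" ~ N(0, 1).
real = rng.normal(2.0, 1.0, size=10_000)
fake = rng.normal(0.0, 1.0, size=10_000)

# Linear critic f(x) = w * x + b, kept 1-Lipschitz by clipping w to
# [-1, 1] -- the weight-clipping trick from the original WGAN paper.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(100):
    # Gradient ascent on the critic objective E_real[f] - E_fake[f];
    # for a linear f its derivative in w is just the mean difference.
    grad_w = real.mean() - fake.mean()
    w = np.clip(w + lr * grad_w, -1.0, 1.0)

# At the optimum the objective approximates W1, which for these two
# Gaussians is the distance between the means, i.e. roughly 2.
est_w1 = (w * real.mean() + b) - (w * fake.mean() + b)
print(est_w1)
```

In a real WGAN the critic is a neural network trained by SGD and the generator is updated in alternation, but the shape of the objective is exactly this: push the critic's expected value up on real samples, down on generated ones, under a Lipschitz constraint.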