Paper Reading: The Numerics of GANs


Note: this post is only meant as personal notes and interpretation. It is incomplete and may mislead readers.

Abstract

Background

Divergence Measures and GANs

The paper revisits GANs from a divergence-minimization point of view:

$$\min_\theta D(p_0, q_\theta)$$

$$\min_\theta \max_{f \in \mathcal{F}} \; \mathrm{E}_{x \sim q_\theta} \left[ g_1 \left( f(x) \right) \right] - \mathrm{E}_{x \sim p_0} \left[ g_2 \left( f(x) \right) \right]$$
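A minimal Monte Carlo sketch of the inner objective. The linear discriminator `f` and the log-sigmoid choice for `g1` and `g2` are my own illustrative assumptions, not choices prescribed by the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy discriminator: f(x) = w*x + b
def f(x, w=1.5, b=0.0):
    return w * x + b

# Assumed measuring functions: g1(t) = g2(t) = log(sigmoid(t)),
# written stably via logaddexp.
g1 = lambda t: -np.logaddexp(0.0, -t)
g2 = lambda t: -np.logaddexp(0.0, -t)

x_q = rng.normal(loc=2.0, scale=1.0, size=10_000)  # samples from q_theta
x_p = rng.normal(loc=0.0, scale=1.0, size=10_000)  # samples from p_0

# Monte Carlo estimate of E_{x~q_theta}[g1(f(x))] - E_{x~p_0}[g2(f(x))]
objective = g1(f(x_q)).mean() - g2(f(x_p)).mean()
print(objective)
```

With these toy distributions the estimate is positive, since the discriminator separates the two sample sets.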

Smooth Two-Player Games

Simultaneous Gradient Ascent

Convergence Theory
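The convergence analysis hinges on the eigenvalues of the Jacobian $v'(\bar{x})$ at a fixed point: local convergence of the fixed-point iteration needs eigenvalues with negative real part. A quick numerical check on the toy bilinear field above (my example) shows why it fails there:

```python
import numpy as np

# Jacobian of the toy field v(x, y) = (y, -x) at any point:
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

eig = np.linalg.eigvals(J)
print(eig)  # purely imaginary: +1j and -1j

# All real parts are zero, so the iterates rotate around the
# fixed point rather than being attracted to it.
```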

Consensus Optimization

$$\begin{aligned} L(x) &= \frac12 \left\Vert v(x) \right\Vert^2 \\ w(x) &= v(x) - \gamma \nabla L(x) \end{aligned}$$
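A sketch of consensus optimization on the same toy bilinear field used above (my example, assumed for illustration): following the regularized field $w(x) = v(x) - \gamma \nabla L(x)$ pulls the iterates toward the equilibrium that plain simultaneous gradient ascent circles around.

```python
import numpy as np

# Toy bilinear field: v(x, y) = (y, -x).
def v(z):
    x, y = z
    return np.array([y, -x])

def grad_L(z):
    # For this particular v, L(z) = 0.5*||v(z)||^2 = 0.5*||z||^2,
    # hence grad L(z) = z.
    return z

gamma, h = 0.3, 0.2
z = np.array([1.0, 1.0])
for _ in range(200):
    w = v(z) - gamma * grad_L(z)   # consensus-regularized field
    z = z + h * w

print(np.linalg.norm(z))           # shrinks toward the equilibrium (0, 0)
```

The regularization shifts the Jacobian's eigenvalues from $\pm i$ to $-\gamma \pm i$, giving them the negative real part that local convergence requires.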



Author: Texot