ABSTRACT
Recently, attention networks that find similarities across non-local regions of a 2D image have shown outstanding improvements in multi-class image generation with GANs. However, such training is frequently unstable and sometimes falls into mode collapse. We propose a cross-sample similarity loss that penalizes similar features among fake samples when those features are rarely observed in real samples. The proposed method improves the FID score compared to baseline methods on CelebA and LSUN, and reduces mode collapse on CIFAR-10 [Krizhevsky 2009].
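The abstract does not give the exact formulation of the cross-sample similarity loss, so the following PyTorch sketch is only an illustrative assumption: fake and real feature batches (e.g., taken from an intermediate discriminator layer) are compared, and pairs of fake samples that are mutually similar but poorly matched by any real sample are penalized. The function name, the margin value, the loss weight, and the choice of cosine similarity are all hypothetical, not taken from the paper.

```python
import torch
import torch.nn.functional as F


def cross_sample_similarity_loss(fake_feats: torch.Tensor,
                                 real_feats: torch.Tensor,
                                 margin: float = 0.5) -> torch.Tensor:
    """Hypothetical sketch: penalize fake features that are similar to each
    other while being rarely observed among real features in the batch."""
    # Normalize so inner products become cosine similarities.
    fake = F.normalize(fake_feats, dim=1)   # (N, D)
    real = F.normalize(real_feats, dim=1)   # (M, D)

    # Cosine similarity between every pair of fake samples.
    fake_sim = fake @ fake.t()              # (N, N)

    # For each fake sample, its best cosine match among the real batch.
    best_real = (fake @ real.t()).max(dim=1).values   # (N,)

    # "Rarity" weight: large when a fake feature has no close real match.
    rarity = torch.clamp(margin - best_real, min=0.0)          # (N,)
    pair_rarity = rarity.unsqueeze(1) * rarity.unsqueeze(0)    # (N, N)

    # Penalize fake pairs that are similar to each other while both members
    # are "rare", ignoring the diagonal (self-similarity).
    n = fake_sim.size(0)
    off_diag = ~torch.eye(n, dtype=torch.bool, device=fake_sim.device)
    return (fake_sim.clamp(min=0.0) * pair_rarity)[off_diag].mean()


# Illustrative usage: in practice the loss would be added to the generator
# objective with a small weight (the 0.1 here is an assumption).
fake_feats = torch.randn(8, 128)
real_feats = torch.randn(8, 128)
css_loss = cross_sample_similarity_loss(fake_feats, real_feats)
# g_loss = adversarial_loss + 0.1 * css_loss
```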
REFERENCES
- Andrew Brock, Jeff Donahue, and Karen Simonyan. 2018. Large Scale GAN Training for High Fidelity Natural Image Synthesis. In International Conference on Learning Representations.
- Alex Krizhevsky. 2009. Learning Multiple Layers of Features from Tiny Images. Technical Report, Canadian Institute for Advanced Research.
- Aaron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. 2017. Neural Discrete Representation Learning. arXiv preprint arXiv:1711.00937.
- Edgar Schonfeld, Bernt Schiele, and Anna Khoreva. 2020. A U-Net Based Discriminator for Generative Adversarial Networks. In CVPR 2020.
- Han Zhang, Ian Goodfellow, Dimitris Metaxas, and Augustus Odena. 2019. Self-Attention Generative Adversarial Networks. In International Conference on Machine Learning. PMLR, 7354–7363.