Inclusive GAN:
Improving Data and Minority Coverage in Generative Models

ECCV 2020


Ning Yu (1,2)      Ke Li (3,5,6)      Peng Zhou (1)      Jitendra Malik (3)      Larry Davis (1)      Mario Fritz (4)
(1) University of Maryland      (2) Max Planck Institute for Informatics      (3) University of California, Berkeley
(4) CISPA Helmholtz Center for Information Security      (5) Institute for Advanced Study      (6) Google

Abstract


Generative Adversarial Networks (GANs) have brought about rapid progress towards generating photorealistic images. Yet how equitably they allocate modeling capacity among subgroups has received less attention; left uncontrolled, this can introduce biases against underrepresented minorities. In this work, we first formalize the problem of minority inclusion as one of data coverage, and then propose to improve data coverage by harmonizing adversarial training with reconstructive generation. Experiments show that our method outperforms existing state-of-the-art methods in terms of data coverage on both seen and unseen data. We also develop an extension that allows explicit control over which minority subgroups the model must cover, and validate its effectiveness with little compromise in overall performance on the entire dataset.
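To make the core idea concrete, below is a minimal PyTorch-style sketch of one generator update that combines an adversarial term with an IMLE-style reconstruction term. This is an illustrative assumption, not the released implementation: the toy networks, the non-saturating GAN loss, and the plain L2 distance (standing in for the perceptual metric the paper uses) are all placeholders.

# Minimal sketch of harmonizing adversarial and reconstructive training.
# NOT the authors' implementation: the tiny networks, the non-saturating
# GAN loss, and the L2 distance (standing in for LPIPS) are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 3 * 32 * 32))
D = nn.Sequential(nn.Linear(3 * 32 * 32, 128), nn.ReLU(), nn.Linear(128, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)

def generator_step(real, m=8, lam=1.0):
    """One generator update: adversarial term + IMLE-style reconstruction."""
    n = real.size(0)
    # Adversarial term (non-saturating GAN loss).
    z = torch.randn(n, 64)
    adv = F.softplus(-D(G(z))).mean()
    # Reconstruction term: draw m candidates per real image and pull the
    # nearest one (here in L2; the paper uses a perceptual metric) closer.
    z_cand = torch.randn(n * m, 64)
    cand = G(z_cand).view(n, m, -1)
    dists = ((cand - real.view(n, 1, -1)) ** 2).mean(dim=2)  # (n, m)
    rec = dists.min(dim=1).values.mean()
    loss = adv + lam * rec
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()
    return loss.item()

real = torch.rand(4, 3 * 32 * 32) * 2 - 1  # toy "real" batch in [-1, 1]
print(generator_step(real))

The design point this sketch illustrates is that every real image is guaranteed a nearby generated sample: the reconstruction term pulls the closest of the m candidates toward each real example, so no region of the data, minority subgroups included, can be silently dropped by the adversarial game.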

Demos

Optimization for image reconstruction

Minority reconstruction

Interpolation from majority to minority

Eyeglasses

Bald

Narrow_Eyes&Heavy_Makeup

Bags_Under_Eyes&High_Cheekbones&Attractive

[Each image grid above shows, left to right: Majority real | StyleGAN2 | Ours general | Ours minority | Minority real]
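As a rough illustration of the two demos above, reconstruction by latent optimization followed by majority-to-minority interpolation, here is a minimal self-contained sketch. The linear stand-in generator and the L2 objective (in place of a perceptual metric) are illustrative assumptions, not the project's released code.

# Minimal sketch: (1) reconstruct a target image by optimizing a latent
# code, (2) interpolate between two recovered codes. The linear "generator"
# and the L2 objective (standing in for LPIPS) are illustrative assumptions.
import torch

G = torch.nn.Linear(64, 3 * 32 * 32)  # stand-in for a trained generator

def reconstruct(target, steps=200, lr=0.1):
    """Recover a latent code whose decoding is close to `target`."""
    z = torch.randn(1, 64, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        loss = ((G(z) - target) ** 2).mean()  # paper uses a perceptual metric
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()

def interpolate(z_a, z_b, steps=8):
    """Decode evenly spaced points on the segment between two latent codes."""
    ts = torch.linspace(0.0, 1.0, steps).view(-1, 1)
    return G((1 - ts) * z_a + ts * z_b)

z_majority = reconstruct(torch.rand(1, 3 * 32 * 32) * 2 - 1)
z_minority = reconstruct(torch.rand(1, 3 * 32 * 32) * 2 - 1)
frames = interpolate(z_majority, z_minority)
print(frames.shape)  # (8, 3072)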

Video


Paper

Code

Press coverage


thejiangmen Academia News

Citation

@inproceedings{yu2020inclusive,
  author={Yu, Ning and Li, Ke and Zhou, Peng and Malik, Jitendra and Davis, Larry and Fritz, Mario},
  title={Inclusive GAN: Improving Data and Minority Coverage in Generative Models},
  booktitle={European Conference on Computer Vision (ECCV)},
  year={2020},
}

Acknowledgement


We thank Richard Zhang and Dingfan Chen for constructive advice. This project was partially funded by the DARPA MediFor program under cooperative agreement FA87501620191 and by ONR MURI N00014-14-1-0671. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA or ONR.

Related Work


T. Karras, S. Laine, M. Aittala, J. Hellsten, J. Lehtinen, T. Aila. Analyzing and Improving the Image Quality of StyleGAN. CVPR 2020.
Comment: A state-of-the-art GAN baseline method that is used as our generative backbone.
K. Li, J. Malik. Implicit Maximum Likelihood Estimation. arXiv 2018.
Comment: A reconstruction-based baseline method whose objective forms the basis of our reconstruction term.
R. Zhang, P. Isola, A. Efros, E. Shechtman, O. Wang. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. CVPR 2018.
Comment: A deep image similarity metric used to formulate our reconstruction term and to harmonize reconstruction with adversarial training; see the usage sketch after this list.
A. Larsen, S. Sønderby, H. Larochelle, O. Winther. Autoencoding beyond pixels using a learned similarity metric. ICML 2016.
Comment: A GAN baseline method that uses image reconstruction to mitigate mode collapse in GAN training.
A. Srivastava, L. Valkov, C. Russell, M. Gutmann, C. Sutton. VEEGAN: Reducing Mode Collapse in GANs using Implicit Variational Learning. NeurIPS 2017.
Comment: A GAN baseline method that uses latent reconstruction to mitigate mode collapse in GAN training.
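Since our reconstruction term is built on the LPIPS metric of Zhang et al. above, a minimal usage sketch may help. It assumes the third-party lpips package (pip install lpips) rather than anything shipped with this project.

# Minimal sketch of measuring image distance with LPIPS (Zhang et al. 2018),
# the perceptual metric the reconstruction term is built on.
# Assumes the third-party `lpips` package (pip install lpips).
import torch
import lpips

loss_fn = lpips.LPIPS(net='vgg')          # VGG-based perceptual distance
img0 = torch.rand(1, 3, 64, 64) * 2 - 1   # inputs expected in [-1, 1], NCHW
img1 = torch.rand(1, 3, 64, 64) * 2 - 1
d = loss_fn(img0, img1)                   # one distance per image pair
print(d.item())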