FEditNet: Few-shot Editing of Latent Semantics in GAN Spaces

Published in AAAI, 2023

Abstract: Generative Adversarial Networks (GANs) have demonstrated a powerful capability to synthesize high-resolution images, and great efforts have been made to interpret the semantics in their latent spaces. However, existing works still have the following limitations: (1) most rely on either pretrained attribute predictors or large-scale labeled datasets, which are difficult to collect in many cases, and (2) others are suitable only for restricted settings, such as interpreting human facial images using prior facial semantics. In this paper, we propose a GAN-based method called FEditNet, which aims to discover latent semantics from very few labeled samples, without any pretrained predictors or prior knowledge. Specifically, we reuse the knowledge of the pretrained GAN to avoid overfitting during the few-shot training of FEditNet. Moreover, our layer-wise objectives, which take content consistency into account, also ensure disentanglement between attributes. Qualitative and quantitative results demonstrate that our method outperforms state-of-the-art methods on various datasets, including CelebA, FFHQ, and LSUN.
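The core mechanism the abstract describes, editing an attribute by moving a latent code along a discovered direction, can be illustrated with a short sketch. The names below (`generator`, `mapping_network`, the direction `d`, and the consistency helper) are hypothetical placeholders rather than the paper's actual API; this is a minimal illustration of latent-direction editing and a generic layer-wise content-consistency term, not FEditNet's implementation.

```python
import torch

def edit_latent(w: torch.Tensor, d: torch.Tensor, alpha: float) -> torch.Tensor:
    """Shift latent code w along a normalized semantic direction d by strength alpha."""
    return w + alpha * d / d.norm()

def content_consistency_loss(feats_orig, feats_edit):
    """Generic layer-wise consistency: penalize feature drift between the
    original and edited images at each generator layer (an assumption here;
    the paper's exact objective may differ)."""
    return sum(torch.mean((f_o - f_e) ** 2)
               for f_o, f_e in zip(feats_orig, feats_edit))

# Hypothetical usage with a StyleGAN-like generator:
# w = mapping_network(torch.randn(1, 512))        # latent code in W space
# img_orig, feats_orig = generator(w, return_features=True)
# w_edit = edit_latent(w, d, alpha=3.0)
# img_edit, feats_edit = generator(w_edit, return_features=True)
# loss = content_consistency_loss(feats_orig, feats_edit)
```

Constraining all layers, rather than only the final image, is one plausible way a layer-wise objective can keep unrelated content fixed while a single attribute changes.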

Download paper here

Recommended citation: Mengfei Xia, Yezhi Shu, Yuji Wang, Yu-Kun Lai, Qiang Li, Pengfei Wan, Zhongyuan Wang, Yong-Jin Liu*. FEditNet: Few-shot Editing of Latent Semantics in GAN Spaces. AAAI 2023.