Wednesday, October 27, 2021 • 3:30pm - 3:45pm CDT
MaGNET: Uniform Sampling from Deep Generative Network Manifolds without Retraining - Auditorium

Deep Generative Networks (DGNs) are employed extensively in Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and their variants to approximate the manifold structure and distribution of a training dataset. However, the samples used to train a DGN are often collected based on preferences, costs, or convenience, so that they favor certain modes (cf. the large fraction of smiling faces in the CelebA dataset or of dark-haired individuals in FFHQ). These biases are reproduced in any data sampled from the trained DGN, with far-reaching implications for fairness, data augmentation, anomaly detection, domain adaptation, and beyond. In response, we develop a differential-geometry-based technique that, given a trained DGN, adapts its generative process so that the distribution on the data-generating manifold is uniform. We prove theoretically and validate experimentally that our technique produces a uniform distribution on the manifold regardless of the training-set distribution.
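The abstract does not spell out the algorithm, but one generic way to realize manifold-uniform sampling from a trained generator (a minimal sketch, not necessarily the authors' MaGNET procedure) is importance resampling of latent draws by the local volume element sqrt(det(JᵀJ)) of the generator's Jacobian, divided by the latent density. All names below (`generator`, `jacobian`, the toy surface) are illustrative stand-ins, and the result is uniform only over the region covered by the latent pool:

```python
import numpy as np

def generator(z):
    # Toy nonlinear "DGN" standing in for a trained network:
    # maps 2-D latents onto a curved 2-D surface in R^3.
    x, y = z
    return np.array([x, y, np.sin(x) * np.cos(y)])

def jacobian(g, z, eps=1e-5):
    # Central-difference Jacobian of g at z, shape (output_dim, latent_dim).
    d = len(g(z))
    J = np.zeros((d, len(z)))
    for i in range(len(z)):
        dz = np.zeros_like(z)
        dz[i] = eps
        J[:, i] = (g(z + dz) - g(z - dz)) / (2 * eps)
    return J

def volume_element(g, z):
    # sqrt(det(J^T J)): how much g locally stretches latent volume
    # onto the manifold at z.
    J = jacobian(g, z)
    return np.sqrt(np.linalg.det(J.T @ J))

def gaussian_pdf(z):
    # Standard-normal latent density N(0, I) in 2-D.
    return np.exp(-0.5 * z @ z) / (2 * np.pi)

rng = np.random.default_rng(0)
pool = rng.standard_normal((5000, 2))  # latent draws from N(0, I)

# Importance weight vol(z) / p(z): resampling the pool with these
# weights yields latents distributed like the volume element, whose
# pushforward through g is (approximately) uniform on the manifold.
w = np.array([volume_element(generator, z) / gaussian_pdf(z) for z in pool])
probs = w / w.sum()
idx = rng.choice(len(pool), size=1000, p=probs)
uniform_samples = np.array([generator(z) for z in pool[idx]])
```

The key design point is that the pushforward density on the manifold of a latent density q(z) is q(z)/vol(z); resampling so that q(z) ∝ vol(z) therefore flattens it, whatever the training-set distribution was.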

Authors: Ahmed Imtiaz Humayun, Randall Balestriero, and Richard Baraniuk


Ahmed Imtiaz Humayun

Rice University

