Towards Explainable Generative Adversarial Networks

Date

2022-05-09

Authors

Yu, Xiaozhuo

Advisor

Karray, Fakhri

Publisher

University of Waterloo

Abstract

As Generative Adversarial Networks (GANs) become increasingly popular for synthetic data generation, the demand for human-interpretable explanations of their behaviour has also grown, and time series generation is no exception to this trend. In this work, we not only tackle these two open challenges but also provide a comparison of GAN usage for data augmentation. For the first challenge, we demonstrate that while explainability frameworks can provide insights into the Discriminator module, the explanations they produce are not sufficient on their own. To provide deeper insight, we visualize and analyze the Discriminator to explain why object classes can be omitted, resulting in mode dropping or mode collapse. We also introduce a new "Discriminative Score" for each object and show that the distribution of object classes is correlated with this score. Finally, we perform an experiment to determine whether missing details are a result of the architecture or of the dataset. In the case of the conditional GAN, we discover that the embedding space can reveal human-interpretable semantics that can be manipulated along Principal Component Analysis directions to finely control the generated sample. For time series generation, we propose two novel loss functions, sDTW-p and sDTW-m, based on Soft Dynamic Time Warping, which improve the generated time series without modifications to the existing architecture. We also present the first evaluation of generated samples across different sequence lengths, and show empirically that leveraging our loss functions can lead to a 9% improvement according to our metric. Lastly, our findings on data augmentation reveal that traditional methods for convolutional classifiers can be used to improve the training and usage of GANs: normalization with a custom mean and standard deviation improves the Fréchet Inception Distance of the generated samples, while having the GAN generate augmented versions of the samples improves the base classifier more than applying data augmentation to the generated images directly.
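
As a rough illustration of the PCA-direction idea mentioned in the abstract, the sketch below fits PCA on a collection of conditional-GAN embedding vectors and shifts a single embedding along one principal direction before it would be fed back to the generator. All names here (`find_semantic_directions`, `shift_along_direction`, the placeholder embeddings) are hypothetical and do not come from the thesis; this is a minimal sketch of the general technique under assumed shapes, not the exact procedure used in the work.

```python
import numpy as np
from sklearn.decomposition import PCA


def find_semantic_directions(embeddings: np.ndarray, n_components: int = 10) -> PCA:
    """Fit PCA on embedding vectors of shape (num_samples, embedding_dim).

    In a conditional GAN these could be condition/class embeddings collected
    from a trained model; here they are only placeholders.
    """
    pca = PCA(n_components=n_components)
    pca.fit(embeddings)
    return pca


def shift_along_direction(embedding: np.ndarray, pca: PCA,
                          component: int, strength: float) -> np.ndarray:
    """Move one embedding along a single principal direction.

    Larger |strength| pushes the embedding further along that direction;
    if the direction encodes a human-interpretable semantic, the sample
    generated from the shifted embedding should change accordingly.
    """
    direction = pca.components_[component]  # unit-norm principal direction
    return embedding + strength * direction


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in for embeddings harvested from a trained conditional GAN.
    embeddings = rng.normal(size=(500, 128))
    pca = find_semantic_directions(embeddings)
    edited = shift_along_direction(embeddings[0], pca, component=0, strength=2.0)
    # `edited` would then replace the original embedding when calling the
    # generator, e.g. generator(noise, edited).
    print(edited.shape)
```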
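The sDTW-p and sDTW-m losses themselves are defined in the thesis; as background, the snippet below is a small NumPy sketch of the standard Soft Dynamic Time Warping quantity that such losses build on, computed here between a real and a generated sequence. The dynamic-programming recursion and the smoothing parameter `gamma` follow the usual soft-min formulation; everything in this sketch is an illustrative assumption rather than the thesis's implementation.

```python
import numpy as np


def soft_dtw(x: np.ndarray, y: np.ndarray, gamma: float = 1.0) -> float:
    """Standard soft-DTW value between sequences x (n, d) and y (m, d).

    Uses the soft-min dynamic-programming recursion with smoothing
    parameter `gamma`; smaller gamma approaches classical DTW.
    """
    n, m = len(x), len(y)
    # Pairwise squared Euclidean costs between all time steps.
    cost = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)

    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            r = np.array([R[i - 1, j], R[i, j - 1], R[i - 1, j - 1]])
            # Numerically stable soft-min over the three predecessors.
            r_min = r.min()
            soft_min = r_min - gamma * np.log(np.sum(np.exp(-(r - r_min) / gamma)))
            R[i, j] = cost[i - 1, j - 1] + soft_min
    return float(R[n, m])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    real = rng.normal(size=(24, 1))       # e.g. a real time series window
    generated = rng.normal(size=(24, 1))  # e.g. a generator output
    # In a GAN training loop, this value (or a variant of it) could be added
    # to the generator loss to encourage temporally aligned samples.
    print(soft_dtw(real, generated, gamma=0.1))
```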

Keywords

machine learning, deep learning, generative adversarial networks, explainable AI
