Title: Class Incremental Learning in Deep Neural Networks
Author: Tong, JunYong
Type: Master Thesis
Language: en
Dates: 2021-04-28; 2021-04-13
URI: http://hdl.handle.net/10012/16912
Subjects: Deep Neural Networks; Continual Learning; Generative Models

Abstract:
With advances in computational capability, in particular the use of graphics processing units, deep learning systems have shown tremendous potential, achieving super-human performance on many computer vision tasks. However, deep learning models cannot learn continually when the data distribution is non-stationary or imbalanced, because they suffer from catastrophic forgetting. In this thesis, we propose an Incremental Generative Replay Embedding (IGRE) framework that employs a conditional generator for generative replay at the image-embedding level, retaining the strong performance of replay while reducing its memory cost. Alternating backpropagation with Langevin dynamics is used for efficient and effective training of the conditional generator. We evaluate the proposed IGRE framework on common benchmarks built from the CIFAR-10, CIFAR-100, CUB, and ImageNet datasets. Results show that the proposed IGRE framework outperforms state-of-the-art methods on CIFAR-10, CIFAR-100, and CUB, with 6-9% improvements in accuracy, and achieves comparable performance in large-scale ImageNet experiments, while significantly reducing memory requirements compared with conventional replay techniques.
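The abstract's training procedure, alternating backpropagation with Langevin dynamics, alternates between inferring a latent code for each stored embedding by Langevin sampling of the posterior and updating the generator to reconstruct the embedding from that code. The following is a minimal sketch of that loop, assuming a PyTorch setting; the CondGenerator architecture, the langevin_infer helper, and all hyperparameters (z_dim, step_size, sigma, and so on) are illustrative assumptions, not the implementation from the thesis.

import torch
import torch.nn as nn

# Hypothetical conditional generator: maps (latent z, class label y) to an
# image embedding of dimension embed_dim. Architecture is illustrative only.
class CondGenerator(nn.Module):
    def __init__(self, z_dim=64, n_classes=100, embed_dim=512):
        super().__init__()
        self.label_emb = nn.Embedding(n_classes, z_dim)
        self.net = nn.Sequential(
            nn.Linear(2 * z_dim, 256), nn.ReLU(),
            nn.Linear(256, embed_dim),
        )

    def forward(self, z, y):
        return self.net(torch.cat([z, self.label_emb(y)], dim=1))

def langevin_infer(gen, e, y, z, n_steps=20, step_size=0.1, sigma=0.3):
    """Sample z from the posterior p(z | e, y) via Langevin dynamics.

    Each step ascends the gradient of the log-posterior,
        log p(z | e, y) ~ -||e - G(z, y)||^2 / (2 sigma^2) - ||z||^2 / 2,
    and injects Gaussian noise scaled by the step size.
    """
    for _ in range(n_steps):
        z = z.detach().requires_grad_(True)
        recon = gen(z, y)
        log_post = (-((e - recon) ** 2).sum() / (2 * sigma ** 2)
                    - (z ** 2).sum() / 2)
        grad, = torch.autograd.grad(log_post, z)
        z = z + 0.5 * step_size ** 2 * grad + step_size * torch.randn_like(z)
    return z.detach()

# One alternating-backpropagation update: infer latents, then fit the generator.
gen = CondGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
e = torch.randn(8, 512)             # stand-in batch of image embeddings
y = torch.randint(0, 100, (8,))     # their class labels
z = langevin_infer(gen, e, y, torch.randn(8, 64))
loss = ((e - gen(z, y)) ** 2).mean()
opt.zero_grad(); loss.backward(); opt.step()

Because the generator operates on compact embeddings rather than raw images, replayed samples for old classes can be drawn from it at a small fraction of the memory cost of storing images, which is the trade-off the abstract describes.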