Nagisetty, Vineel
2021-08-26
2021-08-16
http://hdl.handle.net/10012/17272

The extensive impact of Deep Neural Networks (DNNs) on industrial applications and research areas within the last decade cannot be overstated. However, DNNs are also subject to notable limitations, namely their vulnerability to various forms of security attacks and their need for excessive amounts of data, especially for particular types of DNNs such as generative adversarial networks (GANs). Tackling the former challenge, researchers have proposed several testing, analysis, and verification (TAV) methods for DNNs. However, current state-of-the-art DNN TAV methods are either not scalable to industrial-sized DNNs or not expressible (i.e., unable to test DNNs for a rich set of properties). On the other hand, making GANs more data-efficient is an open area of research that can potentially reduce training time and cost. In this work, I address these issues by leveraging domain knowledge, i.e., task-specific knowledge provided as an additional source of information, to better test and train DNNs. In particular, I present Constrained Gradient Descent (CGD), a novel algorithm (and a resultant tool called CGDTest) that leverages domain knowledge, in the form of logical constraints, to create a DNN TAV method that is both scalable and expressible. Further, I introduce a novel gradient descent method (and a resultant GAN, referred to as xAI-GAN) that leverages domain knowledge, in the form of neuron importance, to train GANs more data-efficiently. Through empirical evaluation, I show that both tools improve over current state-of-the-art methods in their respective applications.
This thesis highlights the potential of leveraging domain knowledge to mitigate DNN weaknesses and paves the way for further research in this area.

Language: en
Keywords: deep neural networks; DNN testing and verification; domain knowledge; generative adversarial networks
Title: Domain Knowledge Guided Testing and Training of Neural Networks
Type: Master Thesis
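The core idea behind the CGD approach described above can be sketched in a few lines: a logical constraint (here, an L-infinity ball around a reference input) is encoded as a differentiable penalty, and gradient descent over the *input* searches for a property violation. This is a minimal illustrative sketch only; the toy model, the specific property, and all parameter values are assumptions for exposition, not the thesis's actual CGDTest implementation.

```python
import numpy as np

# Toy stand-in "DNN": a single logistic unit f(x) = sigmoid(w.x + b).
# (Purely illustrative; the thesis targets real, industrial-sized networks.)
w = np.array([2.0, -1.0])
b = 0.5

def f(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# Hypothetical property under test: f(x) >= 0.5 for every x in the
# L-inf ball of radius eps around x0. The CGD-style search minimizes
# f over the input (seeking a violation) plus a soft penalty that
# encodes the ball constraint as a differentiable loss term.
x0 = np.array([1.0, 0.0])
eps = 1.5
lam = 5.0  # penalty weight (illustrative setting)

def grad(x):
    y = f(x)
    g_f = y * (1.0 - y) * w                       # d f(x) / d x
    over = np.maximum(np.abs(x - x0) - eps, 0.0)  # constraint overflow
    g_pen = 2.0 * lam * over * np.sign(x - x0)    # d penalty / d x
    return g_f + g_pen

x = x0.copy()
for _ in range(500):
    x = x - 0.05 * grad(x)

# A soft penalty yields only approximate feasibility, hence the slack.
counterexample = f(x) < 0.5 and np.all(np.abs(x - x0) <= eps + 0.05)
```

In this toy instance a violating input exists inside the ball, so the descent ends with `counterexample` set to `True`; because the constraint enters the loss as a penalty rather than a hard projection, the same mechanism can in principle encode richer logical properties than a norm ball.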