Author: Vadehra, Ankit
Date: 2018-10-19 (issued); 2018-10-16 (submitted)
URI: http://hdl.handle.net/10012/14026
Abstract: The popularity of deep neural networks and the vast amount of readily available multi-domain textual data have led to the advent of various domain- or task-specific and domain-agnostic dialogue systems. In this work, we present a general dialogue system that can provide a custom response based on a selected emotion or sentiment label. A dialogue system that can vary its response according to different affect labels can be very helpful for designing help-desk or social-assistance systems, where the response has to follow a certain affective tone, such as positive or compassionate. To address this task, we design a model that generates coherent response utterances conditioned on a specified affect label (emotion or sentiment). We use a Sequence-to-Sequence model with an adversarial objective to remove affect from the learned representation of the input utterance, and generate the response from this representation together with the target affect label. We evaluate two model variants: an affect-embedding model and a multi-decoder model. We hypothesize that removing affect from the input utterance helps in generating a response conditioned on a different affect label. The models were evaluated on a large Twitter dialogue corpus, and the results support our hypothesis.
Language: en
Subjects: NLP; Dialogue System; Style Transfer
Title: Creating an Emotion Responsive Dialogue System
Type: Master Thesis
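
To make the architecture described in the abstract concrete, below is a minimal PyTorch sketch, not the thesis implementation: it assumes a GRU-based Sequence-to-Sequence model, an adversarial affect classifier attached to the encoder state through a gradient-reversal layer (so the encoder is pushed to hide the input affect), and a decoder conditioned on the encoder representation concatenated with a target affect-label embedding, roughly the "affect embedding" variant. All class names, dimensions, and the toy usage at the end are illustrative assumptions.

# Hypothetical sketch (not the thesis code): seq2seq with adversarial
# affect removal and an affect-conditioned decoder. Sizes are illustrative.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses (and scales) gradients on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class AffectSeq2Seq(nn.Module):
    def __init__(self, vocab_size, num_affects, emb_dim=128, hid_dim=256, affect_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # Adversarial head: tries to predict the *input* affect from the encoder
        # state; the gradient-reversal layer pushes the encoder to remove it.
        self.affect_adv = nn.Linear(hid_dim, num_affects)
        # Target-affect embedding concatenated to the decoder's initial state.
        self.affect_emb = nn.Embedding(num_affects, affect_dim)
        self.decoder = nn.GRU(emb_dim, hid_dim + affect_dim, batch_first=True)
        self.out = nn.Linear(hid_dim + affect_dim, vocab_size)

    def forward(self, src, tgt_in, target_affect, lambd=1.0):
        # Encode the input utterance into a single representation z.
        _, h = self.encoder(self.embed(src))            # h: (1, B, hid_dim)
        z = h.squeeze(0)                                # (B, hid_dim)
        # Adversarial affect prediction on the gradient-reversed copy of z.
        affect_logits = self.affect_adv(GradReverse.apply(z, lambd))
        # Decode conditioned on z plus the embedded target affect label.
        dec_init = torch.cat([z, self.affect_emb(target_affect)], dim=-1).unsqueeze(0)
        dec_out, _ = self.decoder(self.embed(tgt_in), dec_init)
        return self.out(dec_out), affect_logits


# Toy usage with random token ids.
model = AffectSeq2Seq(vocab_size=1000, num_affects=3)
src = torch.randint(0, 1000, (4, 12))        # batch of input utterances
tgt_in = torch.randint(0, 1000, (4, 10))     # shifted response tokens
target_affect = torch.tensor([0, 1, 2, 1])   # desired affect per response
logits, affect_logits = model(src, tgt_in, target_affect)
print(logits.shape, affect_logits.shape)     # (4, 10, 1000), (4, 3)

In training, the response-generation loss and the adversarial affect-classification loss would be combined; the reversed gradient is what discourages the encoder representation from carrying the input utterance's affect, matching the hypothesis stated in the abstract.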