Dialog Response Generation Using Adversarially Learned Latent Bag-of-Words
Dialog response generation is the task of generating a response utterance given a query utterance. Beyond producing relevant and coherent responses, one would like a dialog generation model to produce diverse and informative sentences. In this work, we propose and explore a novel multi-stage dialog response generation approach. In the first stage, we construct a variational latent space over the bag-of-words representations of the query and response utterances. In the second stage, a transformation from the query latent code to the response latent code is learned through an adversarial process. The final stage fine-tunes a pretrained Transformer-based model, the Text-to-Text Transfer Transformer (T5) (Raffel et al., 2019), using a novel training regimen that generates the response utterance by conditioning on the query utterance and the response words learned in the previous stage. We evaluate our proposed approach on two popular dialog datasets. It outperforms the baseline Transformer model on multiple quantitative metrics, including an overlap metric (BLEU), diversity metrics (distinct-1 and distinct-2), and a fluency metric (perplexity).
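The bag-of-words representation underlying the first stage can be sketched in a few lines. This is a toy illustration only, assuming a fixed vocabulary and simple whitespace tokenization; the function and variable names are illustrative and not taken from the thesis:

```python
def bag_of_words(utterance, vocab):
    """Map an utterance to a count vector over a fixed vocabulary.

    Out-of-vocabulary tokens are ignored in this toy sketch; the actual
    model would operate on vectors like these in its variational latent space.
    """
    index = {word: i for i, word in enumerate(vocab)}
    counts = [0] * len(vocab)
    for token in utterance.lower().split():
        if token in index:
            counts[index[token]] += 1
    return counts

# Hypothetical tiny vocabulary for illustration.
vocab = ["how", "are", "you", "fine", "thanks"]
query_bow = bag_of_words("How are you ?", vocab)      # counts for the query
response_bow = bag_of_words("Fine , thanks !", vocab)  # counts for the response
```

In the proposed pipeline, such query-side and response-side vectors would be encoded into latent codes, and an adversarially trained mapping would transform one latent code into the other before T5 conditions on the predicted response words.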
Cite this version of the work
Kashif Khan (2020). Dialog Response Generation Using Adversarially Learned Latent Bag-of-Words. UWSpace. http://hdl.handle.net/10012/16188