dc.contributor.author: Pallikara Bahuleyan, Hareesh
dc.date.accessioned: 2018-08-13 18:56:16 (GMT)
dc.date.available: 2018-08-13 18:56:16 (GMT)
dc.date.issued: 2018-08-13
dc.date.submitted: 2018-08-10
dc.identifier.uri: http://hdl.handle.net/10012/13573
dc.description.abstract: Automatic generation of text is an important topic in natural language processing, with applications in tasks such as machine translation and text summarization. In this thesis, we explore the use of deep neural networks for the generation of natural language. Specifically, we implement two sequence-to-sequence neural variational models: variational autoencoders (VAE) and variational encoder-decoders (VED). VAEs for text generation are difficult to train because the Kullback-Leibler (KL) divergence term of the loss function tends to vanish to zero. We successfully train VAEs by implementing optimization heuristics such as KL weight annealing and word dropout. This work also proposes new and improved annealing schedules that facilitate the learning of a meaningful latent space. We demonstrate the effectiveness of this continuous latent space through experiments such as random sampling, linear interpolation, and sampling from the neighborhood of the input. We argue that if VAEs are not designed appropriately, bypassing connections may arise, causing the latent space to be ignored during training. Using decoder hidden-state initialization as an example, we show experimentally that such bypassing connections degrade the VAE into a deterministic model, thereby reducing the diversity of generated sentences. We find that the traditional attention mechanism used in sequence-to-sequence VED models acts as a bypassing connection, degrading the model's latent space. To circumvent this issue, we propose a variational attention mechanism in which the attention context vector is modeled as a random variable sampled from a distribution. We show empirically, using the automatic evaluation metrics entropy and distinct, that our variational attention model generates more diverse output sentences than the deterministic attention model. A qualitative analysis with a human evaluation study shows that our model produces sentences that are of high quality and as fluent as those generated by the deterministic attention counterpart.
dc.language.iso: en
dc.publisher: University of Waterloo
dc.subject: natural
dc.subject: language
dc.subject: neural
dc.subject: variational
dc.subject: deep neural networks
dc.subject: machine learning
dc.subject: text generation
dc.title: Natural Language Generation with Neural Variational Models
dc.type: Master Thesis
dc.pending: false
uws-etd.degree.department: Management Sciences
uws-etd.degree.discipline: Management Sciences
uws-etd.degree.grantor: University of Waterloo
uws-etd.degree: Master of Applied Science
uws.contributor.advisor: Vechtomova, Olga
uws.contributor.affiliation1: Faculty of Engineering
uws.published.city: Waterloo
uws.published.country: Canada
uws.published.province: Ontario
uws.typeOfResource: Text
uws.peerReviewStatus: Unreviewed
uws.scholarLevel: Graduate
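
The abstract above cites KL weight annealing as the main heuristic for keeping the KL term of the VAE loss from collapsing to zero. As a rough, non-authoritative sketch only (the function names, the specific sigmoid parameters k and x0, and the loss signature below are illustrative assumptions, not taken from the thesis), a KL annealing schedule and its use in a VAE objective might look like this:

import numpy as np

def kl_weight(step, total_steps, schedule="linear", k=0.0025, x0=2500):
    """Return the KL annealing weight in [0, 1] for a given training step.

    'linear' ramps the weight up uniformly over training; 'sigmoid' keeps it
    near zero early on and then rises smoothly, giving the decoder time to
    learn before the KL penalty takes effect.
    """
    if schedule == "linear":
        return min(1.0, step / total_steps)
    if schedule == "sigmoid":
        return float(1.0 / (1.0 + np.exp(-k * (step - x0))))
    raise ValueError(f"unknown schedule: {schedule}")

def annealed_vae_loss(reconstruction_nll, mu, logvar, step, total_steps):
    """Annealed VAE objective: reconstruction loss + weight * KL(q(z|x) || N(0, I))."""
    kl = -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))
    return reconstruction_nll + kl_weight(step, total_steps) * kl

# Example usage with a dummy 16-dimensional latent posterior:
mu, logvar = 0.5 * np.ones(16), np.zeros(16)
for step in (0, 5000, 10000):
    print(step, kl_weight(step, 10000), annealed_vae_loss(1.0, mu, logvar, step, 10000))

The sketch only illustrates the weighting idea; the thesis's own schedules, word dropout, and the variational attention mechanism are described in the full document at the identifier URI above.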

