
Disentangled Representation Learning for Stylistic Variation in Neural Language Models

dc.contributor.author: John, Vineet
dc.date.accessioned: 2018-08-14T14:39:34Z
dc.date.available: 2018-08-14T14:39:34Z
dc.date.issued: 2018-08-14
dc.date.submitted: 2018-08-09
dc.description.abstract: The neural network has proven to be an effective machine learning method over the past decade, prompting its use for modelling language, among several other domains. However, the latent representations learned by these neural network function approximators remain uninterpretable, resulting in a new wave of research efforts to improve their explainability without compromising their predictive power. In this work, we tackle the problem of disentangling the latent style and content variables in a language modelling context. This involves splitting the latent representations of documents by learning which features of a document are discriminative of its style and content, and encoding these features separately using neural network models. To achieve this, we propose a simple yet effective approach, which incorporates auxiliary objectives: a multi-task classification objective, and dual adversarial objectives for label prediction and bag-of-words prediction, respectively. We show, both qualitatively and quantitatively, that style and content are indeed disentangled in the latent space using this approach. We apply this disentangled latent representation learning method to attribute (e.g. style) transfer in natural language generation. We achieve content preservation scores similar to previous state-of-the-art approaches, and considerably better style-transfer strength scores. Our code is made publicly available for experiment replicability and extensibility.
dc.identifier.uri: http://hdl.handle.net/10012/13587
dc.language.iso: en
dc.pending: false
dc.publisher: University of Waterloo
dc.subject: neural networks
dc.subject: style transfer
dc.subject: representation learning
dc.subject: natural language generation
dc.title: Disentangled Representation Learning for Stylistic Variation in Neural Language Models
dc.type: Master Thesis
uws-etd.degree: Master of Mathematics
uws-etd.degree.department: David R. Cheriton School of Computer Science
uws-etd.degree.discipline: Computer Science
uws-etd.degree.grantor: University of Waterloo
uws.contributor.advisor: Vechtomova, Olga
uws.contributor.affiliation1: Faculty of Mathematics
uws.peerReviewStatus: Unreviewed
uws.published.city: Waterloo
uws.published.country: Canada
uws.published.province: Ontario
uws.scholarLevel: Graduate
uws.typeOfResource: Text
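
The abstract above describes combining a reconstruction objective with a multi-task classification objective (style is predictable from the style space) and an adversarial objective (style is not predictable from the content space). The following is a minimal NumPy sketch of how such a combined loss could be assembled; all dimensions, weight matrices, and loss weights here are illustrative stand-ins, not the thesis's actual implementation:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(probs, label):
    return -np.log(probs[label] + 1e-12)

rng = np.random.default_rng(0)

# Hypothetical latent vector, split into style and content halves.
latent = rng.normal(size=8)
style, content = latent[:4], latent[4:]

# Illustrative linear classifier heads (random stand-in weights).
W_mult = rng.normal(size=(2, 4))  # multi-task head: style label from style space
W_adv = rng.normal(size=(2, 4))   # adversary: style label from content space

style_label = 1

# Multi-task loss: the style space should predict the style label.
mult_loss = cross_entropy(softmax(W_mult @ style), style_label)

# Adversarial term: the encoder is rewarded when the adversary's
# prediction over the content space is maximally uncertain (high entropy),
# i.e. the content space carries no style signal.
adv_probs = softmax(W_adv @ content)
adv_entropy = -np.sum(adv_probs * np.log(adv_probs + 1e-12))

recon_loss = 1.0  # placeholder for the sequence reconstruction loss
total = recon_loss + 0.5 * mult_loss - 0.5 * adv_entropy
```

In practice the adversary and the encoder would be trained in alternation with gradient-based updates, and a symmetric pair of objectives would apply to the content space (the bag-of-words prediction mentioned in the abstract); this sketch only shows how the terms combine into one scalar objective.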

Files

Original bundle
Name: john_vineet.pdf
Size: 6.26 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 6.08 KB
Description: Item-specific license agreed upon to submission