Prompt-tuning in Controlled Dialogue Generation

dc.contributor.author: Liu, Runcheng
dc.date.accessioned: 2022-12-22T20:03:58Z
dc.date.available: 2022-12-22T20:03:58Z
dc.date.issued: 2022-12-22
dc.date.submitted: 2022-12-11
dc.description.abstract: Recent years have witnessed rapid development in dialogue response generation since the advent of the Transformer. Fine-tuning pretrained language models for different downstream tasks has become the dominant paradigm in Natural Language Processing (NLP). However, fine-tuning requires storing a full copy of the parameter states for every task, which is memory-consuming and expensive to serve when working with large-scale models with billions of parameters, such as GPT-3. Meanwhile, prompt-tuning has become an increasingly popular parameter-efficient method for steering large pretrained language models toward various tasks. Most prompting techniques are applied to language understanding and assume fixed prompts for all data samples within a task. There is therefore a pressing need to exploit the ability of prompt-tuning in open-domain dialogue generation, where data samples may vary greatly within a task. In this thesis, we present a novel, instance-specific prompt-tuning algorithm for dialogue generation. Specifically, we generate prompts based on instance-level control codes, rather than the conversation context, to explore their impact on controlled dialogue generation. Experiments on popular open-domain dialogue datasets, evaluated with both automated metrics and human evaluation, demonstrate that our method is superior to prompting baselines as well as other lightweight controlled generation methods, and comparable to fine-tuning with less than 10% of the total parameters.
dc.identifier.uri: http://hdl.handle.net/10012/18993
dc.language.iso: en
dc.pending: false
dc.publisher: University of Waterloo
dc.subject: machine learning
dc.subject: natural language processing
dc.subject: controlled dialogue generation
dc.subject: parameter efficient fine-tuning
dc.title: Prompt-tuning in Controlled Dialogue Generation
dc.type: Master Thesis
uws-etd.degree: Master of Mathematics
uws-etd.degree.department: David R. Cheriton School of Computer Science
uws-etd.degree.discipline: Computer Science
uws-etd.degree.grantor: University of Waterloo
uws-etd.embargo.terms: 0
uws.contributor.advisor: Poupart, Pascal
uws.contributor.affiliation1: Faculty of Mathematics
uws.peerReviewStatus: Unreviewed
uws.published.city: Waterloo
uws.published.country: Canada
uws.published.province: Ontario
uws.scholarLevel: Graduate
uws.typeOfResource: Text
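
The abstract above describes generating prompts from instance-level control codes while keeping the backbone language model frozen. The following is a rough, hypothetical sketch of that general idea, not the thesis implementation: it assumes a GPT-2 backbone, and all names and sizes (ControlCodePrompt, num_codes, prompt_len) are illustrative assumptions only.

```python
# Minimal sketch of instance-specific prompt-tuning (illustrative, not the thesis code):
# a frozen GPT-2 backbone with trainable prompt embeddings looked up per
# instance-level control code and prepended to the token embeddings.
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel


class ControlCodePrompt(nn.Module):
    def __init__(self, model_name="gpt2", num_codes=4, prompt_len=10):
        super().__init__()
        self.lm = GPT2LMHeadModel.from_pretrained(model_name)
        for p in self.lm.parameters():  # freeze the backbone
            p.requires_grad = False
        hidden = self.lm.config.n_embd
        # one learnable prompt per control code (the only trainable parameters)
        self.prompts = nn.Embedding(num_codes, prompt_len * hidden)
        self.prompt_len, self.hidden = prompt_len, hidden

    def forward(self, input_ids, attention_mask, control_code, labels=None):
        batch = input_ids.size(0)
        # look up each instance's prompt from its control code
        prompt = self.prompts(control_code).view(batch, self.prompt_len, self.hidden)
        tok_emb = self.lm.transformer.wte(input_ids)          # token embeddings
        inputs_embeds = torch.cat([prompt, tok_emb], dim=1)   # prepend the prompt
        prompt_mask = torch.ones(batch, self.prompt_len,
                                 dtype=attention_mask.dtype,
                                 device=attention_mask.device)
        attention_mask = torch.cat([prompt_mask, attention_mask], dim=1)
        if labels is not None:
            # ignore the loss on prompt positions
            pad = torch.full((batch, self.prompt_len), -100,
                             dtype=labels.dtype, device=labels.device)
            labels = torch.cat([pad, labels], dim=1)
        return self.lm(inputs_embeds=inputs_embeds,
                       attention_mask=attention_mask,
                       labels=labels)
```

Because only the prompt embedding table is trained, the number of trainable parameters stays at a small fraction of the full model, which is the parameter-efficiency property the abstract refers to.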

Files

Original bundle
Name: Liu_Runcheng.pdf
Size: 2.44 MB
Format: Adobe Portable Document Format
Description: Runcheng Liu's thesis paper
License bundle
Name: license.txt
Size: 6.4 KB
Format: Item-specific license agreed upon to submission