
dc.contributor.author: Tamming, Daniel
dc.date.accessioned: 2020-08-12 17:14:37 (GMT)
dc.date.available: 2020-08-12 17:14:37 (GMT)
dc.date.issued: 2020-08-12
dc.date.submitted: 2020-08-04
dc.identifier.uri: http://hdl.handle.net/10012/16113
dc.description.abstract: Thanks to increases in computing power and the growing availability of large datasets, neural networks have achieved state-of-the-art results in many natural language processing (NLP) and computer vision (CV) tasks. These models require a large number of training examples that are balanced between classes, but in many application areas they rely on training sets that are small, imbalanced, or both. To address this, data augmentation has become standard practice in CV. This research is motivated by the observation that, relative to CV, data augmentation is underused and understudied in NLP. Three methods of data augmentation are implemented and tested: synonym replacement, backtranslation, and contextual augmentation. Tests are conducted with two models: a Recurrent Neural Network (RNN) and Bidirectional Encoder Representations from Transformers (BERT). To develop learning curves and study the ability of augmentation methods to rebalance datasets, each of three binary classification datasets is made artificially small and artificially imbalanced. The results show that these augmentation methods can offer accuracy improvements of over 1% to models with a baseline accuracy as high as 92%. On the two largest datasets, the accuracy of BERT is usually improved by either synonym replacement or backtranslation, while the accuracy of the RNN is usually improved by all three augmentation techniques. The augmentation techniques tend to yield the largest accuracy boost when the datasets are smallest or most imbalanced; the performance benefits appear to converge to 0% as the dataset becomes larger. The optimal augmentation distance, the extent to which augmented training examples tend to deviate from their original form, decreases as datasets become more balanced. The results show that data augmentation is a powerful method of improving performance when training on datasets with fewer than 10,000 training examples. The accuracy increases that these methods offer are reduced by recent advancements in transfer learning schemes, but they are certainly not eliminated.
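Of the three augmentation techniques named in the abstract, synonym replacement is the simplest to illustrate. The sketch below is not the thesis's implementation; it is a minimal, self-contained example of the general idea, using a toy hand-written synonym table (in practice a lexical resource such as WordNet would supply the synonyms) and a hypothetical `synonym_replace` helper:

```python
import random

# Toy synonym table for illustration only; a real augmentation pipeline
# would draw synonyms from a lexical resource such as WordNet.
SYNONYMS = {
    "good": ["great", "fine"],
    "movie": ["film"],
    "bad": ["poor", "awful"],
}

def synonym_replace(sentence, n_replacements=1, rng=None):
    """Return a copy of `sentence` with up to `n_replacements` words
    swapped for a randomly chosen synonym. Words with no entry in the
    synonym table are left unchanged."""
    rng = rng or random.Random(0)
    words = sentence.split()
    # Indices of words that have at least one known synonym.
    candidates = [i for i, w in enumerate(words) if w.lower() in SYNONYMS]
    rng.shuffle(candidates)
    for i in candidates[:n_replacements]:
        words[i] = rng.choice(SYNONYMS[words[i].lower()])
    return " ".join(words)
```

The `n_replacements` parameter is one way to control the "augmentation distance" the abstract refers to: the more words are swapped, the further the augmented example deviates from its original form.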
dc.language.iso: en
dc.publisher: University of Waterloo
dc.title: Data Augmentation For Text Classification Tasks
dc.type: Master Thesis
dc.pending: false
uws-etd.degree.department: David R. Cheriton School of Computer Science
uws-etd.degree.discipline: Computer Science
uws-etd.degree.grantor: University of Waterloo
uws-etd.degree: Master of Mathematics
uws.contributor.advisor: van Beek, Peter
uws.contributor.affiliation1: Faculty of Mathematics
uws.published.city: Waterloo
uws.published.country: Canada
uws.published.province: Ontario
uws.typeOfResource: Text
uws.peerReviewStatus: Unreviewed
uws.scholarLevel: Graduate



