Show simple item record

dc.contributor.author: Szepesvari, David
dc.date.accessioned: 2015-10-29 19:24:03 (GMT)
dc.date.available: 2015-10-29 19:24:03 (GMT)
dc.date.issued: 2015-10-29
dc.date.submitted: 2015
dc.identifier.uri: http://hdl.handle.net/10012/9841
dc.description.abstract: Crowdsourcing, due to its inexpensive and timely nature, has become a popular method of collecting data that is difficult for computers to generate. We focus on using this method of human computation to gather labels for classification tasks, to be used for machine learning. However, data gathered this way may be of varying quality, ranging from spam to perfect. We aim to maintain the cost-effective property of crowdsourcing while also obtaining quality results. Towards a solution, we have multiple workers label the same problem instance and afterwards aggregate the responses into one label. We study which aggregation method to use and what guarantees we can provide on its estimates. Different crowdsourcing models call for different techniques; we outline and organize the various directions taken in the literature, and focus on the Dawid-Skene model. In this setting each instance has a true label, workers are independent, and each worker's performance is assumed to be uniform over all instances, in the sense that she has an inherent skill that governs the probability with which she labels correctly. Her skill is unknown to us. Aggregation methods aim to find the true label of each task based solely on the labels the workers reported. We measure the performance of these methods by the probability with which the estimates they output match the true label. In practice, a popular procedure is to run the EM algorithm to find estimates of the skills and labels. However, this method is not directly guaranteed to perform well under our measure. We collect and evaluate theoretical results that bound the error of various aggregation methods, including specific variants of EM. Finally, we prove a guarantee on the error suffered by the maximum likelihood estimator, the global optimum of the function that EM aims to numerically optimize. [en]
dc.language.iso: en [en]
dc.publisher: University of Waterloo
dc.subject: Statistics [en]
dc.subject: Machine Learning [en]
dc.subject: Maximum likelihood [en]
dc.subject: Crowdsourcing [en]
dc.title: A Statistical Analysis of the Aggregation of Crowdsourced Labels [en]
dc.type: Master Thesis [en]
dc.pending: false
dc.subject.program: Computer Science [en]
uws-etd.degree.department: Computer Science (David R. Cheriton School of) [en]
uws-etd.degree: Master of Mathematics [en]
uws.typeOfResource: Text [en]
uws.peerReviewStatus: Unreviewed [en]
uws.scholarLevel: Graduate [en]
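
The abstract above describes aggregating crowdsourced labels under the Dawid-Skene model by running EM jointly over worker skills and true labels. The snippet below is a minimal illustrative sketch of that idea for the one-coin (symmetric) variant with binary labels; the function name, majority-vote initialization, uniform label prior, fixed iteration count, and skill clipping are assumptions made for the example, not the thesis's own implementation.

import numpy as np

def dawid_skene_one_coin(labels, n_iter=50):
    # labels: float array of shape (n_items, n_workers) with entries in {0, 1}
    # and np.nan where a worker did not label that item.
    observed = ~np.isnan(labels)
    counts = observed.sum(axis=1)
    # Initialise the label posteriors with a majority vote (a common heuristic).
    posterior = np.where(counts > 0,
                         np.nansum(labels, axis=1) / np.maximum(counts, 1),
                         0.5)
    skills = np.full(labels.shape[1], 0.8)  # assumed optimistic initial skills

    for _ in range(n_iter):
        # E-step: posterior probability that each item's true label is 1,
        # assuming independent workers, each correct with probability skills[j],
        # and a uniform prior over the two labels.
        log_one = np.zeros(labels.shape[0])
        log_zero = np.zeros(labels.shape[0])
        for j in range(labels.shape[1]):
            m = observed[:, j]
            lj = labels[m, j]
            log_one[m] += np.where(lj == 1, np.log(skills[j]), np.log(1 - skills[j]))
            log_zero[m] += np.where(lj == 0, np.log(skills[j]), np.log(1 - skills[j]))
        posterior = 1.0 / (1.0 + np.exp(log_zero - log_one))

        # M-step: a worker's skill is her expected agreement with the posterior.
        for j in range(labels.shape[1]):
            m = observed[:, j]
            if m.any():
                agree = (labels[m, j] * posterior[m]
                         + (1 - labels[m, j]) * (1 - posterior[m]))
                skills[j] = np.clip(agree.mean(), 1e-3, 1 - 1e-3)

    return posterior, skills

# Example: three workers label four items; NaN marks a missing label.
votes = np.array([[1.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0],
                  [1.0, 1.0, 1.0],
                  [np.nan, 1.0, 0.0]])
labels_hat, skills_hat = dawid_skene_one_coin(votes)
print(np.round(labels_hat).astype(int), np.round(skills_hat, 2))

Unlike plain majority voting, this procedure also produces per-worker skill estimates, which is the extra structure the EM-based aggregation exploits.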



