
dc.contributor.author: Gao, Xi Alice
dc.contributor.author: Wright, James R.
dc.contributor.author: Leyton-Brown, Kevin
dc.date.accessioned: 2020-03-19 17:07:13 (GMT)
dc.date.available: 2020-03-19 17:07:13 (GMT)
dc.date.issued: 2019-10
dc.identifier.uri: https://doi.org/10.1016/j.artint.2019.03.004
dc.identifier.uri: http://hdl.handle.net/10012/15713
dc.description: The final publication is available at Elsevier via https://doi.org/10.1016/j.artint.2019.03.004. © 2019. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.description.abstract: In many settings, an effective way of evaluating objects of interest is to collect evaluations from dispersed individuals and to aggregate these evaluations together. Some examples are categorizing online content and evaluating student assignments via peer grading. For this data science problem, one challenge is to motivate participants to conduct such evaluations carefully and to report them honestly, particularly when doing so is costly. Existing approaches, notably peer-prediction mechanisms, can incentivize truth telling in equilibrium. However, they also give rise to equilibria in which agents do not pay the costs required to evaluate accurately, and hence fail to elicit useful information. We show that this problem is unavoidable whenever agents are able to coordinate using low-cost signals about the items being evaluated (e.g., text labels or pictures). We then consider ways of circumventing this problem by comparing agents' reports to ground truth, which is available in practice when there exist trusted evaluators—such as teaching assistants in the peer grading scenario—who can perform a limited number of unbiased (but noisy) evaluations. Of course, when such ground truth is available, a simpler approach is also possible: rewarding each agent based on agreement with ground truth with some probability, and unconditionally rewarding the agent otherwise. Surprisingly, we show that the simpler mechanism achieves stronger incentive guarantees given less access to ground truth than a large set of peer-prediction mechanisms.
dc.description.sponsorship: Xi Alice Gao was supported by a Postdoctoral Fellowship from the Natural Sciences and Engineering Research Council of Canada. Kevin Leyton-Brown was supported by a Natural Sciences and Engineering Research Council of Canada E.W.R. Steacie Fellowship, Collaborative Research and Development grant, and Discovery Grant, and by a Google Faculty Research Award.
dc.language.iso: en
dc.publisher: Elsevier
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: peer prediction
dc.subject: peer grading
dc.subject: incentivize effort
dc.subject: incentivize truthful reporting
dc.subject: information elicitation
dc.subject: game theory
dc.title: Incentivizing evaluation with peer prediction and limited access to ground truth
dc.type: Article
dcterms.bibliographicCitation: X.A. Gao et al., Incentivizing Evaluation with Peer Prediction and Limited Access to Ground Truth, Artif. Intell. (2019), https://doi.org/10.1016/j.artint.2019.03.004
uws.contributor.affiliation1: Faculty of Mathematics
uws.contributor.affiliation2: David R. Cheriton School of Computer Science
uws.typeOfResource: Text
uws.peerReviewStatus: Reviewed
uws.scholarLevel: Post-Doctorate
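
Note: the abstract above contrasts peer prediction with a simpler spot-checking scheme in which an agent's report is, with some probability, compared to a trusted ground-truth evaluation and rewarded only on agreement, and is rewarded unconditionally otherwise. The following minimal Python sketch illustrates that idea only; the function name, the parameters (audit_prob, reward, tolerance), and the assumption of numeric reports are illustrative choices, not definitions taken from the paper.

    import random

    def spot_check_reward(report, ground_truth=None, audit_prob=0.2,
                          reward=1.0, tolerance=0.0):
        """Illustrative spot-checking payment rule (hypothetical names/parameters).

        With probability audit_prob, compare the report to a trusted (possibly
        noisy) ground-truth evaluation and pay only if they agree within
        tolerance; otherwise pay the agent unconditionally.
        """
        if ground_truth is not None and random.random() < audit_prob:
            # Audited case: pay only when the report matches the trusted evaluation.
            return reward if abs(report - ground_truth) <= tolerance else 0.0
        # Unaudited case: pay unconditionally.
        return reward

For example, spot_check_reward(report=8.5, ground_truth=9.0, audit_prob=0.2, tolerance=1.0) pays the full reward whether or not the audit fires, while a report far from the trusted grade is paid only when it escapes the audit.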

