
Incentivizing evaluation with peer prediction and limited access to ground truth

dc.contributor.author: Gao, Xi Alice
dc.contributor.author: Wright, James R.
dc.contributor.author: Leyton-Brown, Kevin
dc.date.accessioned: 2020-03-19T17:07:13Z
dc.date.available: 2020-03-19T17:07:13Z
dc.date.issued: 2019-10
dc.description: The final publication is available at Elsevier via https://doi.org/10.1016/j.artint.2019.03.004. © 2019. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.description.abstract: In many settings, an effective way of evaluating objects of interest is to collect evaluations from dispersed individuals and to aggregate these evaluations together. Some examples are categorizing online content and evaluating student assignments via peer grading. For this data science problem, one challenge is to motivate participants to conduct such evaluations carefully and to report them honestly, particularly when doing so is costly. Existing approaches, notably peer-prediction mechanisms, can incentivize truth telling in equilibrium. However, they also give rise to equilibria in which agents do not pay the costs required to evaluate accurately, and hence fail to elicit useful information. We show that this problem is unavoidable whenever agents are able to coordinate using low-cost signals about the items being evaluated (e.g., text labels or pictures). We then consider ways of circumventing this problem by comparing agents' reports to ground truth, which is available in practice when there exist trusted evaluators—such as teaching assistants in the peer grading scenario—who can perform a limited number of unbiased (but noisy) evaluations. Of course, when such ground truth is available, a simpler approach is also possible: rewarding each agent based on agreement with ground truth with some probability, and unconditionally rewarding the agent otherwise. Surprisingly, we show that the simpler mechanism achieves stronger incentive guarantees given less access to ground truth than a large set of peer-prediction mechanisms.
dc.description.sponsorship: Xi Alice Gao was supported by a Postdoctoral Fellowship from the Natural Sciences and Engineering Research Council of Canada. Kevin Leyton-Brown was supported by a Natural Sciences and Engineering Research Council of Canada E.W.R. Steacie Fellowship, Collaborative Research and Development grant, and Discovery Grant, and by a Google Faculty Research Award.
dc.identifier.uri: https://doi.org/10.1016/j.artint.2019.03.004
dc.identifier.uri: http://hdl.handle.net/10012/15713
dc.language.iso: en
dc.publisher: Elsevier
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: peer prediction
dc.subject: peer grading
dc.subject: incentivize effort
dc.subject: incentivize truthful reporting
dc.subject: information elicitation
dc.subject: game theory
dc.title: Incentivizing evaluation with peer prediction and limited access to ground truth
dc.type: Article
dcterms.bibliographicCitation: X.A. Gao et al., Incentivizing Evaluation with Peer Prediction and Limited Access to Ground Truth, Artif. Intell. (2019), https://doi.org/10.1016/j.artint.2019.03.004
uws.contributor.affiliation1: Faculty of Mathematics
uws.contributor.affiliation2: David R. Cheriton School of Computer Science
uws.peerReviewStatus: Reviewed
uws.scholarLevel: Post-Doctorate
uws.typeOfResource: Text

Files

Original bundle
Name: xi_alice_gao.pdf
Size: 622.39 KB
Format: Adobe Portable Document Format
Description: Accepted manuscript
License bundle
Name: license.txt
Size: 4.47 KB
Description: Item-specific license agreed upon to submission