Approval Voting and Incentives in Crowdsourcing

Nihar Shah, Dengyong Zhou, Yuval Peres
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:10-19, 2015.

Abstract

The growing need for labeled training data has made crowdsourcing an important part of machine learning. The quality of crowdsourced labels is, however, adversely affected by three factors: (1) the workers are not experts; (2) the incentives of the workers are not aligned with those of the requesters; and (3) the interface does not allow workers to convey their knowledge accurately, by forcing them to make a single choice among a set of options. In this paper, we address these issues by introducing approval voting to utilize the expertise of workers who have partial knowledge of the true answer, and coupling it with a ("strictly proper") incentive-compatible compensation mechanism. We show rigorous theoretical guarantees of optimality of our mechanism together with a simple axiomatic characterization. We also conduct preliminary empirical studies on Amazon Mechanical Turk which validate our approach.
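The "strictly proper" incentive idea the abstract alludes to can be illustrated with a toy multiplicative payment rule (an illustrative sketch under an assumed uniform-belief knowledge model, not necessarily the paper's exact mechanism; the constant `c` is an assumption): a worker approves a subset S of the K options and is paid c**(len(S)-1) if the true answer lies in S, and nothing otherwise. A brute-force check confirms that, for (K-1)/K < c < 1, a worker who believes the answer is uniformly distributed over some knowledge set G maximizes expected payment by approving exactly G.

```python
from itertools import combinations

# Toy multiplicative rule: approving set S pays c**(len(S)-1) if the true
# answer is in S, else 0.  Under a belief uniform on a "knowledge set" G,
# the unique best response is to approve exactly G when (K-1)/K < c < 1.

def expected_payment(approved, belief, c):
    """Expected pay for approving `approved` given a belief distribution."""
    return c ** (len(approved) - 1) * sum(belief[i] for i in approved)

def best_response(belief, K, c):
    """Approval set maximizing expected payment (brute force over all 2^K - 1 sets)."""
    subsets = (frozenset(s) for r in range(1, K + 1)
               for s in combinations(range(K), r))
    return max(subsets, key=lambda s: expected_payment(s, belief, c))

K, c = 4, 0.9  # strict propriety in this toy model needs (K-1)/K < c < 1
for g in range(1, K + 1):
    knowledge = frozenset(range(g))  # worker only knows the answer is in G
    belief = [1.0 / g if i in knowledge else 0.0 for i in range(K)]
    assert best_response(belief, K, c) == knowledge
```

Approving fewer options than G sacrifices too much probability of being paid, while approving more shrinks the payment without adding any probability, so truthful reporting of partial knowledge is the unique optimum in this model.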

Cite this Paper


BibTeX
@InProceedings{pmlr-v37-shaha15,
  title     = {Approval Voting and Incentives in Crowdsourcing},
  author    = {Shah, Nihar and Zhou, Dengyong and Peres, Yuval},
  booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
  pages     = {10--19},
  year      = {2015},
  editor    = {Bach, Francis and Blei, David},
  volume    = {37},
  series    = {Proceedings of Machine Learning Research},
  address   = {Lille, France},
  month     = {07--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v37/shaha15.pdf},
  url       = {https://proceedings.mlr.press/v37/shaha15.html},
  abstract  = {The growing need for labeled training data has made crowdsourcing an important part of machine learning. The quality of crowdsourced labels is, however, adversely affected by three factors: (1) the workers are not experts; (2) the incentives of the workers are not aligned with those of the requesters; and (3) the interface does not allow workers to convey their knowledge accurately, by forcing them to make a single choice among a set of options. In this paper, we address these issues by introducing approval voting to utilize the expertise of workers who have partial knowledge of the true answer, and coupling it with a ("strictly proper") incentive-compatible compensation mechanism. We show rigorous theoretical guarantees of optimality of our mechanism together with a simple axiomatic characterization. We also conduct preliminary empirical studies on Amazon Mechanical Turk which validate our approach.}
}
Endnote
%0 Conference Paper
%T Approval Voting and Incentives in Crowdsourcing
%A Nihar Shah
%A Dengyong Zhou
%A Yuval Peres
%B Proceedings of the 32nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2015
%E Francis Bach
%E David Blei
%F pmlr-v37-shaha15
%I PMLR
%P 10--19
%U https://proceedings.mlr.press/v37/shaha15.html
%V 37
%X The growing need for labeled training data has made crowdsourcing an important part of machine learning. The quality of crowdsourced labels is, however, adversely affected by three factors: (1) the workers are not experts; (2) the incentives of the workers are not aligned with those of the requesters; and (3) the interface does not allow workers to convey their knowledge accurately, by forcing them to make a single choice among a set of options. In this paper, we address these issues by introducing approval voting to utilize the expertise of workers who have partial knowledge of the true answer, and coupling it with a ("strictly proper") incentive-compatible compensation mechanism. We show rigorous theoretical guarantees of optimality of our mechanism together with a simple axiomatic characterization. We also conduct preliminary empirical studies on Amazon Mechanical Turk which validate our approach.
RIS
TY  - CPAPER
TI  - Approval Voting and Incentives in Crowdsourcing
AU  - Nihar Shah
AU  - Dengyong Zhou
AU  - Yuval Peres
BT  - Proceedings of the 32nd International Conference on Machine Learning
DA  - 2015/06/01
ED  - Francis Bach
ED  - David Blei
ID  - pmlr-v37-shaha15
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 37
SP  - 10
EP  - 19
L1  - http://proceedings.mlr.press/v37/shaha15.pdf
UR  - https://proceedings.mlr.press/v37/shaha15.html
AB  - The growing need for labeled training data has made crowdsourcing an important part of machine learning. The quality of crowdsourced labels is, however, adversely affected by three factors: (1) the workers are not experts; (2) the incentives of the workers are not aligned with those of the requesters; and (3) the interface does not allow workers to convey their knowledge accurately, by forcing them to make a single choice among a set of options. In this paper, we address these issues by introducing approval voting to utilize the expertise of workers who have partial knowledge of the true answer, and coupling it with a ("strictly proper") incentive-compatible compensation mechanism. We show rigorous theoretical guarantees of optimality of our mechanism together with a simple axiomatic characterization. We also conduct preliminary empirical studies on Amazon Mechanical Turk which validate our approach.
ER  -
APA
Shah, N., Zhou, D., & Peres, Y. (2015). Approval Voting and Incentives in Crowdsourcing. Proceedings of the 32nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 37:10-19. Available from https://proceedings.mlr.press/v37/shaha15.html.