Adaptive Crowdsourcing Algorithms for the Bandit Survey Problem

Ittai Abraham, Omar Alonso, Vasilis Kandylas, Aleksandrs Slivkins
Proceedings of the 26th Annual Conference on Learning Theory, PMLR 30:882-910, 2013.

Abstract

Very recently, crowdsourcing has become the de facto platform for distributing and collecting human computation for a wide range of tasks and applications such as information retrieval, natural language processing and machine learning. Current crowdsourcing platforms have some limitations in the area of quality control. Most of the effort to ensure good quality has to be done by the experimenter, who has to manage the number of workers needed to reach good results. We propose a simple model for adaptive quality control in crowdsourced multiple-choice tasks which we call the “bandit survey problem”. This model is related to, but technically different from, the well-known multi-armed bandit problem. We present several algorithms for this problem, and support them with analysis and simulations. Our approach is based on our experience conducting relevance evaluation for a large commercial search engine.

Cite this Paper


BibTeX
@InProceedings{pmlr-v30-Abraham13,
  title     = {Adaptive Crowdsourcing Algorithms for the Bandit Survey Problem},
  author    = {Abraham, Ittai and Alonso, Omar and Kandylas, Vasilis and Slivkins, Aleksandrs},
  booktitle = {Proceedings of the 26th Annual Conference on Learning Theory},
  pages     = {882--910},
  year      = {2013},
  editor    = {Shalev-Shwartz, Shai and Steinwart, Ingo},
  volume    = {30},
  series    = {Proceedings of Machine Learning Research},
  address   = {Princeton, NJ, USA},
  month     = {12--14 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v30/Abraham13.pdf},
  url       = {https://proceedings.mlr.press/v30/Abraham13.html},
  abstract  = {Very recently, crowdsourcing has become the de facto platform for distributing and collecting human computation for a wide range of tasks and applications such as information retrieval, natural language processing and machine learning. Current crowdsourcing platforms have some limitations in the area of quality control. Most of the effort to ensure good quality has to be done by the experimenter, who has to manage the number of workers needed to reach good results. We propose a simple model for adaptive quality control in crowdsourced multiple-choice tasks which we call the “bandit survey problem”. This model is related to, but technically different from, the well-known multi-armed bandit problem. We present several algorithms for this problem, and support them with analysis and simulations. Our approach is based on our experience conducting relevance evaluation for a large commercial search engine.}
}
Endnote
%0 Conference Paper
%T Adaptive Crowdsourcing Algorithms for the Bandit Survey Problem
%A Ittai Abraham
%A Omar Alonso
%A Vasilis Kandylas
%A Aleksandrs Slivkins
%B Proceedings of the 26th Annual Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2013
%E Shai Shalev-Shwartz
%E Ingo Steinwart
%F pmlr-v30-Abraham13
%I PMLR
%P 882--910
%U https://proceedings.mlr.press/v30/Abraham13.html
%V 30
%X Very recently, crowdsourcing has become the de facto platform for distributing and collecting human computation for a wide range of tasks and applications such as information retrieval, natural language processing and machine learning. Current crowdsourcing platforms have some limitations in the area of quality control. Most of the effort to ensure good quality has to be done by the experimenter, who has to manage the number of workers needed to reach good results. We propose a simple model for adaptive quality control in crowdsourced multiple-choice tasks which we call the “bandit survey problem”. This model is related to, but technically different from, the well-known multi-armed bandit problem. We present several algorithms for this problem, and support them with analysis and simulations. Our approach is based on our experience conducting relevance evaluation for a large commercial search engine.
RIS
TY  - CPAPER
TI  - Adaptive Crowdsourcing Algorithms for the Bandit Survey Problem
AU  - Ittai Abraham
AU  - Omar Alonso
AU  - Vasilis Kandylas
AU  - Aleksandrs Slivkins
BT  - Proceedings of the 26th Annual Conference on Learning Theory
DA  - 2013/06/13
ED  - Shai Shalev-Shwartz
ED  - Ingo Steinwart
ID  - pmlr-v30-Abraham13
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 30
SP  - 882
EP  - 910
L1  - http://proceedings.mlr.press/v30/Abraham13.pdf
UR  - https://proceedings.mlr.press/v30/Abraham13.html
AB  - Very recently, crowdsourcing has become the de facto platform for distributing and collecting human computation for a wide range of tasks and applications such as information retrieval, natural language processing and machine learning. Current crowdsourcing platforms have some limitations in the area of quality control. Most of the effort to ensure good quality has to be done by the experimenter, who has to manage the number of workers needed to reach good results. We propose a simple model for adaptive quality control in crowdsourced multiple-choice tasks which we call the “bandit survey problem”. This model is related to, but technically different from, the well-known multi-armed bandit problem. We present several algorithms for this problem, and support them with analysis and simulations. Our approach is based on our experience conducting relevance evaluation for a large commercial search engine.
ER  -
APA
Abraham, I., Alonso, O., Kandylas, V. & Slivkins, A. (2013). Adaptive Crowdsourcing Algorithms for the Bandit Survey Problem. Proceedings of the 26th Annual Conference on Learning Theory, in Proceedings of Machine Learning Research 30:882-910. Available from https://proceedings.mlr.press/v30/Abraham13.html.