Abstract
Crowdsourcing is a promising solution to problems that are difficult for computers but relatively easy for humans. One of the biggest challenges in crowdsourcing is quality control: because crowdworkers are not necessarily capable or motivated, high-quality results cannot be taken for granted. Several statistical quality control methods have been proposed for binary and multinomial questions. In this paper, we consider tasks in which crowdworkers are asked to arrange multiple items in the correct order. We propose a probabilistic generative model of crowd answers that extends a distance-based order model to incorporate worker ability, together with an efficient estimation algorithm. Experiments on real crowdsourced datasets show the advantage of the proposed method over a baseline method.
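To make the modeling idea concrete, the following is a minimal sketch, assuming the distance-based order model is Mallows-like: worker j submits an order pi with probability proportional to exp(-theta_j * d(pi, sigma)), where sigma is the latent true order, d is the Kendall tau distance, and theta_j >= 0 captures worker j's ability. The alternating estimation scheme, the grid search, and all function names (kendall_tau, log_prob, estimate) are illustrative assumptions, not the paper's actual algorithm.

```python
import math
from itertools import combinations, permutations

def kendall_tau(pi, sigma):
    """Number of item pairs that the two orders rank differently."""
    pos = {item: r for r, item in enumerate(sigma)}
    # combinations(pi, 2) yields pairs (a, b) with a before b in pi;
    # count those that sigma ranks the other way round.
    return sum(pos[a] > pos[b] for a, b in combinations(pi, 2))

def log_prob(pi, sigma, theta, items):
    """Exact Mallows-style log-probability of answer pi given true order
    sigma and ability theta; the normalizer is brute-forced, so this is
    feasible only for small item sets."""
    z = sum(math.exp(-theta * kendall_tau(p, sigma)) for p in permutations(items))
    return -theta * kendall_tau(pi, sigma) - math.log(z)

def estimate(answers, items, n_iters=10):
    """EM-like alternating heuristic (an assumption, not the paper's
    algorithm). answers maps worker id -> submitted order (a tuple)."""
    theta = {j: 1.0 for j in answers}            # start with equal abilities
    grid = [0.05 * 1.4 ** k for k in range(20)]  # candidate ability values
    sigma = None
    for _ in range(n_iters):
        # Step 1: true order = ability-weighted Kendall aggregation,
        # brute-forced over all permutations (small item sets only).
        sigma = min(
            permutations(items),
            key=lambda s: sum(theta[j] * kendall_tau(pi, s)
                              for j, pi in answers.items()),
        )
        # Step 2: refit each worker's ability by 1-D grid search
        # on the exact per-worker likelihood.
        theta = {
            j: max(grid, key=lambda t: log_prob(pi, sigma, t, items))
            for j, pi in answers.items()
        }
    return sigma, theta

if __name__ == "__main__":
    items = ("a", "b", "c", "d")
    answers = {                 # toy data: w3 is deliberately unreliable
        "w1": ("a", "b", "c", "d"),
        "w2": ("a", "c", "b", "d"),
        "w3": ("d", "c", "a", "b"),
    }
    sigma, theta = estimate(answers, items)
    print("estimated order:    ", sigma)
    print("estimated abilities:", theta)
```

Under this sketch, a worker whose answers sit far from the recovered order receives a small theta_j and correspondingly little weight in the aggregation step, which is the intuition behind incorporating worker ability into the order model.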
Copyright information
© 2014 Springer International Publishing Switzerland
About this paper
Cite this paper
Matsui, T., Baba, Y., Kamishima, T., Kashima, H. (2014). Crowdordering. In: Tseng, V.S., Ho, T.B., Zhou, Z.H., Chen, A.L.P., Kao, H.Y. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2014. Lecture Notes in Computer Science, vol. 8444. Springer, Cham. https://doi.org/10.1007/978-3-319-06605-9_28
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-06604-2
Online ISBN: 978-3-319-06605-9