Conventional wisdom holds that the quality of data collected in a crowdsourcing project is positively related to how knowledgeable the contributors are. Consequently, numerous crowdsourcing projects implement crowd recruitment strategies that reflect this reasoning. In this paper, we use classification theory to explore the effect of crowd recruitment strategies on the quality of crowdsourced data. Because these strategies are knowledge-based, we consider how a contributor's knowledge may affect the quality of the data he or she provides. We also build on previous research by considering relevant dimensions of data quality beyond accuracy, and we predict the effects of available recruitment strategies on these dimensions.
