
Teams vs. Crowds: A Field Test of the Relative Contribution of Incentives, Member Ability, and Collaboration to Crowd-Based Problem Solving Performance

Posted by Brandon Klein

Organizations are increasingly turning to crowdsourcing to solve difficult problems. This is often driven by the desire to find the best subject matter experts, strongly incentivize them, and engage them to pool their knowledge at minimal coordination cost. A growing number of authors, however, are calling for increased collaboration in crowdsourcing settings, hoping to draw upon the advantages of teamwork observed in traditional settings. The question is how to effectively incorporate team-based collaboration in a setting that has traditionally been individual-based. We report on a large field experiment of team collaboration on an online platform, in which incentives and team membership were randomly assigned, to evaluate the influence of exogenous inputs (member skills and incentives) and emergent collaboration processes on the performance of crowd-based teams. Building on advances in machine learning and complex systems, we leverage new measurement techniques to examine the content and timing of team collaboration. We find that the temporal "burstiness" of team activity and the diversity of information exchanged among team members are strong predictors of performance, even when inputs such as incentives and member skills are controlled. We discuss implications for research on crowdsourcing and team collaboration.
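The abstract does not specify how the authors operationalize these measures, but the two constructs it names have standard formulations: temporal burstiness is commonly quantified with the Goh–Barabási coefficient B = (σ − μ)/(σ + μ) over inter-event times, and information diversity with Shannon entropy over message categories. The sketch below is purely illustrative of those generic measures, not a reproduction of the paper's method:

```python
import math

def burstiness(inter_event_times):
    """Goh-Barabasi burstiness B = (sigma - mu) / (sigma + mu) of inter-event times.
    B = -1 for perfectly regular activity, ~0 for Poisson, approaching +1 when bursty."""
    n = len(inter_event_times)
    mu = sum(inter_event_times) / n
    sigma = math.sqrt(sum((t - mu) ** 2 for t in inter_event_times) / n)
    return (sigma - mu) / (sigma + mu)

def shannon_diversity(counts):
    """Shannon entropy (in nats) of a count distribution over message topics/categories.
    Higher entropy = more diverse information exchanged."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Evenly spaced team activity vs. activity clustered into a burst with one long gap
print(burstiness([5, 5, 5, 5, 5]))      # -1.0: perfectly regular
print(burstiness([1, 1, 1, 1, 60]))     # positive: bursty
print(shannon_diversity([10, 10, 10]))  # uniform over 3 topics: ln(3) ≈ 1.099
```

Both functions take simple summaries of team logs (gaps between consecutive messages, counts of messages per topic), so they can be computed from any timestamped, categorized communication trace.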