Crowdsourcing incentives, and applications for open source communities

While it’s only a single study (usual disclaimers apply), here is some interesting data to consider for everyone involved in open source projects that have a community, or would like to have one. It’s especially relevant for corporations for whom a community is an important part of the business model, and for community leaders whose projects are largely driven by a corporation.

The study isn’t directly related to community, but you should be able to make your own connections.

I will point out one result in particular: “[the results] suggest that workers perform most accurately when the task design credibly links payoffs to a worker’s ability to think about the answers that their peers are likely to provide.” When I read this, my first thought was of the Linux kernel process, in which contributions generally undergo public review on mailing lists. New contributors quickly learn to anticipate what mailing list participants will think about their contributions. We use the same process within the Ubuntu kernel team, with public review by peers, and many other projects do as well. So is the kernel development process the same scheme, with a feedback loop wrapped around it? (That is, you actually DO get the feedback, rather than just imagining it.)

This reward scheme, called “Bayesian Truth Serum”, produced more accurate results than schemes which awarded a bonus for accuracy!
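For the curious, here is a minimal sketch of how Bayesian Truth Serum scoring works, based on Prelec’s published formulation: each respondent both answers a question and predicts how others will answer, and is rewarded for answers that turn out to be “surprisingly common” relative to the crowd’s predictions, plus a term for how well they predicted the actual distribution. The function name and parameters below are my own illustration, not code from the study.

```python
import math

def bts_scores(answers, predictions, alpha=1.0, eps=1e-9):
    """Bayesian Truth Serum scores (after Prelec, Science 2004).

    answers:     list of chosen answer indices, one per respondent
    predictions: per-respondent predicted frequency distributions
                 over the answer options (each list sums to 1)
    alpha:       weight on the prediction-accuracy term
    """
    n = len(answers)
    k = len(predictions[0])
    # Empirical frequency of each answer option (x-bar)
    xbar = [max(answers.count(j) / n, eps) for j in range(k)]
    # Geometric mean of everyone's predictions (y-bar)
    ybar = [
        math.exp(sum(math.log(max(p[j], eps)) for p in predictions) / n)
        for j in range(k)
    ]
    scores = []
    for ans, pred in zip(answers, predictions):
        # Information score: bonus for a "surprisingly common" answer,
        # i.e. one more frequent than the crowd predicted it would be.
        info = math.log(xbar[ans] / ybar[ans])
        # Prediction score: zero when your prediction matches the
        # empirical distribution, negative otherwise (a KL penalty).
        predict = sum(
            xbar[j] * math.log(max(pred[j], eps) / xbar[j])
            for j in range(k)
        )
        scores.append(info + alpha * predict)
    return scores
```

Note the connection to the review-loop idea above: the scheme pays you for modeling your peers accurately, which is exactly the skill public code review trains.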

I can think of a few really simple, reductive statements that might be made about how this applies to community projects, but (as this blog is subtitled) I think it’s more complicated than that. I’d rather just throw this out to community leaders and let them think about it.

About Steve

I'm Steve Conklin, AI4QR. I'm employed by Salesforce, on the SRE team for Heroku. Interests include Linux, open source software and hardware, electronics and music, and amateur radio.
This entry was posted in Open Source, Ubuntu. Bookmark the permalink.