As I mentioned a couple of years ago, I try to follow what academic researchers write about crowdsourcing. As a crowdsourcing practitioner, I welcome the clarity, holistic approach, and intellectual rigor academic research brings to the table. On occasion, however, I come across a paper that, instead of clarifying the crowdsourcing waters, muddies them.
Unfortunately, a recent HBR article (“Why Crowdsourcing Often Leads to Bad Ideas”) by Oguz A. Acar of Cass Business School at City, University of London falls into the latter category. Prof. Acar complains – and not unreasonably, I must add – that “most crowdsourcing initiatives end up with an overwhelming amount of useless ideas.” Prof. Acar believes he has identified a reason for this tsunami of “bad ideas”: our inattention to what motivates the crowd producing them.
To make his point, Prof. Acar studied the motivations of crowd members on InnoCentive, a popular crowdsourcing platform. He found that top-quality solutions usually come from crowd members driven by intrinsic and extrinsic motivations, whereas learning and social motivations have no positive effect on solution quality. Prof. Acar then recommends designing crowdsourcing campaigns to encourage the participation of intrinsically and extrinsically motivated crowd members, thereby improving the quality of submitted ideas.
This is where I agree with Prof. Acar: crowd motivation does matter. I wrote about that back in 2014, and the study that Doug Williams and I conducted the same year added some field data to this conclusion.
My problem with Prof. Acar’s reasoning is that he seems to ignore the fact that the term “crowdsourcing” may mean many different things – and one mustn’t confuse them. In a nutshell, crowdsourcing consists of two major types of activities: adding capacity (“microtasking”) and accessing expertise (“crowdsourced innovation”). The second type, accessing expertise, can be further divided into idea generation and problem-solving, which I propose calling “bottom-up” and “top-down” crowdsourcing, respectively (I wrote about the benefits and drawbacks of both here and here).
It’s the idea generation (“bottom-up”) version of crowdsourcing that routinely produces a large number of useless ideas. But InnoCentive, the platform Prof. Acar invoked, is a problem-solving (“top-down”) platform, not an idea generation one. Learning what motivates people trying to solve precisely defined technical and business problems, and then applying this knowledge to the motivation of folks asked to generate vaguely defined “ideas,” doesn’t make much sense.
But Prof. Acar’s arguments are misleading for another, much more important reason. As other academic researchers have repeatedly pointed out (see, for example, here) – and as most crowdsourcing practitioners would confirm – the key factor defining the ultimate success or failure of a crowdsourcing campaign is not the crowd (its size, composition, motivation, etc.), but the question this crowd is presented with.
I call it the “80:20 rule”: 80% of the unsuccessful crowdsourcing campaigns I’m aware of failed because the question presented to the crowd was poorly formulated; only 20% failed because the crowd itself performed poorly.
The crowdsourcing campaigns Prof. Acar mentions generate a lot of useless ideas not because the crowds are badly motivated, but because the parameters of the “ideas” were poorly defined by the campaign managers (“crowdmasters”). So, my recommendation to them would be this: first master the art and science of formulating the question you’ll ask your crowd – by both properly defining the problem and describing a “perfect” solution to it.
And if the outcome of a crowdsourcing campaign falls below your expectations, don’t automatically blame the crowd – or crowdsourcing in general. Start by taking a closer look at yourself.