I try to follow what academic researchers have to say about crowdsourcing. As a crowdsourcing practitioner, I welcome the clarity, holistic approach and intellectual rigor that academic research brings to the table.
But not always. Take, for example, a recent Harvard Business Review article by Linus Dahlander and Henning Piezunka, “Why Some Crowdsourcing Efforts Work and Others Don’t.” Based on the data presented in their 2014 research paper, Dahlander and Piezunka set out to explain “why some organizations succeed to attract crowds and others fail.”
I have to say that the very choice of examples of successful and failed crowdsourcing campaigns Dahlander and Piezunka mention in their HBR piece is confusing in itself. They don't seem to grasp the distinctions among different crowdsourcing venues: open innovation competitions (Netflix's 2009 $1 million prize), external innovation portals (Starbucks' MyStarbucksIdea.com), open innovation intermediaries (NASA's cosmic rays challenge) and online surveys (the German Pirate Party initiative). Sure, all four represent practical ways of using crowdsourcing, but the rules and conditions driving their success or failure are so different that throwing them into the same mix amounts to the proverbial comparison of apples and oranges.
The confusion doesn’t go away when the authors turn to practical recommendations on actions managers could take to ensure the success of their crowdsourcing campaigns. First, Dahlander and Piezunka suggest that instead of waiting for external contributions, organizations should present their own ideas first and then invite crowds to discuss them. By showing crowds the types of ideas an organization is interested in, the authors argue, it motivates potential contributors to submit ideas of their own.
Second, Dahlander and Piezunka insist that organizations should publicly discuss already submitted ideas and indicate which of them are most valuable. By signaling their interest to current and future contributors, the authors argue, organizations will further energize the submission process, ultimately resulting in a larger number of submitted ideas.
I see two major problems with Dahlander and Piezunka’s piece. To begin with, the authors seem to define the success of a crowdsourcing campaign predominantly by the size of the assembled crowd (“why some organizations succeed to attract crowds and others fail”). This is completely misleading. Organizations turn to crowdsourcing to solve problems, and the only proof of success that really matters is whether the problem was solved or not. Of course, the size of the crowd plays an important role – in general, the larger the crowd, the higher the chance of solving the problem – but as any crowdsourcing practitioner will tell you, the major prerequisite for success is the ability to correctly define and articulate the problem you’re trying to solve. If the problem is defined correctly, even a small crowd can solve it (I know of such cases); if it’s defined incorrectly – so that you’re solving the wrong problem – no crowd, regardless of its size, will help (and I know of such cases, too). By the way, are Dahlander and Piezunka aware that the winning solution for the Netflix $1 million prize, the one chosen among 44,000(!) submissions, was never implemented? Do you call that success? I don’t.
Even more troubling, Dahlander and Piezunka appear to confuse crowdsourcing with brainstorming. It’s brainstorming, not crowdsourcing, that requires throwing in an idea and asking other folks to comment on it – and it doesn’t matter whether you’re asking a few people in a room or a few thousand online. Now, I have nothing against brainstorming; yet one has to realize that, by virtue of the collective discussion it implies, brainstorming almost always ends with a consensus decision – or, worse, with a decision pushed through by a vocal minority.
In contrast, when you crowdsource, you start with a problem you want to solve and then provide your crowd with a clear sense of how a successful solution will be selected. Then you let the crowd do the job. Unburdened by your prior “ideas,” the crowd will come up with solutions aimed at a single goal: solving your problem. That’s why crowdsourcing campaigns so often deliver completely unexpected, even unorthodox, solutions.