During my time at InnoCentive, there was one task my colleagues and I hated the most: collecting clients’ feedback on contributions by members of the InnoCentive crowd.
The clients would post a problem to the InnoCentive website, and a crowd of so-called Solvers would submit solutions to it. According to the rules (embodied in the company’s slogan, “You pay for success, not the effort”), clients would pay only for the best contributions, the ones that really solved the problem, and reject the rest without payment. The clients’ only obligation was to provide a brief explanation of why each unsuccessful solution was rejected.
And here is where the problem began. When the clients didn’t get the solution they were looking for, they apparently felt so disappointed that they didn’t want to spend any additional time and resources prolonging the “agony.” But even when they got the solution they hoped for, they usually believed that by paying a prize to the winner and a fee to InnoCentive, they had fulfilled all their major obligations. And what about the contract? Hey, are you going to sue us over a tiny clause in the contract after we’ve paid you money?
As a result, we at InnoCentive were often left with dozens of proposals, some of them quite good, without knowing what to say to the people who had submitted them. I won’t go into detail on how my colleagues and I dealt with the issue. My point here is that we always knew that communicating the results to the members of the crowd was of paramount importance, something that could not be ignored or forgotten.
That’s why I was so amused (though not surprised) to read a study by Henning Piezunka and Linus Dahlander, “Idea Rejected, Tie Formed: Organizations’ Feedback on Crowdsourced Ideas.” Piezunka and Dahlander analyzed crowdsourcing campaigns run by 70,159 organizations and found that 88% of them didn’t bother replying to submitters whose ideas were not selected. The study further showed that, unsurprisingly, first-time submitters who got no response were less likely to take part in subsequent campaigns run by the same organizations.
I wouldn’t rush to call the innovation managers in these organizations lazy or rude. The problem is not with them but with the crowdsourcing model they’re using, the one I call the bottom-up model (a.k.a. “idea generation”). I’ve written about this model and its shortcomings many times, most recently here.
The major fault of this approach is that the contributors are asked to generate “ideas” whose parameters, including success criteria, are poorly defined. As a result, bottom-up crowdsourcing campaigns end up with hundreds of half-baked proposals, and the campaign managers then have a hard time going through all the submissions to find those that make at least some sense. No wonder the managers have neither the time nor the desire to provide feedback on the 99% of proposals they don’t even want to reread.
There is a plausible alternative to the bottom-up approach: the top-down model, which focuses on problems. These problems are identified and formulated by the managers, who then ask contributors to solve them according to success criteria defined up front. This approach results in dramatically fewer submissions, but their quality is remarkably higher. For example, InnoCentive, a platform that utilizes the top-down model, boasts a success rate of up to 85% for its projects.
And then, yes, there remain proposals (most of them, in fact) that were rejected. And someone must communicate the outcome to their authors. It may sound paradoxical, but even when you work with a crowd, you must communicate with its members: always individually and often person-to-person.
Image credit: https://www.speakwithpersuasion.com/tag/getting-started/
To subscribe to my monthly newsletter on crowdsourcing, go to http://eepurl.com/cE40az