Crowdsourcing: two approaches, two different outcomes

In my July 16 post, I set out to show that crowdsourcing is a cost-effective tool that lets you solve problems at a much lower cost than other innovation tools, and that, therefore, the low popularity of crowdsourcing, which I wrote about earlier, can't be explained by its being prohibitively expensive. In this post, I'll continue the comparison exercise and talk about the effectiveness of two different crowdsourcing approaches.

I like saying that one needs only two things to run a crowdsourcing campaign: a question and a crowd. Selecting and formulating a proper question to crowdsource is a topic of immense importance, and I'll take it on later. Today, let me deal with crowds.

There are two principal ways to acquire a crowd: build it from scratch (I call this approach "build-the-crowd") or use the proprietary crowds already assembled by a number of commercially available crowdsourcing platforms (I call this approach "rent-a-crowd"). Companies usually build their own crowds by creating so-called External Innovation Portals (or something similarly named). A typical EIP is essentially a website that invites anyone outside the company to register and then submit innovative ideas in the areas of the company's corporate interests.

Examples of EIPs are numerous. They're especially popular among consumer-oriented companies: Starbucks runs My Starbucks Idea, General Mills has G-WIN, and Clorox came up with CloroxConnect. Tech companies, including pharmaceutical and medical device makers, don't stay on the sidelines either: Medtronic invites you to Innovate with Medtronic, and AkzoNobel suggests that you Enter our Open Space. Energy giant Shell lures you into its own portal, modestly named Shell GameChanger.

How effective are EIPs? Unfortunately, hard numbers are difficult to come by: companies are predictably reluctant to publicly discuss the efficiency of their open innovation programs, and EIPs are no exception. Yet one brave company, Dell, does provide stats on how its EIP, IdeaStorm, is performing. IdeaStorm's front page says that, all in all, 24,948 ideas have been submitted, of which 549+ have been implemented. Leaving aside the vagueness of the word "implemented," the success ratio of the project barely exceeds 2%. Not a fountain of innovative ideas for Dell, to say the least, and as my own involvement with corporate EIPs suggests, other corporate portals aren't doing any better.
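If you want to check that 2% figure yourself, here's a minimal back-of-the-envelope calculation using the IdeaStorm numbers quoted above; note that since the implemented count is reported as "549+," the result is a lower bound on the true ratio.

```python
# Back-of-the-envelope check of IdeaStorm's success ratio,
# using the figures quoted above. "549+" is a lower bound on the
# implemented count, so the true ratio may be slightly higher.
submitted = 24_948
implemented = 549

success_ratio = implemented / submitted
print(f"IdeaStorm success ratio: {success_ratio:.1%}")
# -> IdeaStorm success ratio: 2.2%
```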

The apparent low efficiency of EIPs stems from the way they crowdsource knowledge from the outside. Many EIPs simply ask for "ideas" without clearly defining what constitutes a valuable idea for that particular company. (I call this approach "bottom-up" and have criticized it on a number of occasions; see, for example, here and here.) As a result, a lot of irrelevant or low-quality ideas get submitted, dramatically decreasing the signal-to-noise ratio while significantly increasing the resources the company must allocate for the initial screening of submissions.

Some companies have recognized this shortcoming and have begun including descriptions of the areas where innovations are especially welcome. For example, AstraZeneca lists "R&D Focus Areas" on the front page of its OpenInnovation portal; similar descriptions can be seen on Philips' SimplyInnovate.

Unilever went even further. Its portal, The Unilever Foundry, features Challenges: reasonably well-defined problems in a few product/service categories. Each Challenge explains the context of the problem and describes what exactly Unilever is looking for. Challenges have a submission deadline and a budget. Unfortunately, I have no data on how successful Unilever's Challenges are, and I'd appreciate any information on this topic.

Unilever's Challenges adhere almost exactly to the Challenge concept introduced back in 2001 by InnoCentive, an open innovation service provider specializing in crowdsourcing. Instead of asking for "ideas," InnoCentive's clients post to the InnoCentive website descriptions of well-defined technical or business problems ("Challenges"), each time-bound and with an "award tag" attached. Then a huge crowd of InnoCentive "Solvers" (375,000+ from 200 countries), which InnoCentive has assembled over the years and now "rents out," works on finding solutions to these problems. Submitted solutions are collected by the InnoCentive staff and delivered to the client. The client has a fixed amount of time to review all the submissions and announce a "winner," who receives the money award.

The solution rate of InnoCentive Challenges is very impressive. Although I failed to find the precise number on the company's recently redesigned website, about a year ago InnoCentive claimed it was 85%. A similarly high success rate for its crowdsourcing campaigns, up to 90%, was reported by another open innovation service provider, IdeaConnection. Just compare these numbers to the roughly 2% success rate of Dell's IdeaStorm!

The InnoCentive (and IdeaConnection) mode of crowdsourcing is an example of what I call the "top-down" approach, a process in which a well-defined problem is offered to a pool of potential solvers, after which submitted solutions are reviewed, and successful ones identified, based on a number of criteria articulated in advance. (Again, I have already written about the benefits of the "top-down" approach: here and here.)

The available numbers therefore strongly suggest that the top-down mode of crowdsourcing (from a problem to a solution) is much more effective than the bottom-up mode (from "ideas" to potential implementation). At the same time, I do see value for a large company (and I emphasize: large) in spending time and money to create its own online crowd instead of paying fees to providers of crowdsourcing platforms such as InnoCentive (IdeaConnection doesn't charge an upfront fee). From this point of view, the business model adopted by Unilever with its Challenges, a combination of the top-down approach and the company's own crowd of solvers, looks to me like an optimal way to conduct corporate crowdsourcing campaigns.

Image credit: http://www.huffingtonpost.ca/ashley-redmond/gap-year-canada_b_5948708.html

About Eugene Ivanov

Eugene Ivanov is a business and technical writer interested in innovation and technology. He focuses on factors defining human creativity and socioeconomic conditions affecting corporate innovation.
This entry was posted in Crowdsourcing.

6 Responses to Crowdsourcing: two approaches, two different outcomes

  1. Eugene,

    If you are looking for data on this, ask DARPA. They have been issuing "technology grand challenges" for a long time, at least a decade (IDK). Furthermore, their challenges come with a reliable prize, usually $1 million. Since they are a government agency, they should have all sorts of statistics, available to you with a "simple" FOIA request.

  2. Pingback: Don’t “fiddle” with the crowd — ask it better questions instead

  3. Pingback: What is crowdsourcing?

  4. Pingback: Don’t blame crowdsourcing for “bad ideas”
