Don’t “fiddle” with the crowd — ask it better questions instead

(This post originally appeared on

As the examples of successful use of crowdsourcing to address complex technical, business and social issues grow in number, so do the instances of failed crowdsourcing campaigns. To make crowdsourcing a widely recognized idea-generating and problem-solving tool, it’s imperative to understand why this tool can fail or underperform.

Why do crowdsourcing campaigns fail?

In my experience, crowdsourcing campaigns fail for two major reasons. The first is using its sub-optimal version, which I call the bottom-up model of crowdsourcing. I wrote about this model and its shortcomings here, here, here, here and, most recently, here.

The second reason is the lack of understanding that the most crucial factor that defines the ultimate success or failure of any crowdsourcing campaign is the ability to properly identify, define and articulate the problem that the crowd will be asked to solve. I call it the “80:20 rule”: 80% of unsuccessful crowdsourcing campaigns I’m aware of failed because of the inability to properly formulate the question to be presented to the crowd; only 20% did so because of a poor match between the question and the crowd.

Blaming the crowd

Unfortunately, too often when a crowdsourcing campaign fails to deliver, it is the crowd that gets most of the blame. As a result, instead of improving the efficiency of the problem-definition process, organizations begin to fiddle with their crowds. For example, a recent HBR article suggests switching to “carefully selected” crowds, the ones composed of employees or suppliers (the “experts”), rather than consumers (the “amateurs”).

This is bad advice. The idea that you can cherry-pick the best participants for your next crowdsourcing campaign has no basis. To begin with, if you go to a large external crowd–by using external innovation portals or open innovation intermediaries–selecting a perfect “sub-crowd” becomes either cost-ineffective or outright impossible. And yes, working with internal crowds of employees is a solid approach; however, the usefulness of internal crowds, as opposed to external ones, has its limits.

Even more importantly, the widespread belief that only people with relevant knowledge and expertise can solve your problem is plain wrong. The history of many successful crowdsourcing campaigns proves that great ideas can come from completely unexpected sources. Moreover, research shows that the likelihood of someone solving a problem actually increases with the distance between this person’s own field of expertise and the problem’s domain. However paradoxical it may sound, packing your problem-solving team with experts—as opposed to using a large and diversified crowd of “amateurs”—will make your problem-solving process weaker, not stronger.

How to make crowdsourcing campaigns more effective?

The desire for smaller and carefully selected crowds is driven by a fear that large crowds will generate a lot of low-quality ideas, the evaluation of which would be a huge burden for the organization running a crowdsourcing campaign. However, there is an effective way of generating higher-quality responses with a crowd of any size: perfecting the question you’re going to ask. Providing the crowd with a list of specific, precise and, ideally, quantitative requirements that each successful submission must meet will have a dramatic positive effect on submission quality. Besides, this will substantially decrease the burden of proposal evaluation because low-quality submissions will be easily filtered out.

My recommendation to organizations that want to increase the efficiency of their crowdsourcing campaigns is two-fold. First, use as large and diverse a crowd as you can possibly get, and then let qualified members of the crowd self-select based on their own assessment of the problem and their abilities. Second, master the art and the science of formulating the question that you’ll ask your crowd—by both properly defining the problem and describing a “perfect” solution to it.

Remember: the beauty of crowdsourcing is that you don’t need to look for solutions to your problem. You just post your problem online, and the right solution will come to find you.

The image was provided by Tatiana Ivanov

Posted in Crowdsourcing, Innovation

Does crowdsourcing need “rethinking”?


(This post originally appeared on Edge of Innovation)

An article in the latest issue of Harvard Business Review describes a product development study by Reto Hofstetter, Suleiman Aryobsei and Andreas Herrmann (Journal of Product Innovation Management, forthcoming). What caught my attention was the article’s title: “Rethinking Crowdsourcing.”

Why does crowdsourcing need “rethinking”?

Hofstetter and co-authors reviewed 87 crowdsourcing projects run by 18 companies on Atizo360°, a Swiss-based platform. The projects in question appear to have been typical “idea generation campaigns” that asked consumers to come up with new product development ideas. For example, in one of the campaigns, consumers were asked to propose new flavors for drinks manufactured by a Swiss soft drink company.

Each campaign analyzed by the Hofstetter team generated, on average, 358 responses. Because evaluating such a considerable number of ideas is time- and resource-consuming, managers at the companies took advantage of the Atizo platform’s functionality allowing participants to “like” each other’s submissions. So now, instead of sorting through all the ideas, managers could focus only on the most “likable”—at least as a first screen.

The Hofstetter team identified a serious flaw in the process: the apparent value of some ideas was overinflated by reciprocal “likes” from connected contributors who would prop up each other’s contributions. When submitted proposals were assessed by independent evaluators, no correlation was found between the most “likable” ideas and those that led to successful products.

Hofstetter and co-authors concluded that “[o]nline consumer votes are unreliable indicators of actual idea quality.” The HBR article’s own verdict was even more damning: “It can be unwise to rely on the crowd.” Ouch!

To those folks who believe it’s unwise to rely on the crowd—or, worse, that “crowds are stupid”—I have a clear message: crowdsourcing doesn’t need rethinking. What needs rethinking is the way we use it. Crowdsourcing is, first and foremost, a question that you ask a crowd; the quality of the question is the most crucial factor determining the quality of the answer. You ask the crowd a smart question–you have a chance to get a smart answer. You ask the crowd a stupid question–the answer will almost certainly be stupid.

I have two specific comments on the HBR article.

  1. Crowdsourcing “ideas” is a bad idea.

Characteristically, all criticism of crowdsourcing—whether blaming it for the downfall of Quirky, choosing the wrong name for a research ship or the low-quality product ideas in the above study—is targeted against a particular “idea generation” version of it, which I call the bottom-up model of crowdsourcing. As I argued very recently, the bottom-up model has multiple flaws. One of them is that the burden of evaluating submitted ideas usually falls on business units that already have a full load of their own research projects. Faced with the need to find resources for “newcomers,” managers begin to cut corners and push the responsibility of evaluating submitted proposals back to the crowd—exactly as described above. Having only a vague (at best) understanding of what the managers really need, the crowd chooses the most “likable”—and usually very conventional or even trivial—ideas. It is hardly surprising, therefore, that the efficiency of “idea generation” campaigns is extremely low: barely 1-2% of the original ideas lead to eventual implementation.

There is a plausible alternative to the bottom-up approach: the top-down model. In the top-down model of crowdsourcing, the focus is on problems. These problems are identified and formulated by managers, who then ask internal or external crowds to find solutions to them. This approach is remarkably efficient. For example, InnoCentive, a crowdsourcing platform utilizing the top-down model, boasts a success rate of up to 85% for its projects.

I’m not saying that the bottom-up model has no right to exist. In innovation-mature organizations it can be successful–and I covered such a success story in the past. But for organizations that are at the very beginning of the innovation journey—and let’s face it, we’re talking about most organizations–the top-down model must be the model of choice.

  2. Voting for “ideas” is a bad idea, too.

Many folks seem to believe that they’re doing crowdsourcing when they join hundreds or even thousands of other folks online and start exchanging “ideas” and opinions about them. (“Crowdsourcing on Facebook” is becoming a cliché.) Unfortunately, these folks confuse crowdsourcing with another problem-solving tool: brainstorming. Adding to this confusion is the fact that almost every commercially available “idea management” software package provides functionality allowing contributors to comment on each other’s ideas and vote for them.

But crowdsourcing differs from brainstorming in one important respect: it requires independence of opinions, a feature of crowdsourcing underscored by James Surowiecki in his classic book “The Wisdom of Crowds.” When you run a crowdsourcing campaign, you should make sure that the members of your crowd, whether individuals or small teams, provide their input independently of the opinions of others. It’s this aspect of crowdsourcing that results in highly diversified, original and even unexpected solutions to the problem–as opposed to brainstorming, which almost always ends with a group reaching consensus. That’s why I completely agree with the Hofstetter team that the number of votes is not an indicator of idea quality; moreover, I believe that voting for ideas has a net negative effect on their quality.

However, as I mentioned before, managers resort to voting to relieve the pain of the evaluation process. Is there anything they can do to reduce this burden while maintaining idea quality? Yes. Managers must start at the other side of the “crowdsourcing equation”: the question. Instead of asking crowds for open-ended suggestions (“Bring us something and we’ll tell you whether we like it or not”), managers must be very precise about what kind of ideas they’re looking for. For example, they can provide a list of specific (and, if appropriate, quantitative) requirements any successful idea must meet. Even more useful would be a request to conclude every submission with a point-by-point account of how the proposed idea matches every listed requirement. This may not automatically result in increased quality of ideas, but it will undoubtedly help managers easily weed out low-quality “noise.”
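To make that screening step concrete, here is a minimal, purely illustrative Python sketch of filtering submissions against an explicit requirements list. The field names and thresholds are invented for illustration; they are not from the article or any of the studies mentioned.

```python
# Hypothetical sketch: screening crowd submissions against explicit,
# quantitative requirements. All fields and thresholds are invented.

REQUIREMENTS = {
    "unit_cost_usd": lambda v: v <= 2.50,      # must cost at most $2.50/unit
    "shelf_life_days": lambda v: v >= 180,     # must last at least six months
    "uses_existing_line": lambda v: v is True, # must fit current production
}

def meets_requirements(submission):
    """Return (passed, report): a point-by-point account of which
    listed requirements the proposed idea meets."""
    report = {}
    for field, check in REQUIREMENTS.items():
        value = submission.get(field)
        report[field] = value is not None and check(value)
    return all(report.values()), report

submissions = [
    {"id": 1, "unit_cost_usd": 1.90, "shelf_life_days": 240, "uses_existing_line": True},
    {"id": 2, "unit_cost_usd": 3.10, "shelf_life_days": 90},  # fails, and misses a field
]

# Low-quality "noise" is filtered out before any human evaluation.
shortlist = [s for s in submissions if meets_requirements(s)[0]]
print([s["id"] for s in shortlist])  # [1]
```

The point of the sketch is only that explicit, checkable requirements turn the first evaluation pass into a mechanical filter, leaving managers to judge a much smaller shortlist.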

Reiterating my key point: crowdsourcing doesn’t need “rethinking.” It’s an extremely powerful problem-solving tool, but like any other tool, it requires knowledge and experience to be used properly. Those who know how to use it will succeed in harnessing the proverbial wisdom of crowds. Those who don’t, won’t. It’s that simple.

p.s. To subscribe to my monthly newsletter on crowdsourcing, go to

Image was provided by Tatiana Ivanov

Posted in Crowdsourcing, Innovation

Bring me problems, then solutions, then problems again…

Innovation managers hate the line “Don’t bring me problems, bring me solutions.” They insist that before any innovation project can begin, a thorough investigation of the underlying problems must take place; collecting solutions can only start when the problems are identified and well defined. Albert Einstein’s quote is often invoked in this context: “If I had an hour to solve a problem I’d spend 55 minutes thinking about the problem and 5 minutes thinking about solutions.”

In a recent Harvard Business Review article, Sabina Nawaz, an executive coach, suggested that organizations should ditch the “Don’t bring me problems, bring me solutions” mentality and instead encourage their employees “to bring up problems in a more productive way.” Three specific approaches were proposed: making it safe to bring up bad news; requiring problem statements instead of complaints; and finding the right people to solve the problem.

Although I fully subscribe to the “problems first, solutions second” point of view, I nevertheless believe that a more holistic approach is needed to deal with the “problem-solution” dilemma. Instead of looking for more formalized ways of bringing up problems, managers should establish a sustained problem-solving process.

With such a process in place, the question of which is more important, a problem or a solution, will simply lose its relevance. Organizations and teams will constantly look for problems, both old and emerging, and define these problems in a specific and actionable way. A solution-generating phase, involving various techniques (brainstorming, co-creation with customers, internal and external crowdsourcing, etc.), will follow, with the best solutions being selected and implemented. A solved problem will automatically be replaced by one waiting for a solution.

The existence of a sustainable portfolio of problems will bring out the best in employees. Some people are better at spotting trends and sensing trouble, whereas others excel at finding fixes; with a constant flow of problems and solutions, everyone will find his or her sweet spot.

As for managers, they may try this line: “Bring me problems, then solutions, then problems again…”

p.s. To subscribe to my monthly newsletter on crowdsourcing, go to

Image credit:

Posted in Innovation, Portfolio Management

The numbers game

In my previous post, I argued that the belief, popular in corporate innovation circles, that ideas are plentiful and cheap (“a dime a dozen”) doesn’t withstand scientific scrutiny. A joint Stanford/MIT research team has presented a wide range of empirical evidence showing that research productivity, a scientific term for a layman’s “idea,” is declining. According to the authors’ calculations, the decline rate amounts to an average of 5.3% per year and can be even higher in some areas of the economy. In other words, ideas are not plentiful; in fact, we’re experiencing a growing shortage of ideas.

There are two obvious ways to overcome this shortage of ideas. The first is to increase the number of people generating them—and this is, according to the Stanford/MIT study, exactly how the U.S. economy has dealt with the problem for the past 40+ years. However, we all understand that this approach is unsustainable in the long run.

The second approach, much more appealing from the economic and social points of view, would be to increase research productivity, i.e., to find ways to increase the number of ideas generated by the same number of people. That’s what drew my attention to a recent Harvard Business Review article by Dylan Minor, Paul Brook and Josh Bernoff. Having analyzed data from 154 public companies, Minor and co-authors show that using “idea management systems”—software that allows employees to submit and evaluate ideas and keep track of them—results in companies generating more and better ideas.

The authors further argue that the most important variable predicting the ultimate success of the idea generation process is what they call the ideation rate, which they define as “the number of ideas approved by management divided by the total number of active users in the system.” Minor and co-authors go as far as to claim that higher ideation rates are correlated with a company’s growth and net income.
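Since the study’s ideation rate is just a ratio, it is easy to compute; the sketch below uses illustrative numbers, not data from the study.

```python
def ideation_rate(approved_ideas, active_users):
    """The study's metric: ideas approved by management divided by the
    total number of active users in the system."""
    if active_users <= 0:
        raise ValueError("need at least one active user")
    return approved_ideas / active_users

# Illustrative numbers only: 30 approved ideas from 400 active users.
print(ideation_rate(30, 400))  # 0.075
```

Note that the denominator is active users, not submitted ideas, so the metric rewards broad participation that converts into approved ideas rather than sheer submission volume.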

The authors have identified four factors that drive the ideation rate. The first three would hardly come as a surprise to any innovation practitioner. The ideation rate is higher when more people participate “in the system” and when more idea generation campaigns are held. It is also higher when a company engages not only people who are traditionally involved in the innovation process but also employees from “distant” departments: sales, support and manufacturing.

The fourth identified factor is “engagement.” Minor and co-authors insist that extensive feedback by other employees improves the quality of submitted ideas. (I tend to disagree: in my experience, comments by others often intimidate employees proposing non-trivial, “out-of-the-box,” ideas.)

I’m not a fan of the idea generation process in general, which I call the bottom-up model of corporate innovation and which, in my opinion, has substantial flaws. First, employees, especially at the lower steps of the organizational ladder, usually have only a vague understanding of strategic corporate goals. As a result, the ideas they submit are often completely misaligned with the company’s real needs. Second, the burden of evaluating and implementing submitted ideas usually falls on business units that already have a full load of their own research projects. To make room for “newcomers,” business units would have to kill existing projects, which is not something most companies are good at. Third, in large companies, R&D budgets for the next year are usually drafted no later than Q3 of the prior year. That means new projects receive no immediate financial support and must wait at least a few months to get funded, at which point their utility is often highly questionable–not to mention the detrimental effect this delay has on employee morale.

I’m not saying that the bottom-up model of innovation has no right to exist. In innovation-mature organizations it can be remarkably successful–and I covered such a case recently. But if your organization is at the very beginning of an innovation journey, using this model may be problematic.

What can organizations do to increase the efficiency of their idea generation process? I’d recommend three approaches.

  1. Define what innovation means for your organization

Each organization must clearly define what innovation means for it; doing this in the format of an Innovation Charter is usually a good idea. The definition should include the areas of desired innovation (product innovation, business model innovation or operational improvements), time horizons, target customers, the expected size of the market and so on. These parameters will serve as a “mold” shaping the creative energy of the company’s employees toward ideas that really matter to the organization. Yes, the number of proposed ideas is likely to drop, yet their value will almost certainly increase.

  2. Create a pool of ad hoc innovation experts

Organizations would benefit from creating a pool of ad hoc experts bringing together people from all departments relevant to innovation activities: R&D, manufacturing, marketing, legal, finance, etc. The names of the experts, along with their areas of expertise, could be placed on the company’s intranet. Every employee considering submitting an idea would be able to contact an expert should the need for specific technical or business advice arise. Again, this will result in improved quality of submitted ideas.

  3. Create a separate Innovation Fund

Organizations should establish a separate Innovation Fund to pay for projects that fall outside the regular budgeting process. The amount of money in this Fund could be adjusted annually to reflect the company’s appetite for additional projects, but it must be fixed for the current fiscal year, meaning that the Fund will not become a “rainy day fund” to cover the company’s short-term financial emergencies (as often happens to “innovation money”).

However, perhaps organizations should avoid playing the “ideas numbers game” at all. To this end, I’d recommend that they take a careful look at the alternative to the bottom-up model of innovation: the top-down model (I wrote about it here and here). In my experience, the top-down model will serve the innovation needs of most organizations much better.

p.s. To subscribe to my monthly newsletter on crowdsourcing, go to

Image credit:

Posted in Innovation

Are ideas plentiful and cheap?


We often hear: ideas are cheap. “Ideas are a dime a dozen. People who implement them are priceless,” claims a 2013 article in Forbes. As a prevailing point of view has it, innovative ideas are plentiful; it’s the idea implementation that represents a bottleneck in the innovation process.

A joint Stanford/MIT research team has recently challenged the “cheap ideas” dogma. The researchers presented a wide range of empirical evidence showing that research productivity, a scientific term for a layman’s “idea,” is actually sharply declining. According to their calculations, in the economy as a whole, research productivity declines at an average rate of 5.3% per year.

Analysis of specific research areas confirms this trend. In semiconductors (the playground of the famous Moore’s Law), research productivity is declining at a rate of 6.8% per year; in agribusiness and pharmaceutical research, the annual decline is about 5.0%.
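To put these decline rates in perspective, a constant annual decline compounds; the short sketch below (my own back-of-the-envelope calculation, not from the study) converts each quoted rate into the time it takes research productivity to halve.

```python
import math

def half_life_years(annual_decline):
    """Years for research productivity to halve at a constant annual decline rate."""
    return math.log(0.5) / math.log(1 - annual_decline)

for area, rate in [("economy-wide", 0.053),
                   ("semiconductors", 0.068),
                   ("agribusiness/pharma", 0.050)]:
    print(f"{area}: productivity halves roughly every {half_life_years(rate):.1f} years")
```

At the economy-wide rate of 5.3% per year, productivity halves roughly every 13 years; at the semiconductor rate of 6.8%, in under 10.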

In other words, contrary to a popular belief, ideas are not plentiful. In reality, we’re experiencing a growing shortage of ideas.

If research productivity is in decline, how then is steady economic growth being sustained? The answer is simple: by raising what economists call research effort–which in layman’s terms means the number of researchers. Indeed, the number of researchers required to achieve the famous doubling, every two years, of the density of computer chips (Moore’s Law) is more than 18 times larger today than it was in the early 1970s.

In some specific areas of agribusiness research, the number of researchers rose 23-fold between 1969 and 2009. And while the research productivity behind the drugs approved by the FDA between 1970 and 2015 declined at an annual rate of 3.5%, this decline was offset by 6.0% annual growth in the number of researchers involved.
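As a back-of-the-envelope check of this offset (my own arithmetic, not the study’s): if total idea output is the number of researchers times productivity per researcher, the two annual rates combine multiplicatively.

```python
# Researchers grow 6.0% per year; productivity per researcher falls 3.5% per year.
researcher_growth = 0.060
productivity_decline = 0.035

# Total idea output = researchers x productivity, so the rates multiply.
net_growth = (1 + researcher_growth) * (1 - productivity_decline) - 1
print(f"net annual growth in idea output: {net_growth:.2%}")  # about 2.3%
```

The growth in headcount more than compensates for the productivity decline, leaving total output rising at roughly 2.3% per year.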

Given the relatively high salaries of researchers involved in pharmaceutical research, there is indeed every reason to call them “priceless.” And taking into account the steady growth in their numbers, one shouldn’t be surprised by the galloping costs of today’s drug development.

But let me turn to my favorite subject: crowdsourcing. If the increase in the number of researchers has become the driving force sustaining steady economic growth—against the background of the falling number of innovative ideas—then crowdsourcing represents one approach to facilitating this process. While keeping the total number of researchers constant, crowdsourcing makes it possible to significantly raise the effective number of contributors generating “ideas” for a specific research project.

One could argue that one of the macroeconomic benefits of crowdsourcing is therefore its ability to utilize temporary assemblies of researchers without the need to create permanent research positions.

p.s. You can read the latest issue of my monthly newsletter on crowdsourcing here: To subscribe to the newsletter, go to

Image was provided by Tatiana Ivanov

Posted in Crowdsourcing, Innovation

Balancing startup success and failure: how VC investors can tip the scales

Recently, I came across an interesting paper, “Tolerance for Failure and Corporate Innovation,” published in 2011 by Xuan Tian of Indiana University and Tracy Yue Wang of the University of Minnesota. Tian and Wang studied the relationship between venture capital (VC) investors’ attitude towards failure and the performance of the startups backed by these VCs.

Tian and Wang developed an original approach to measuring VCs’ tolerance for failure: they examined VCs’ willingness to continue investing in ventures that had missed their target milestones. The idea here is that VCs have two options when dealing with an underperforming venture: either write it off immediately or give the entrepreneur a second chance by continuing to infuse capital into the venture. Other things being equal, the longer a VC firm waits before terminating funding of underperforming ventures, the more tolerant it is of early failures in its investments.
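As a purely hypothetical illustration of this kind of measure (not the paper’s actual data or methodology), one could score each VC firm by the average number of follow-on rounds it funds after a venture first misses a milestone:

```python
# Invented toy data: for each VC firm, the number of follow-on rounds it
# funded in each venture after that venture's first missed milestone.
from statistics import mean

portfolio = {
    "VC_A": [0, 1, 0, 2],  # mostly pulls the plug quickly
    "VC_B": [2, 3, 1, 4],  # keeps giving second chances
}

# Higher average = more failure-tolerant, other things being equal.
failure_tolerance = {vc: mean(rounds) for vc, rounds in portfolio.items()}
most_tolerant = max(failure_tolerance, key=failure_tolerance.get)
print(failure_tolerance, most_tolerant)
```

On this toy data, VC_B scores 2.5 rounds against VC_A’s 0.75 and would be classified as the more failure-tolerant investor.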

The most remarkable result of Tian and Wang’s study was that startups backed by more failure-tolerant VCs were more innovative (as judged by the number and significance of patents they filed). The authors also found that the effect of VC failure tolerance on startup innovation was much stronger when the failure risk was higher (e.g., in drug discovery) and thus failure tolerance was more needed and valued.

These findings are important given the role VC funds play in supporting the startup economy and entrepreneurship. They may also provide a clue to the growing popularity of corporate venture capital (CVC) funds, a specific subset of venture capital by which corporations invest in external startups.

In recent years, CVCs have been rapidly gaining traction. According to CB Insights, a tech market intelligence platform, the number of CVC groups making their first investment in startups in 2014 grew 28% over 2013 (and 208% over 2010); the number of existing CVC funds was expected to double in 2015. In fact, one-fifth of all venture deals in Q3 2015 included CVC participation.

Moreover, it has been shown that CVC investment is particularly beneficial to startups: startups that went public between 1980 and 2004 after being funded by at least one CVC investor outperformed those funded exclusively by traditional VCs (as measured by average annual revenue growth, increase in ROA and stock price performance).

Why is that? The first reason might be that most CVCs actively work with their portfolio companies, providing them with domain expertise and access to proprietary networks. One could thus argue that the industry-specific expertise delivered by the corporate teams is much more valuable to startups than the knowledge provided by the “generalists” employed by traditional VC firms.

The second reason could be rooted in the goals corporate and traditional VCs pursue when investing in startups. While traditional VCs invest capital with the sole objective of financial returns, CVCs often invest for strategic reasons, with financial return being only a secondary consideration. (In a CB Insights survey, four out of five CVCs named the strategic value of working with startups as a key decision driver.) Besides, managers of CVC funds are typically compensated by a fixed salary and corporate bonuses. This may make them more tolerant of the financial losses associated with investing in startups and thus more tolerant of startups’ failures.

Supporting this assertion—and pointing to the results of Tian and Wang mentioned above–is a study conducted by researchers from Wharton’s Mack Institute for Innovation Management. They tracked the performance of biotech startups—ventures with a particularly high risk of failure–funded by both types of VCs and found that startups backed by CVCs demonstrated higher innovation output (in terms of the number of granted patents and published scientific articles) than those backed by traditional VCs.

Taken together, the Wharton study and Tian and Wang’s study strongly suggest that the positive effect of CVC financing on startup performance, including innovation output, is due to the higher tolerance of startup failure displayed by corporate VC investors as compared to traditional VCs.

A lesson that aspiring entrepreneurs can draw from this story is this: if your venture carries an elevated risk of failure, choosing a corporate investor to support it might increase your chances of success.

p.s. You can read the latest issue of my monthly newsletter on crowdsourcing here: To subscribe to the newsletter, go to

Image Credit:

Posted in Innovation, Startups

A crowd inside

When you read the original (and, in my opinion, still the best) definition of crowdsourcing, proposed by Jeff Howe in 2006–“the act of taking a job traditionally performed by a designated agent (usually an employee) and outsourcing it to an undefined, generally large group of people in the form of an open call”—you get the impression that crowdsourcing is always something external to an organization.

Indeed, the focus of public attention has traditionally been on external crowdsourcing campaigns, such as NASA’s open innovation contracts or the contest launched by BP in the wake of the 2010 oil spill in the Gulf of Mexico.

Hidden from the spotlight—and almost completely ignored by academics and business writers–is so-called internal crowdsourcing: crowdsourcing conducted within the legal boundaries of an organization, harnessing the “collective wisdom” of the organization’s own employees. Yet a growing body of business cases shows that organizations have begun successfully using “inside” crowds to solve technical and business problems (“top-down” crowdsourcing), generate new product and service ideas (“bottom-up” crowdsourcing) or forecast internal and external outcomes and trends (prediction markets).

A recent article in MIT SMR is a testament that business periodicals have finally started paying attention to internal crowdsourcing too. A joint team of academics and business executives has summarized the results of a four-year research project that studied how organizations in different industries use internal crowdsourcing. Here, I’d like to offer some comments on the authors’ observations and practical recommendations, based on my own experience running internal crowdsourcing campaigns.

First of all, I completely agree with the authors that running internal crowdsourcing campaigns requires a well-defined process and a carefully chosen technological platform; it’s also critical to establish a flexible system of incentives that rewards not only the submitters of “good” ideas but also employees who contributed to the ultimate success of the campaign by commenting on and refining other people’s ideas. I also fully support the authors’ emphasis on transparency with regard to the results of a completed crowdsourcing campaign and, especially, on dealing with employees whose ideas or solutions were not selected for further development.

I, however, don’t share the authors’ tacit assertion that running “competitive” campaigns (that is, ones with selected “winners”) hurts collaboration and is therefore intrinsically counterproductive. In my experience, the reward system should first and foremost reflect the existing organizational culture. For instance, rewarding only a few top individual performers sits quite well with the cultural values of many U.S. companies. In contrast, European managers often cringe at the “winner takes all” (or, as they call it, “American”) approach; they prefer to reward teams instead of individuals and spread rewards among a larger number of participants.

Nor do I agree with the authors’ claim that it’s universally beneficial to place the submitters of the best ideas in charge of their implementation—in part as a reward for submitting them in the first place. Proposing ideas or solutions and implementing them often require different sets of skills–and not every individual possesses all of them. The decision on who will implement a selected proposal should be made solely on strategic business considerations, not dictated by the (often arbitrary) rules of the reward and recognition system.

I’d also question the authors’ recommendation to preferentially use technological platforms that facilitate shared development of solutions; I’m afraid that they’re confusing crowdsourcing with brainstorming. I’ve covered this topic in the past and will only mention here that crowdsourcing (whether internal or external) can only realize its full potential when the participants are capable of providing their input independently of each other. In this case, a crowdsourcing campaign may result in a completely unexpected, even unorthodox, solution. I’ve nothing against brainstorming—it’s a powerful problem-solving tool—but one has to remember that it almost always ends up with a consensus solution—or, worse, a solution pushed forward by a vocal minority.

The authors are absolutely right when they point out that internal crowdsourcing brings value to organizations well beyond the intellectual input it produces. (I made a similar point recently.) When asking employees to submit ideas and solutions, senior management sends a message that it values their views and opinions and considers them equal partners in fulfilling organizational objectives.

I was thus surprised by the authors’ recommendation to run internal crowdsourcing campaigns while keeping the participants anonymous. The authors base this recommendation on the notion that “[p]roviding a psychologically safe environment leads to greater employee participation and collaboration, resulting in more effective innovations.” They further argue that by hiding their organizational identity, employees will feel safer when making their contributions.

Although I fully agree (and wrote about this before) that a psychologically safe environment is crucial for innovation, the proposed anonymity defies the very objective of running internal crowdsourcing campaigns: giving employees a sense of participation, engagement and ownership of the organization’s future. Besides, if employees don’t feel safe openly sharing their views with the rest of the organization, that organization has a problem, one that can’t be solved by simply hiding someone’s identity.

The last point of disagreement has to do with the authors’ complaint that organizations often run internal crowdsourcing campaigns focused on short-term improvements. The authors believe that, quite to the contrary, organizations should “encourage employees to keep their focus on long-term opportunities.”

The authors appear to simply misinterpret what crowdsourcing is. Crowdsourcing is, first and foremost, an innovation tool—and, as such, it can be applied to a wide variety of business objectives. Some of these objectives are of a short-term nature while others do represent long-term opportunities; however, all of them must be subordinated to a larger innovation strategy. (I don’t want to go too deep here, but mentioning such important concepts as the 3-Horizon Model of Innovation and Integrative Innovation Management would be very relevant in this context.) Depending on the particular part of the overall innovation strategy, an appropriate innovation tool needs to be selected. Not the other way around.

p.s. You can read the latest issue of my monthly newsletter on crowdsourcing here: To subscribe to the newsletter, go to

Image was provided by Tatiana Ivanov

Posted in Internal Innovation Networks