Crowdsourcing: adding more diversity to your innovation process

I think that today only a very stubborn few would deny the positive role that diversity plays in the marketplace. Studies abound pointing to the better performance of companies promoting diversity in their ranks. For example, a 2015 McKinsey report on 366 public companies in the U.S., Canada, U.K., Brazil, Mexico and Chile found a statistically significant correlation between the number of women and minorities in companies’ upper ranks and their financial achievements. In particular, in the U.S., for every 10 percent increase in racial and ethnic diversity on the executive team, there was a 0.8 percent rise in EBIT (earnings before interest and taxes).

Not surprisingly, diversity also positively affects the corporate innovation process. Of course, the very idea that a group of people with diverse professional experiences would be better than a homogeneous group at solving complex problems is nothing new. However, a growing body of evidence (summarized in a 2014 article in Scientific American) now shows that socially diverse groups (that is, those with a diversity of race, ethnicity, gender and sexual orientation) are more innovative than socially homogeneous groups.

A couple of examples to illustrate this point. A study of 4,277 companies in Spain showed that the more women they had on staff, the more likely they were to introduce innovations into the marketplace over a two-year period. Similarly, data collected on 7,615 companies in London, U.K. demonstrated that businesses run by culturally diverse leadership teams were more likely to develop new products than those with homogeneous leadership.

It thus appears that non-homogeneous groups are simply smarter. Why? Scientific evidence summarized in a recent Harvard Business Review article provides an explanation. To begin with, diverse teams were more likely to constantly reexamine facts, question assumptions and scrutinize each member’s actions. Also, diverse teams processed information more carefully and were better than homogeneous groups at overcoming the individual cognitive biases of their members.

True, diversity does increase tensions or even conflicts within non-homogeneous groups, although, perhaps, not as much as people routinely think. And yet, the higher performance of heterogeneous groups significantly outweighs the risks of potential conflicts, especially if the group leaders are properly trained in conflict resolution.

The above studies represent good news for HR managers in charge of building innovation teams: in our rapidly globalizing workforce environment, finding people with diverse professional, personal and social attributes is becoming increasingly easier.

However, there is another way for organizations to reap the innovation benefits of diversity: to use crowdsourcing.

The availability of established crowdsourcing venues (such as InnoCentive, IdeaConnection and NineSigma) allows access to literally tens of thousands of qualified people capable of solving the most complex technical and business problems. Spread over 200+ countries and composed of individuals from every imaginable ethnic, religious or professional group, this “crowd” represents the most diverse innovation team one can imagine.

There is one aspect of crowdsourcing that makes it even more attractive than working with fixed innovation teams. The way a team operates implies that no matter how many different opinions were proposed during the course of the project, the final decision will be a sort of consensus, which is not necessarily the best solution.  In contrast, a bona fide crowdsourcing campaign makes sure that members of your crowd provide their input independently of the opinion of others. It is this aspect of crowdsourcing that was proven time and again to result in the delivery of highly original and often unexpected solutions to the problem.

The bottom line: using crowdsourcing will add diversity to your innovation process.


Should we blame crowdsourcing for Quirky’s downfall?

In the December 2015 issue of Harvard Business Review, Sebastian Fixson and Tucker Marion attempted to figure out what went wrong with Quirky, a collaborative-invention platform that connects creative individuals with consumer product companies. Launched in 2009 and hailed as a new frontier in product development, Quirky later flew into a zone of turbulence and went bankrupt in 2015. It reopened operations in May 2016.

Fixson and Marion outlined four major reasons for Quirky’s downfall, three of which I wholeheartedly agree with. First, they argued that there was a split between the flow of new product ideas and the company’s execution capabilities. For example, in 2014 Quirky was selling products in over 26 different product categories, which made quality control almost impossible.

Second, and connected with the first, Quirky offered products across a broad range of categories with a high product turnover. This made it difficult to establish a Quirky brand, which in turn complicated selling Quirky’s products to large retailers, such as Target and Walmart.

Third, by fostering a close relationship with its user community, Quirky dramatically democratized its decision-making process. While not necessarily bad when it comes to product development, such decision-by-committee turned out to be too cumbersome when the company needed to comply with the demanding requirements of one of its major partners, GE Appliances.

Fourth, the authors argued that Quirky produced a lot of good products, but few exceptional (“breakthrough”) ones. Although, again, I agree with this particular statement, I’d nevertheless challenge their assertion that the absence of radically new products was due to the fact that the members of the Quirky community had no prior product development experience – and therefore had to restrict themselves to mere incremental improvements of existing consumer products.

I beg to disagree. When it comes to open innovation – and crowdsourcing in particular – the members of the “crowd” deliver only what they were asked to deliver. It’s the scope of the question that defines the scope of the answer. By making its product development process totally dependent on a free flow of incoming ideas and not pressing its community for more, Quirky never asked its users to deliver anything approaching a breakthrough. What reason then did the community have to deliver a breakthrough?

For the same reason, I don’t like the article’s title: “A case of crowdsourcing gone wrong.” Crowdsourcing (and open innovation in general) is not a business model, as Fixson and Marion are tacitly implying; it’s a tool. Quirky has been using this tool at the front end of its innovation process, but has committed a number of mistakes, mostly downstream in the process and completely unrelated to crowdsourcing itself. It is not crowdsourcing that has gone wrong; Quirky has. Let’s thus not blame crowdsourcing for Quirky’s downfall.



Matching crowdsourcing to specific stages of business model innovation

(The original version of this piece was posted to the Qmarkets blog)

I like to argue (for example, here) that one of the major reasons crowdsourcing has not yet become a mainstream innovation tool is a paralyzing uncertainty over which technical or business problems can be successfully solved using this approach. In this piece, I want to shed some light on the issue by proposing a simple model that matches crowdsourcing to specific types of innovation.

When speaking about types of innovation, I’ll employ definitions put forward by Christensen, Bartman and van Bever in their recent article, “The hard truth about business model innovation.” Christensen and coauthors outline three major types of innovation, each corresponding to a specific stage of business model development:

  1. Market-creating innovations (“Creation,” in the authors’ parlance). This type of innovation takes place at the first stage of business model formation, when firms search for a meaningful value proposition, i.e., a product or service that would fulfil unmet customer needs (“job to be done”).
  2. Sustaining innovations. This type of innovation, which is an improvement in the firm’s core offerings, involves the creation of better products or services that can be sold for higher prices to existing customers.
  3. Efficiency innovations. This type of innovation kicks in when sustaining innovations no longer generate additional profitability and the firm turns to cost reduction, mainly by optimizing internal processes.

When speaking about crowdsourcing, I’ll be talking about two forms of it:

  • External crowdsourcing (EC). To describe EC, I’ll borrow its “classic” 2006 definition by Jeff Howe as an act of taking a job traditionally performed inside the firm and outsourcing it to a large (and usually undefined) group of people in the form of an open call. Firms can run EC campaigns either by employing open innovation intermediaries (such as InnoCentive, NineSigma and IdeaConnection) or by building external innovation portals, the latter best represented by the Unilever Foundry.

  • Internal crowdsourcing (IC). IC is essentially the same as EC, but conducted within the legal boundaries of the firm by tapping into the “collective wisdom” of its own employees. Although IC can take a variety of different forms, the best venue is through creating corporate innovation networks (CINs). (I wrote about the multiple benefits of CINs in the past; see, for example, here.)


External crowdsourcing is most effective when applied to market-creating innovations. When firms are designing products and services to fulfil newly discovered customer needs, they can greatly benefit from collecting input from an external “crowd,” a crowd whose size and diversity would vastly exceed the firms’ internal resources. Besides, EC often brings original, sometimes even unorthodox, solutions that are unlikely to be developed by internal R&D teams.

EC can still be useful when applied to sustaining innovations, but only if the firm is able to clearly articulate the deficiencies of its current offerings and provide a precise and specific description of what a meaningful improvement in these offerings should be. External crowds can then come up with solutions that in many cases could be superior to those developed in-house.

However, EC becomes almost useless when applied to efficiency innovations. The reason is not that crowds are inherently incapable of addressing operational, including cost reduction, challenges; the problem, rather, is in the lack of appropriate information. When looking for help in designing new products and services, firms are usually willing to divulge enough relevant information for external crowds to understand the job at hand. Improving internal processes is a different matter, though. Internal processes are so inherently entangled in corporate operations that just explaining their nuts and bolts to a “stranger” represents a formidable task. Besides, firms often consider their internal processes confidential and, for this reason, are unwilling to provide essential details. Faced with a lack of pertinent information (“Improve this process, but we can’t tell you what this process is all about”), crowds inevitably fail.

In contrast, internal crowdsourcing is the least effective when applied to market-creating innovations: composed mostly of like-minded individuals sharing the same or very similar backgrounds and experience, internal R&D teams often lack the critical mass of diversity necessary for coming up with truly innovative product or service ideas. The usefulness of IC increases, though, when applied to sustaining innovations, because internal crowds can amalgamate customer feedback provided by marketing and sales with a deep understanding of the firm’s product development and manufacturing capacities more rapidly than external ones.

But it is with efficiency innovations that IC is at its best. It’s fair to say that no one in the firm, including members of its management, can better grasp the complexity of the people-to-people interactions that internal operations are composed of than those people themselves. When presented with a carefully prepared and articulated question asking for improvements to internal processes, internal crowds usually respond with a large number of suggestions. None of them, taken separately, can solve the problem entirely; however, when stitched together, like pieces of a puzzle, they may create a “perfect” solution that can’t be developed by any other means.

In summary, when turning to crowdsourcing, firms should create capabilities for both internal and external crowdsourcing (perhaps, in this exact order). When these tools are available, careful matching should take place for each tool to be applied to the type of innovation this tool is the most suitable for.



Open Innovation = Co-Creation + Crowdsourcing

A good friend of mine, Michael Docherty, the founder of the consulting and new ventures firm Venture 2 Inc. (and author of the highly acclaimed book “Collective Disruption”), was interviewed recently by IdeaConnection’s Paul Arnold. (By way of shameless self-promotion, here is my interview with Paul Arnold.)

I’m not going to get into the details of what Michael said – one really has to read the interview in full – but I’d like to draw your attention to the last paragraph, where Michael tries to define “open innovation” and seems to set it apart from crowdsourcing.

If Michael did mean that, I would respectfully disagree with him here, for I consider crowdsourcing part of open innovation.

The “openness” of crowdsourcing is implied in its very definition given by Jeff Howe in 2006: “[t]he act of taking a job traditionally performed by a designated agent (usually an employee) and outsourcing it to an undefined, generally large group of people in the form of an open call.”

I like this definition for two reasons. First, it clearly defines crowdsourcing as an act of going outside of the organizational borders, which is obviously the principal hallmark of any open innovation approach. Second, it points to the major difference between crowdsourcing and other open innovation tools: the identity of the open innovation partner(s). When you collaborate (co-create) with your customers, suppliers, academic and business partners or startups – by engaging your “innovation ecosystem,” as Michael Docherty would say – you deal with defined partners, partners whose identity is known to you. In contrast, crowdsourcing involves undefined partners, the members of a crowd whose identity is unknown to you, at least initially.

The presence of a large number of engaged partners – and the need to ensure that they all act independently of each other for a crowdsourcing campaign to be effective – dictates the use of online methods of aggregating the received knowledge/information, either through external innovation portals or by using commercially available crowds provided by the so-called open innovation intermediaries. (Co-creation, at the same time, still largely relies on face-to-face interactions.)

With this in mind, I want to propose my definition of open innovation as a combination of co-creation and crowdsourcing:


Or, in short, open innovation = co-creation + crowdsourcing.


Crowdsourcing is a “sourcing,” not just a “crowd”


As I admitted on one occasion, I hate being a terminology cop. Yet my professional life, both as a bench scientist and an innovation manager, provides plenty of examples of the mess that ensues when people start discussing things without first agreeing on their meaning.

For this reason, I couldn’t pass over the recent piece by Claire Zulkey posted to Fast Company and titled “Are you crowdsourcing or wasting your time?” In her piece, Ms. Zulkey describes joining a private Facebook group for writers and journalists. While having benefited from getting some solid professional advice, Ms. Zulkey realized, at some point, that she had “drifted from utilizing the group to just killing time in it.” This led Ms. Zulkey to conclude that “[t]here comes a point where the convenience and helpfulness of crowdsourcing doubles back on its own productiveness.”

Wait a minute! Since when has Facebook become a crowdsourcing platform? Wikipedia defines it as an “online social media and social networking service.” Sure, Facebook may boast 1.79 billion monthly active users, one of the largest crowds ever assembled on Earth for any occasion, but this doesn’t automatically make it a crowdsourcing venue. (As the well-known saying puts it: “All that glitters is not gold.”)

The most comprehensive and widely accepted description of crowdsourcing–proposed by Jeff Howe in 2006–defines it as “the act of taking a job traditionally performed by a designated agent (usually an employee) and outsourcing it to an undefined, generally large group of people in the form of an open call.” In other words, crowdsourcing is not just about a crowd; it’s about outsourcing a job. And with all due respect to the quality of the advice that Ms. Zulkey is getting on Facebook, asking a crowd for editors’ email addresses or recommendations for a good restaurant, is not outsourcing a job, at least not in the sense most organizations and ordinary people would define the term “job.”

Another problem with Ms. Zulkey’s expansive definition of crowdsourcing is that she seems to be confusing the latter with another popular form of group interaction: brainstorming. When you brainstorm, you do put a question to a number of people and then let them provide the answers, or capitalize on answers of other people, or redefine the question itself. Now, replace a group of eight to ten participants, supposedly the optimal number of people on a brainstorming team, with a crowd of dozens or even hundreds, move this discussion online–and you get a Facebook group (or Quora or Ask Metafilter, two other supposedly crowdsourcing venues mentioned in Ms. Zulkey’s post).

But crowdsourcing is different from brainstorming in one very important respect: it requires independence of opinions. When you run a crowdsourcing campaign, you make sure that the members of your crowd, either individuals or small teams, provide their input independently of the opinions of others. It is this aspect of crowdsourcing that results in the delivery of highly diversified, original and often unexpected solutions to the problem (as opposed to brainstorming, which almost always ends up in the group reaching a consensus).

Online communities provide us with a vast number of diverse options of interactions with other people. Every option is a tool of sorts, and as a tool, it should be used with a full understanding of what this tool can or can’t do. Pretending that all the tools are the same because you can find them in the same toolbox won’t make them any sharper.



We’re long past the stage of asking whether crowdsourcing can do this or that


Recently, I came across an article describing how Chinese smartphone manufacturer ZTE is engaging its customers in crowdsourcing the design of a new device. I admit that it was the article’s title, not its content, that caught my attention: “Can crowdsourcing lead to better smartphone concepts design?” My first reaction–that is, even before opening the article–was, why not? If Fiat Brazil could use crowdsourcing to design a new car, the Fiat Mio, then why couldn’t the same approach be used to design a smartphone? Why ask such a rhetorical question?

And then I realized that I myself am guilty of asking similar questions. Two years ago, I was wondering whether crowdsourcing could beat Ebola (how is this different from asking whether crowdsourcing could stop Zika?); a year ago, I was questioning the ability of crowdsourcing to solve the refugee crisis in Europe; and, finally, a couple of months back, I was asking whether crowdsourcing could fix someone’s marital problems.

The truth is that we’re long past the stage of asking whether crowdsourcing can do this or that. Although, unfortunately, still seldom used by organizations, crowdsourcing has definitely proven its worth as a powerful innovation tool. Moreover, as I argued quite recently, a properly designed crowdsourcing campaign can allow organizations to simplify their search for solutions to their problems and get diversified, original and often even unexpected solutions. Besides, compared to other innovation tools, crowdsourcing is remarkably cost-effective.

So the real questions now are how to make crowdsourcing a mainstream innovation tool and how to expand the field of its applications from solving “simple” technical questions to addressing complex problems like providing medical diagnoses online. Two conditions have to be met for this to happen. First, organizations have to learn how to properly formulate the question to be put for crowdsourcing. Second, the ability to rapidly create sufficiently large and diversified crowds to deal with this question (or skillfully use commercially available ones) must be perfected.

Image credit: Barry Faulkner, “The Day of Decision”


I know you, I know you not. (How we find experts.)

In one of my previous posts, I wrote that when facing a problem, the majority of organizations have a natural inclination to begin the problem-solving process by engaging experts. Such an approach makes sense when an organization dealt with a similar problem in the past and knows people who could potentially solve the new problem too. (Actually, many large organizations have a host of pre-selected consultants for each area of strategic interest.)

But what if you face a problem you haven’t met in the past–and in our fast-changing environment, this happens more and more often to more and more organizations? Where will you go to find experts in this uncharted territory? How will you know that the experts you’re going to hire are really good? And an even more basic question: how do you know that someone is or is not an expert in this particular field?

The answer to the last question may look deceptively simple. Well, all experts are supposed to be present on LinkedIn or another of a plethora of similar professional networks. You go there, type in the appropriate keywords and, bingo, here is a list of everyone who might be considered an expert.

As a curious example of the perils of such a “targeted” search, here is the story of the 2006 ALS Biomarker Grand Challenge sponsored by Prize4Life, a nonprofit organization dedicated to finding a cure for Amyotrophic Lateral Sclerosis (a.k.a. Lou Gehrig’s disease). The purpose of this crowdsourcing campaign (run on the online platform provided by InnoCentive) was to find an effective biomarker that could measure the disease progression (or regression, in the case of clinical trials) in ALS patients.

In 2009, Prize4Life awarded two “progress prizes” for solutions that had made the most significant progress towards meeting the final criteria of the challenge. One winner was Dr. Seward Rutkove, a neurologist at Beth Israel Deaconess Medical Center in Boston and a prominent researcher in the field of neuromuscular disorders, such as ALS. (Dr. Rutkove went on to win the $1 million Grand Prize in 2011.) But the other “progress prize” was awarded to Dr. Harvey Arbesman, a doctor in private practice in a suburb of Buffalo and someone virtually unknown in the ALS community. And why is that? Because Dr. Arbesman was a…dermatologist with no formal ties to the field of neuromuscular diseases. Although Dr. Arbesman’s biomarker did not fully meet the Challenge criteria, the sponsor of the Challenge immediately appreciated the potential of this biomarker in providing valuable insight into the fundamental mysteries of the disease.

Was there any chance of Dr. Arbesman being selected as an expert by any organization willing to start working on finding an ALS cure? No. But Prize4Life didn’t start with selecting experts; it started with formulating a problem to be solved and then talking to everyone who showed a proven capability to solve this particular problem, regardless of their formal expertise, medical certification or place of employment. Prize4Life didn’t go around looking for a solution. Instead, they announced that they had a problem and then waited until the right solution found them.

As a result, Prize4Life went from literally nothing to a fully validated ALS biomarker in a matter of 3-5 years, a feat that normally takes at least twice as long. And this is no small matter, given that most ALS patients die within 2-5 years of diagnosis.

That is what I mean by saying that a properly designed crowdsourcing campaign is very cost-effective.

Image credit: Viktor Vasnetsov, “Knight at the Crossroads” (1878)
