Innovation portfolios, innovation toolboxes

Ralph-Christian Ohr’s model of integrative innovation management is a set of practical recommendations that allows organizations to adopt a disciplined approach to the innovation process. Central to the model is the idea that organizations must build a balanced portfolio of innovation projects, composed of both exploitation- and exploration-type initiatives, arranged along the following three strategic horizons:

  • Core Innovation: optimizing existing products and business models for existing customers;
  • Adjacent Innovation: expanding into markets that are “new to the company”;
  • Transformational Innovation: creating principally new products and business models to serve markets and customer needs that may not yet exist.

The major benefit of the three-horizon innovation framework is that it shows organizations how to structure, govern and, most importantly, fund innovation programs while successfully managing risks. Unfortunately, as formulated, the framework is silent on the specific innovation tools organizations should use when approaching each innovation horizon.

Earlier, I proposed a “map” of open innovation tools and attempted to match some of these tools (here and here) to specific stages of business model innovation. In this piece, I want to discuss how three different open innovation tools—customer co-creation, engaging startups and external crowdsourcing—could be applied to the three innovation horizons.

I believe that customer co-creation is most effective when applied to Core Innovation, where organizations deal with incremental improvements of their existing products and slight tweaks of their business models. This stems from the two-way nature of customer co-creation, which provides opportunities for early customer feedback and thus allows organizations to rapidly test hypotheses and Minimum Viable Products (MVPs).

The ability to get immediate feedback on MVPs extends the usability of customer co-creation into Adjacent Innovation. However, its efficiency here appears to be lower than in the Core Innovation horizon, because customers may have trouble articulating their unmet needs when it comes to new products. It is for this particular reason—the inability to express needs for something (be it a product or a service) that doesn’t yet exist—that customer co-creation becomes virtually useless in the case of Transformational Innovation.

This pattern is reversed for engaging startups: the approach is useless when applied to Core Innovation, gains some strength in the Adjacent Innovation horizon and is at its best when used for Transformational Innovation. By their very nature, startups—at least the best of them—are entities created for the purpose of transforming the existing technology and/or business landscape. By engaging startups, organizations “hire” players with the creativity, flexibility and audacity to challenge the status quo—qualities that can rarely be found within the internal R&D teams of most mature organizations.

I’m not saying that engaging startups makes Transformational Innovation easy. What I’m saying is that by creating a vibrant startup ecosystem, organizations can make this type of innovation possible and, perhaps, even sustainable. I’d even go as far as to claim that for the vast majority of large and mature organizations, engaging startups is the only Transformational Innovation tool they can use with repeated success.

Crowdsourcing stands out among open innovation tools in that it can be used, equally successfully, in all three innovation horizons. The reason for this “omnivorousness” lies in the very nature of crowdsourcing as an innovation tool. Crowdsourcing is essentially a question that one asks a crowd of people; the nature of the answer one receives from the crowd is determined, first and foremost, by the nature of the question. By properly formulating questions that address problems of increasing difficulty, complexity, time horizon, risk and ambition—and by helping crowds come up with plausible answers to these questions—organizations can successfully apply crowdsourcing to all innovation horizons.

Now, again, I’m not saying that crowdsourcing is easy; I have previously pointed out its limitations at some stages of business model innovation. My point here is that the onus of properly formulating a question is on organizations—and this is something that can be learned—whereas crowds, as they have shown time and again, can come up with an answer to pretty much any question, regardless of its scope and nature.
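
To sum up the matching argument, the tool-to-horizon fit described in this post can be compressed into a simple lookup table. Below is a minimal sketch in Python; the “high/medium/low” labels are my qualitative judgments from the discussion above, not measured data, and the names (TOOL_HORIZON_FIT, recommended_tools) are purely illustrative.

```python
# Illustrative sketch only: the qualitative tool-to-horizon fit argued above,
# expressed as a lookup table. Labels are informal judgments, not data.

TOOL_HORIZON_FIT = {
    "customer co-creation": {
        "core": "high",              # rapid feedback on incremental tweaks and MVPs
        "adjacent": "medium",        # customers struggle to articulate unmet needs
        "transformational": "low",   # can't voice needs for what doesn't yet exist
    },
    "engaging startups": {
        "core": "low",               # startups aren't built for incremental work
        "adjacent": "medium",
        "transformational": "high",  # created to challenge the status quo
    },
    "external crowdsourcing": {
        "core": "high",              # works at every horizon, provided the
        "adjacent": "high",          # question is properly formulated
        "transformational": "high",
    },
}

def recommended_tools(horizon: str) -> list[str]:
    """Return the tools rated 'high' for the given horizon (hypothetical helper)."""
    return [tool for tool, fit in TOOL_HORIZON_FIT.items() if fit[horizon] == "high"]

print(recommended_tools("transformational"))
# ['engaging startups', 'external crowdsourcing']
```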

p.s. You can view the first issue of my monthly newsletter on crowdsourcing here: http://mailchi.mp/79401a1a6ae8/crowdsourcing-laws-budgets-and-psalms. To subscribe to the newsletter, go to http://eepurl.com/cE40az.


Know your customers and trust them too

Customer centricity—a framework that places the customer at the center of business operations—is gradually becoming a leading paradigm for new product and service development. Many firms employ a variety of marketing tools, including ethnography and netnography, to identify unmet customer needs (“jobs-to-be-done”) that could potentially be addressed by novel and, supposedly, improved offerings.

However, after the customer input has been collected and systematized, customer centricity is rapidly forgotten, as firms turn exclusively to internal R&D teams to address the newly identified customer needs. The prevailing thinking has it that only the firm’s own professionals (marketers, product developers, engineers, etc.), not the customers, have the knowledge and experience to come up with working ideas that could be realized in commercially successful designs.

Ironically, the assumption that customers know what they need but don’t know how to make it is seldom tested—and, when tested, is proven wrong. In a 2012 article published in The Journal of Product Innovation Management, Poetz and Schreier compared novel product ideas generated by a firm’s professionals with those submitted by a “crowd” of users. The field of innovation was baby feeding products, and all the ideas were evaluated, blind to their source, in terms of novelty, customer benefit and feasibility.

The study showed that the ideas generated by the users scored significantly higher in terms of novelty and customer benefit—and only slightly lower in terms of feasibility—than those proposed by the firm’s own designers. Moreover, the ideas that received the highest overall marks came predominantly from the outside users. So much for the internal expertise!

It’s tempting to argue that Poetz and Schreier’s study is an exception. As already mentioned, the field of innovation was baby feeding; an analysis of the users who took part in the idea generation process revealed that about 90% of them were women, many with firsthand experience in feeding babies and sound technical knowledge of the related products. It’s easy to imagine how a new mom with a solid technical background could come up with better ideas for baby feeding products than a (presumably all-male) team of professional designers. (As my wife often complains, her hair dryer must have been designed by a bunch of bald males.)

Although I’d love to see Poetz and Schreier’s study replicated in other settings and different industries, I do believe it has much broader implications. It yet again dispels the popular myth that crowds of problem solvers are composed of “amateurs” and that when answering a question requires knowledge and expertise, not just an opinion, crowds become useless (or outright stupid). The truth is that properly assembled crowds are composed of experts. They may not work for your company, or in your field, or in your country; but they’re experts nonetheless.

To prove this point, one only needs to take a look at InnoCentive, a commercially available crowdsourcing platform with a solid track record of solving difficult scientific and business problems for corporate and non-profit clients. The InnoCentive proprietary crowd comprises 375,000+ solvers, 66% of whom hold advanced degrees. I strongly suspect that some of them are women with substantial experience in feeding babies.

p.s. You can view the first issue of my monthly newsletter on crowdsourcing here: http://mailchi.mp/79401a1a6ae8/crowdsourcing-laws-budgets-and-psalms. To subscribe to the newsletter, go to http://eepurl.com/cE40az.



Are crowds stupid?

We have been talking about the wisdom of crowds for so long, and with such passion, that it was only a matter of time before someone decided to call crowds stupid. And here it comes: Aran Rees, “a creativity coach, facilitator and communicator,” has published what looks like a bona fide anti-crowdsourcing manifesto, “The Stupidity of Crowds.”

As proof that crowds are stupid, Rees cites two recent examples. First, he mentions an attempt by a British government agency to crowdsource a research ship’s name, a process that ended with the public choosing a completely ridiculous option (“Boaty McBoatface”). Rees’s second example is Brexit, the decision by the British public, expressed in a 2016 referendum, to leave the European Union.

To me, this sounds like shooting the messenger because you don’t like the message. Crowdsourcing is, first and foremost, a question that you ask a crowd; the quality of the question is the most important determinant of the quality of the answer. Ask the crowd a smart question, and you have a chance of getting a smart answer. Ask the crowd a stupid question, and the answer will almost certainly be stupid.

I’d agree with Rees that asking crowds to come up with names and then vote on them was a stupid idea. So, in my opinion, was the idea of putting the U.K.’s membership in the EU to a referendum. But why should the blame rest with the crowds? Blame the crowdmasters, the people who posed these stupid questions to the crowds! Besides, as a matter of sheer decency, should we automatically call people stupid because their tastes, sense of humor and political preferences differ from ours?

Rees’s criticism of crowds and crowdsourcing troubles me for two additional reasons. The first is his habit—unfortunately, a very common one—of overemphasizing the “crowd” part of the word “crowdsourcing.” Rees and many others tend to call almost any activity involving a large group of people crowdsourcing, especially if the activity happens online. But the original—and still, in my opinion, the best—definition of crowdsourcing, proposed by Jeff Howe in 2006, describes it as “the act of taking a job traditionally performed by a designated agent (usually an employee) and outsourcing it to an undefined, generally large group of people in the form of an open call.”

In other words, crowdsourcing is not just about a crowd; it’s about outsourcing a job. Asking people to vote in a survey or referendum is not outsourcing a job, at least not in the sense most organizations and reasonable people would define the term “job.” That’s why, by the way, in contrast to what Rees thinks, Facebook and Twitter are not crowdsourcing venues. (As Henry Mintzberg shrewdly suggested, ask your Facebook friends to help paint your house.)

The second problem is that Rees reduces crowdsourcing to simply generating ideas and/or voting on them. He thus rhetorically asks: “Do you suppose that someone taking five minutes to respond to a survey and someone else taking ten seconds to vote is likely to result in deeply thought out insights?”

Is this how Rees understands crowdsourcing? Has he ever heard of NASA using crowdsourcing to crack previously intractable space exploration problems? Has he heard of the tremendously successful “Ebola Grand Challenge” sponsored by USAID? Has he heard of open innovation boutiques, such as InnoCentive and NineSigma, which have turned crowdsourcing into a commercially viable business by serving their corporate, government and non-profit clients with a very impressive success rate?

Crowdsourcing is an extremely powerful problem-solving tool, but like any other tool, it requires knowledge and experience to be used properly. Those who know how to use crowdsourcing will succeed in harnessing the proverbial wisdom of crowds. Those who don’t, won’t. It’s that simple.



Applying open innovation tools to business model innovation

Recently, I proposed a “map” of existing open innovation tools based on the identity of the open innovation partner(s) and the mode of interaction between them. The map comprises three large “baskets”: co-creation (known partners, two-way interaction), crowdsourcing (unknown partners, two-way interaction) and webscouting (unknown partners, one-way interaction). I further suggested that the proposed map may help in matching specific open innovation tools with different types of innovation.

In the first example of such matching, I looked at the usefulness of internal and external crowdsourcing at the three stages of business model innovation, the latter defined by C.M. Christensen, T. Bartman and D. van Bever as follows:

  • Market-creating innovations: the type of innovation that takes place at the first stage of business model formation, when firms search for a meaningful value proposition, e.g., a product or service that would fulfill unmet customer needs (a “job to be done”).
  • Sustaining innovations: improvements in the firm’s core offerings, involving the creation of better products or services that can be sold for higher prices to existing customers.
  • Efficiency innovations: the type of innovation that kicks in when sustaining innovations no longer generate additional profitability and the firm turns to cost reduction, mainly by optimizing internal processes.

Here, I want to explore a match between stages of business model innovation and two other major open innovation tools: customer co-creation and webscouting.

The usefulness of both customer co-creation and webscouting is highest when they are applied to market-creating innovations, the process of designing new products and services. Webscouting—in particular, netnography—can be indispensable in discovering unmet customer wants and needs. Its benefits include the ability to scout large numbers of potential customers while being less intrusive and less expensive than ethnography and focus groups, two popular techniques in the customer co-creation basket. At the same time, the two-way nature of customer co-creation provides opportunities for early customer feedback, allowing firms to test hypotheses and MVPs. Using both tools in parallel may have a synergistic effect on the productivity of internal R&D teams.

Customer co-creation and webscouting are still highly effective when applied to sustaining innovations, allowing firms to collect customer feedback on their products and services. Again, using both tools in parallel would be most beneficial, as the depth of interaction with customers obtained through customer co-creation would be augmented by the large number of data points collected with the help of webscouting.

Webscouting, which by definition is the collection of customer insights from online forums, simply doesn’t apply to efficiency innovations. But customer co-creation, too, completely loses its power at this stage of business model innovation: no customer, however close and trusted, can possess enough relevant information to provide valuable insight into the firm’s internal operations and cost structure, the hallmark of efficiency innovations.

The usefulness of customer co-creation and webscouting closely mirrors that of external crowdsourcing, which, too, is very effective at the market-creating innovations stage but gradually loses its effectiveness at later stages. So far, I’m aware of only one open innovation tool that can be successfully applied to the efficiency innovations stage: internal crowdsourcing. That being said, I fully realize that the very classification of internal crowdsourcing (i.e., crowdsourcing conducted within the legal boundaries of the firm) as an open innovation tool could be questioned.
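
Read together with the earlier crowdsourcing analysis, the stage-by-tool picture can be sketched the same way. Again, this is only an illustration; the fit labels reflect my informal reading of the argument rather than any data, and entries the discussion doesn’t address are marked “unstated”.

```python
# Illustrative sketch: open innovation tools vs. the three stages of business
# model innovation listed above. Fit labels are informal judgments, not data.

STAGES = ("market-creating", "sustaining", "efficiency")

BMI_STAGE_FIT = {
    "customer co-creation":   ("high", "high", "none"),
    "webscouting":            ("high", "high", "none"),
    "external crowdsourcing": ("high", "medium", "low"),
    "internal crowdsourcing": ("unstated", "unstated", "high"),  # only tool cited for efficiency
}

def tools_for_stage(stage: str) -> list[str]:
    """Tools whose fit at the given stage is 'high' (hypothetical helper)."""
    i = STAGES.index(stage)
    return [tool for tool, fits in BMI_STAGE_FIT.items() if fits[i] == "high"]

print(tools_for_stage("efficiency"))  # ['internal crowdsourcing']
```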

I’d also like to add that when talking about customer co-creation, I had in mind the group of techniques specifically engaging existing and prospective customers and lead users (and, to a lesser extent, suppliers and academic and business partners). I left aside approaches built around engaging startups (startup incubators/accelerators, joint ventures, etc.), which I also consider tools in the larger co-creation basket. Matching these open innovation tools to different types of innovation deserves a separate analysis.


Why does your doctor hate crowdsourcing?

One of the signs of the growing popularity of crowdsourcing is its steady expansion into gated professional communities dominated by experts. Take, for example, medical diagnostics, an area that has for centuries been a fiefdom of medical professionals. Now a bunch of platforms (CrowdMed being, perhaps, the most prominent) have emerged to provide patients with online medical diagnoses, especially diagnoses of rare conditions that have been missed by doctors.

Hardly surprisingly, the visitors aren’t particularly welcome. Back in September, I wrote about Dr. Joshua Liao, a resident physician in the Department of Medicine at Brigham and Women’s Hospital and a clinical fellow in medicine at Harvard Medical School, who used the pages of the Boston Globe to express his grudge against crowdsourcing.

Dr. Liao had been approached by the son of one of his patients, who asked about CrowdMed as potential help in establishing a diagnosis for his mother. Dr. Liao’s response was no. He explained that a solid diagnosis requires not only the patient’s medical history, which supposedly can be provided to a crowd, but, more importantly, a direct medical examination of the patient, which cannot. For this reason, in his opinion, services like CrowdMed “produce more questions than answers, and more confusion than direction.”

I see Dr. Liao’s point, because I do understand the value of an in-person medical examination. What troubles me about his argument is that the CrowdMed website features at least a dozen “patient success stories,” testimonies by people who were apparently helped by the crowd after their own doctors had failed them. How are we to treat these “success stories”? As a fluke? As fake news, a popular accusation these days? Or as examples showing that in some cases, in spite of what Dr. Liao says, crowdsourcing really can deliver something that a single expert, however accomplished and experienced, can’t?

(And, mind you, I’m not even touching upon the sensitive issue of diagnostic mistakes committed by doctors themselves—a whopping 12 million a year—that account for 210,000 to 440,000 U.S. deaths annually.)

As you can imagine, Dr. Liao isn’t the only doctor who hates crowdsourcing. Just a few days ago, I came across a piece written by Dr. Jeff Russ, a pediatrician. Dr. Russ’s criticism of crowdsourcing (which he attributes to the rise of medical “populism”) is arguably more grounded and sophisticated than Dr. Liao’s. And yet, when Dr. Russ insists that “crowdsourcing [relies] on the hope that sufficient quantity will outweigh…quality,” I can’t help asking myself whether Dr. Russ understands what crowdsourcing really is.

Crowdsourcing is not about quantity; it’s not even about quality. It’s about diversity. Crowdsourcing works not because crowds can generate numerous responses of varying quality, but because crowds can come up with unexpected, often completely unorthodox, solutions. And it’s the lack of diversity—diversity of opinions, sources of information and experiences—that leads professionals, in the medical field and elsewhere, to make mistakes.

So I have simple advice for Dr. Liao, Dr. Russ and their colleagues: don’t reject crowdsourcing; embrace it. Accept crowdsourcing as one of the many diagnostic tools in your repertoire. Obviously, there is no need to run a crowdsourcing campaign for a simple cold. However, when dealing with a complicated, potentially life-threatening case (such as the one described by Dr. Russ in his piece), use crowdsourcing as a means of last resort, after all other diagnostic approaches have been exhausted. You may come up with nothing—well, no harm done. You may come up with a bunch of trivial ideas that you have already tried and rejected—well, at least you now know you haven’t missed something obvious. Or you may come across a suggestion you never thought of, precisely because it’s so unexpected and unusual. It might not work in the end, true, but it will push your thoughts in a direction you didn’t envision before.

Worth a try, if a patient’s life is in danger (if you ask me, a patient).



A map of open innovation practices


A problem that I see with the current literature on open innovation is that, while focusing predominantly on theoretical aspects of the concept (value proposition, strategic alignment, governance and management, human capital and culture), it pays little attention to describing specific open innovation practices. As far as I know, a 2013 study by Henry Chesbrough and Sabine Brunswicker remains the only recent publication presenting a systematic list of open innovation practices used by American and European companies. (I’d be very grateful to anyone pointing me to additional existing data on the topic.)

Earlier, I presented my own “map” of the existing open innovation practices. In contrast to Chesbrough and Brunswicker, who classified them based on the knowledge flow direction (inbound vs. outbound) and knowledge flow financial nature (pecuniary vs. non-pecuniary), I built my classification around the identity of the open innovation partner(s).

This distinction sets apart two major groups of open innovation practices: co-creation and crowdsourcing. When firms collaborate (co-create) with their customers, suppliers, academic and business partners, or engage startups via innovation ecosystems, they deal with defined partners—partners whose identity is known to them. In contrast, crowdsourcing involves undefined partners, the members of a crowd whose identity is unknown to the firm, at least initially. The major specific crowdsourcing practices include innovation contests, external innovation portals and the use of open innovation intermediaries.
The presence of a large number of partners—and the need to ensure that they all act independently of each other to make crowdsourcing campaigns effective—dictates the use of online methods of aggregating the incoming knowledge. At the same time, co-creation still largely relies on face-to-face interaction.

Now, I’d like to add two more elements to my “map.” First, I believe that a better place for lead users (previously placed in the co-creation basket) is at the intersection between co-creation and crowdsourcing. On the one hand, engaging lead users represents co-creation in its classic form. On the other, the large number of engaged lead users justifies the use of online knowledge-gathering tools (as in the case of crowdsourcing), rather than face-to-face interactions. The Audi Virtual Lab—a project that involved 7,000+ customers in the co-development of the Audi in-car multimedia system—is a great example of an advanced lead user application.

Second, I propose adding a third group of open innovation practices: webscouting. One specific practice in this basket is netnography (a blend of “internet” and “ethnography”): collecting the insights (needs and wants) of social-media-active consumers by following their conversations on various online forums. The other is social media solution scouting, a practice very similar to—and in some cases even overlapping with—netnography: searching online venues for already existing (i.e., consumer-generated) solutions to the firm’s problems. (Here, I leave aside the potential ethical issues and IP complications that could result from such a mode of information gathering.)

What unites webscouting with crowdsourcing is that both deal with unknown, undefined open innovation partners. What sets webscouting apart from both crowdsourcing and co-creation is the one-way (passive) mode of interaction between the partners: knowledge is collected from customers without providing them with feedback. In contrast, crowdsourcing and co-creation both imply a two-way mode of interaction, either online or in person.
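
For readers who prefer a structural summary, the map can also be sketched as a small data structure. This, too, is just an illustration; the names are hypothetical, and the practice lists are examples drawn from this post (including the licensing addition acknowledged below), not an exhaustive taxonomy.

```python
# A minimal structural sketch of the "map": each basket is defined by the two
# dimensions discussed above (partner identity and interaction mode).

from dataclasses import dataclass

@dataclass(frozen=True)
class Basket:
    partners: str               # "known" or "unknown" to the firm
    interaction: str            # "two-way" or "one-way"
    practices: tuple[str, ...]  # example practices, not an exhaustive list

OPEN_INNOVATION_MAP = {
    "co-creation": Basket(
        partners="known", interaction="two-way",
        practices=("customer co-creation", "supplier collaboration",
                   "engaging startups", "inbound/outbound licensing"),
    ),
    "crowdsourcing": Basket(
        partners="unknown", interaction="two-way",
        practices=("innovation contests", "external innovation portals",
                   "open innovation intermediaries"),
    ),
    "webscouting": Basket(
        partners="unknown", interaction="one-way",
        practices=("netnography", "social media solution scouting"),
    ),
}

# Lead users sit at the co-creation/crowdsourcing intersection: known partners,
# but at a scale that calls for online knowledge-gathering tools.
```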

The proposed “map” may help in more precisely matching specific open innovation practices to different stages of business model innovation.

(I want to thank Kevin McFarthing for his suggestion to add inbound and outbound licensing to the co-creation basket).


Don’t confuse crowdsourcing with brainstorming

I try to follow what academic researchers have to say about crowdsourcing. As a crowdsourcing practitioner, I welcome the clarity, holistic approach and intellectual vigor that academic research brings to the table.

But not always. Take, for example, a recent Harvard Business Review article by Linus Dahlander and Henning Piezunka, “Why Some Crowdsourcing Efforts Work and Others Don’t.” Based on the data presented in their 2014 research paper, Dahlander and Piezunka set out to explain “why some organizations succeed to attract crowds and others fail.”

I have to say that the very choice of examples of successful and failed crowdsourcing campaigns that Dahlander and Piezunka mention in their HBR piece is already confusing. They don’t seem to appreciate the differences between crowdsourcing venues: open innovation competitions (the 2009 Netflix $1 million prize), external innovation portals (Starbucks’ MyStarbucksIdea.com), open innovation intermediaries (NASA’s cosmic rays challenge) and online surveys (the German Pirate Party initiative). Sure, all four represent practical ways of using crowdsourcing, but the rules and conditions driving their success or failure are so different that throwing them into the same mix reminds me of the proverbial comparison of apples and oranges.

The confusion doesn’t go away when the authors turn to practical recommendations on actions managers could take to ensure the success of their crowdsourcing campaigns. First, Dahlander and Piezunka suggest that instead of waiting for external contributions, organizations should first present their own ideas and then invite crowds to discuss them. By showing crowds the types of ideas they are interested in, the authors argue, organizations motivate potential contributors to submit ideas of their own.

Second, Dahlander and Piezunka insist that organizations should publicly discuss already submitted ideas and indicate which of them are most valuable. The authors argue that by showing their interest to current and future contributors, organizations will further energize the submission process, ultimately resulting in a larger number of submitted ideas.

I see two major problems with Dahlander and Piezunka’s piece. To begin with, the authors seem to define the success of a crowdsourcing campaign predominantly by the size of the assembled crowd (“why some organizations succeed to attract crowds and others fail”). This is completely misleading. Organizations turn to crowdsourcing to solve problems, and the only proof of success that really matters is whether the problem was solved or not. Of course, the size of the crowd plays an important role—in general, the larger the crowd, the higher the chance of solving the problem—but as any crowdsourcing practitioner will tell you, the major prerequisite for success is the ability to correctly define and articulate the problem you’re trying to solve. If the problem is defined correctly, it can be solved even by a small crowd (I know of such cases); if it’s defined incorrectly—so that you’re solving the wrong problem—no crowd, regardless of its size, will help (and I know of such cases, too). By the way, are Dahlander and Piezunka aware that the winning solution for the Netflix $1 million prize, chosen from among 44,000(!) submissions, was never implemented? Do you call that success? I don’t.

Even more troubling, Dahlander and Piezunka appear to confuse crowdsourcing with brainstorming. It’s brainstorming, not crowdsourcing, that involves throwing in an idea and asking other folks to comment on it—and it doesn’t matter whether you’re asking a few people in a room or a few thousand online. Now, I have nothing against brainstorming; yet one has to realize that, by virtue of the collective discussion it implies, brainstorming almost always ends up with a consensus decision—or, worse, with a decision pushed forward by a vocal minority.

In contrast, when you crowdsource, you start with a problem you want to solve and provide your crowd with a clear sense of how a successful solution will be selected. Then you let the crowd do the job. Unburdened by your prior “ideas,” the crowd will come up with solutions that have only one goal in mind: solving your problem. That’s why crowdsourcing campaigns so often deliver completely unexpected, even unorthodox, solutions.

