Crowdsourcing: two approaches, two objectives

In my previous post, I reminded readers of the original definition of crowdsourcing by Jeff Howe: “the act of taking a job traditionally performed by a designated agent (usually an employee) and outsourcing it to an undefined, generally large group of people in the form of an open call.” I emphasized that crowdsourcing is not just about a crowd; it’s about outsourcing a job, a point that is often lost.

I further outlined two major jobs that can be outsourced via crowdsourcing, adding capacity and accessing expertise, and gave definitions of both. Some readers have asked me to elaborate on the difference between the two approaches. Here is what I came up with.

I define adding capacity as the process of splitting a large job into small, usually identical, pieces and then asking the crowd to deliver these small pieces. The members of the crowd usually don’t need any special training to perform the job. However, it’s the responsibility of the project sponsor to provide the crowd with a clear direction on how each piece of the job should be completed. It’s also the sponsor’s responsibility to design a protocol for assembling the whole job from its sub-components.
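To make the sponsor’s two responsibilities concrete, here is a minimal sketch in Python (my illustration, not from the post) of one way to split a job into identical pieces and assemble the results; the transcription task and the majority-vote protocol are illustrative assumptions.

```python
from collections import Counter

# Clear direction given to every contributor (illustrative).
INSTRUCTIONS = "Transcribe the audio clip exactly as you hear it."

def split_job(items, piece_size):
    """Split a large job into small, identical pieces."""
    return [items[i:i + piece_size] for i in range(0, len(items), piece_size)]

def assemble(answers_per_piece):
    """Assembly protocol: each piece goes to several contributors;
    keep the majority answer for each piece."""
    return [Counter(answers).most_common(1)[0][0] for answers in answers_per_piece]
```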

Organizations use adding capacity crowdsourcing when the desired job requires resources they don’t have. Take, for example, the Common Voice project by Mozilla. Common Voice is a dataset that consists of about 1,400 hours of recorded human voice samples from more than 42,000 contributors in 18 different languages. Obviously, Mozilla couldn’t have composed such a dataset using only its own 1,200 employees.

The very objective of adding capacity crowdsourcing poses a requirement with regard to the size of the crowd: in most cases, the larger the crowd, the better. For example, adding more contributors to the Common Voice project would have allowed Mozilla to expand the dataset, both in terms of recorded hours of speech and the number of languages covered.

I define accessing expertise as the extraction of the proverbial “wisdom of crowds,” a process of collecting expertise, knowledge, experience, and skills originating anywhere outside an organization. (In the case of internal crowdsourcing, the accessed expertise originates anywhere within the organization but outside the unit sponsoring the crowdsourcing project.)

Organizations use accessing expertise crowdsourcing when they want to solve a problem that prevents them from achieving an important objective, such as designing a new product, completing a project, or optimizing performance. When launching an accessing expertise campaign, the sponsor must clearly define the problem and explicitly outline the requirements all successful solutions are expected to meet.

The members of the crowd should possess certain knowledge, expertise, and skills to be able to solve the problem – and the more complex the problem, the more experienced the members of the crowd should be.

Moreover, many complex technical and business problems require completely novel, unexpected, and even unorthodox solutions – meaning that the pool of incoming contributions should include many different ways of solving the problem. This objective poses a requirement unique to accessing expertise crowdsourcing: the crowd must be very diverse to provide the needed diversity of incoming solutions. Crowd size by itself is, perhaps, a secondary consideration for this approach, although larger crowds are usually more diverse.
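A back-of-the-envelope sketch in Python (my illustration, with made-up numbers) shows why size still helps diversity: if a fraction p of potential contributors hold the rare perspective a problem needs, the chance that a crowd of n independent members includes at least one of them is 1 − (1 − p)^n, which grows quickly with n.

```python
def coverage(p: float, n: int) -> float:
    """Probability that a crowd of n independent members includes
    at least one person with a rare perspective held by fraction p."""
    return 1 - (1 - p) ** n

print(coverage(0.01, 50))   # ~0.39
print(coverage(0.01, 300))  # ~0.95
```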

Understanding the difference between the two approaches to crowdsourcing – and the rules they are governed by – is very important because the lack of such understanding is a frequent cause of failure of crowdsourcing campaigns.

Image provided by Tatiana Ivanov


What is crowdsourcing?

In recent years, crowdsourcing has become a popular topic in business publications and social media. Yet, its acceptance as a practical problem-solving tool has been slow. Why? Because there is a widespread, often completely paralyzing, uncertainty over what crowdsourcing is and what it can (or can’t) do. As a result, crowdsourcing is often used in the wrong way, and when the outcome proves disappointing, it is crowdsourcing itself that gets the blame for being “ineffective.”

First of all, it’s important to resist the expansive use of the term “crowdsourcing” and to keep a clear distinction between crowdsourcing and other communication and problem-solving tools, such as online networking and brainstorming. Equally important is to explain clearly what crowdsourcing can do to help organizations achieve their strategic innovation objectives.

Let me start with a definition of crowdsourcing – the original one, proposed by Jeff Howe in 2006 – which I still consider the most comprehensive and precise. Howe defined crowdsourcing as “the act of taking a job traditionally performed by a designated agent (usually an employee) and outsourcing it to an undefined, generally large group of people in the form of an open call.”

What is very important in this definition is that crowdsourcing is not just about a crowd; it’s about outsourcing a job, a point that is often lost in our conversations about crowdsourcing.

I believe that there are two major types of “jobs” organizations can outsource using crowdsourcing: adding capacity and accessing expertise.

I define adding capacity as the process of splitting a large job into small, usually identical, pieces and then asking a crowd of contributors to perform the whole job by delivering its smaller components. Another term for adding capacity is “microtasking,” with Amazon’s Mechanical Turk being the most prominent microtasking marketplace.

Organizations would use adding capacity crowdsourcing when the completion of a job requires more human resources than they can provide on their own. This type of crowdsourcing usually doesn’t require any substantial training of the crowd. However, organizations must provide the members of the crowd with clear directions on precisely how to accomplish the required “microtask.” Organizations also must develop a robust protocol for collecting, collating, and interpreting the combined results.

(A more sophisticated version of adding capacity crowdsourcing, the concept of a “flash organization,” has been developed to deal with complex, open-ended tasks that can’t easily be broken into smaller identical parts.)
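For concreteness, here is a minimal sketch of what posting such a microtask could look like through Mechanical Turk’s API via boto3; the task content, reward, and redundancy settings are illustrative assumptions, and a production HIT’s form would also need to post its answers back to MTurk’s externalSubmit endpoint.

```python
import boto3  # AWS SDK; assumes MTurk requester credentials are configured

mturk = boto3.client("mturk", region_name="us-east-1")

# Simplified HTMLQuestion payload (a real form must post to MTurk's
# externalSubmit endpoint and include the assignmentId field).
QUESTION_XML = """<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <html><body>
      <p>Transcribe the sentence you hear in the audio clip.</p>
      <textarea name="transcript"></textarea>
    </body></html>
  ]]></HTMLContent>
  <FrameHeight>300</FrameHeight>
</HTMLQuestion>"""

hit = mturk.create_hit(
    Title="Transcribe a short audio clip",
    Description="Listen to a 10-second clip and type what you hear.",
    Reward="0.05",                    # USD per assignment (illustrative)
    MaxAssignments=3,                 # redundancy for the assembly protocol
    LifetimeInSeconds=86400,          # the task stays open for one day
    AssignmentDurationInSeconds=600,  # ten minutes per worker
    Question=QUESTION_XML,
)
print(hit["HIT"]["HITId"])
```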

I define accessing expertise crowdsourcing as a process of exploring the proverbial “wisdom of crowds,” a process of collecting expertise, knowledge, and skills from anywhere outside the organization (or anywhere outside a particular function or unit of an organization, in the case of internal crowdsourcing). In my opinion, there is no established academic term for accessing expertise crowdsourcing, although the term “crowdsourced innovation” comes very close.

Accessing expertise crowdsourcing can be further divided into idea generation and problem-solving, which I proposed calling “bottom-up” and “top-down” crowdsourcing, respectively (I wrote about the benefits and drawbacks of both here and here).

Both major types of crowdsourcing, adding capacity and accessing expertise, follow their own rules of engagement, which must not be confused if organizations want to use crowdsourcing effectively and efficiently. I’ll cover these rules in more detail in upcoming posts.

Images provided by Tatiana Ivanov


Is your shovel good enough to hit the nail?

Imagine you’re outside and need to drive a nail into the wall to hang a picture. You select a nail of the correct size and then look around for an appropriate hitting tool. You pick up a new, shiny shovel that you recently bought at the nearest Home Depot, aim it at the nail and – bang! – strike. The nail bends in response. You look disapprovingly at the shovel and say: “Such a fancy tool – and not a cheap one, too! – but completely useless.”

Sounds ridiculous, right? But that’s what often happens when people and organizations decide to use a new “shiny” tool without figuring out first how to properly handle it. Facing disappointing results, frustrated operators conclude: “This tool doesn’t work for me/us.”

Crowdsourcing is one such shiny new tool. In recent years, its popularity has skyrocketed. Unfortunately, along the way, the term “crowdsourcing” has lost its original meaning. It has become synonymous with just about every event happening online, especially one that engages a substantial number of people.

Mixing crowdsourcing with online networking is a frequent mistake – that’s why we have oxymorons like “crowdsourcing on Facebook” (or Twitter, or Yelp).

Another popular mistake is confusing crowdsourcing with brainstorming. In brainstorming, a question is presented to several people who are asked to come up with answers. As the brainstorming session progresses, people propose their own ideas, build on the ideas of others, or, perhaps, redefine the question itself. Many folks mistakenly believe that if you replace a group of eight to ten (reportedly the optimal number of people for brainstorming) with a crowd of dozens or even hundreds, you’re no longer brainstorming but crowdsourcing.

But this is not crowdsourcing. Crowdsourcing differs from brainstorming in one very important aspect: it requires independence of opinions, a feature highlighted in James Surowiecki’s classic book, “The Wisdom of Crowds.” In contrast to brainstorming, during a crowdsourcing campaign you must make sure that the members of your crowd, whether individuals or small teams, provide their input independently of the opinions of others. It’s this aspect of crowdsourcing that results in the delivery of highly diversified, original, and often completely unexpected solutions to the problem – as opposed to brainstorming, which almost always ends with the group reaching a consensus.
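A toy simulation (my illustration, not from Surowiecki’s book) makes the point: when estimates are independent, individual errors cancel out; when each member anchors on the answers of others, one loud early voice biases the whole crowd.

```python
import random

random.seed(42)
TRUE_VALUE, N = 100.0, 1000  # the crowd estimates a quantity worth 100

# Independent crowd: every error is private, so errors cancel out.
independent = [TRUE_VALUE + random.gauss(0, 30) for _ in range(N)]

# Herding crowd: a loud, biased first voice, then everyone mostly
# follows the running group average instead of their own judgment.
herding = [TRUE_VALUE + 50]
for _ in range(N - 1):
    anchor = sum(herding) / len(herding)
    private = TRUE_VALUE + random.gauss(0, 30)
    herding.append(0.9 * anchor + 0.1 * private)

print(f"independent mean: {sum(independent) / N:.1f}")  # close to 100
print(f"herding mean:     {sum(herding) / N:.1f}")      # pulled toward the loud first voice
```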

Why is it important to keep a crisp border between crowdsourcing and other problem-solving tools, such as brainstorming? Because if we want organizations to start using crowdsourcing in their innovation practices, we need to ensure that they know the basic rules of applying this technique.

Take, for example, a 2017 Harvard Business Review article titled “Rethinking Crowdsourcing.” The article described a review of 87 crowdsourcing projects aimed at generating new consumer product ideas. In the course of each project, managers allowed participants to “like” each other’s submissions – a feature that doesn’t belong in crowdsourcing.

The result? Some contributors began to “like” each other’s ideas, so the apparent value of their respective contributions became overinflated. No wonder that when the submitted proposals were assessed by independent evaluators, no correlation was found between the most “liked” ideas and those that led to successful products.

The conclusion of the article was even more troubling: “It can be unwise to rely on the crowd.” Not an encouraging statement for those who want to start exploring what crowdsourcing can do for their organizations!

I was equally puzzled by another, more recent HBR article, “Research: For Crowdsourcing to Work, Everyone Needs an Equal Voice.” Sure, the authors, two academic researchers, came to a correct conclusion: “In order for the wisdom of crowds to retain its accuracy for making predictions, every member of the group must be given an equal voice, without any one person dominating.” Yet their use of the generic term “the wisdom of crowds” – while describing a process that mixed both crowdsourcing and brainstorming – made me somewhat uneasy.

It’s impossible to overestimate the role that academic research could play in making crowdsourcing a mainstream problem-solving tool. There are two things that I, a crowdsourcing practitioner, expect from my academic colleagues. First, a solid classification of existing types of crowdsourcing. Second, a clear definition of what crowdsourcing is and what it is not. Muddying the “terminology waters” isn’t helpful.

Image provided by Tatiana Ivanov


Does “process” kill innovation?

Reading Steve Blank is always a pleasure. Not only is he among the world’s best scholars of corporate innovation; his ability to explain complex things in simple language is unparalleled.

Blank’s recent HBR piece, “Why Companies Do ‘Innovation Theater’ Instead of Actual Innovation,” is no exception. He persuasively argues that as large organizations face continuous disruption, their ability to innovate is no longer an “add-on”; it’s their means of survival. And yet, they consistently fail to innovate.

The reason, as Blank sees it, is that while transforming from ambitious startups bubbling with innovative ideas into mature commercial entities, organizations build “processes.” Although processes diminish the overall risk of organizational malfunction, each layer of process reduces the organization’s agility and its responsiveness to new threats and opportunities. Eventually, organizations begin to value “process” over “product” – and that kills innovation.

At this point, corporate innovation becomes “innovation theater,” a set of “activities” that may build and shape culture but fail to come up with viable products. (This idea is very close to my heart: back in 2015, I fretted that organizations were faking innovation.)

While Blank’s explanation of what is wrong with corporate innovation is, as usual, right on point, I was surprised by his uncharacteristic reluctance to propose ways to address the problem. Sure, Blank argues that innovation activities and processes should be part of an overall plan, and his idea of an Innovation Doctrine is an intriguing one, however vaguely articulated.

At the same time, I’d disagree with Blank that processes as such hurt innovation. In my opinion, corporate innovation suffers not from the overabundance of processes but, quite to the contrary, from the paucity of them. We still don’t have a sustainable process to handle the proverbial “innovation funnel,” to move promising inventions and discoveries all the way from the front end of innovation to its back end.

That’s what we need to focus on. And we must hurry up, as the United States is losing its place at the top of the global innovation indexes. There is no time to waste.

Image created by Tatiana Ivanov


Are We Heading For Crowdsourced VR Health Care?

Every now and then you see a headline just like the one above, indicating that virtual reality is moving into some unexpected new industry or enterprise. It can all be a little dizzying, even if you’ve been following VR since its humble modern-era beginnings in early Oculus demos. But what’s really interesting – particularly with regard to health care, one might argue – is exploring how a simple VR application in an unexpected area could, in theory, evolve or branch out. Here are a few examples of what I mean.

VR in Museums – Lots of museums offer virtual tours online, through which you can click through galleries and exhibitions. But now, venues as prestigious as the British Museum in London are partnering with VR companies to design full-fledged VR tours as well. It’s a whole new way for people to explore remotely. But think of the implications for tourism more broadly. Could more experiences like these lead to full VR city walking tours, incorporating multiple attractions at once?

VR in Casinos – Various components of casinos have been reimagined in VR. Naturally, a few simple poker experiences led the way. We’ve since seen some of the popular free slots displayed at international SlotSource platforms adapted to VR as well. But what if these slots, poker games, and other casino experiences weren’t one-off VR games? Could this lead to entire virtual casinos within which gamers could mingle and stop off at games of their choice?

VR in Cars – Racing was one of the early genres to really realize the potential of VR gaming. As a result, there are plenty of different VR driving experiences. Some are more realistic than others, but what if developers focused more on realism? Could VR driving be used to test young drivers? Or let people test-drive, in rapid succession, cars they might want to buy? Or even help city planners do practice runs of new traffic patterns?

The examples could carry on, but you get the idea. An individual VR experience that works well can hint easily at a whole category ripe for development. And that logic, when applied to VR in health care – and specifically diagnostics – is fascinating.

VR as a diagnostic tool has actually been buzzed about for years, even dating back to the days before the technology’s commercial availability. As VR has become better known, though, this idea has taken clearer shape. Earlier this year, Wired took a relatively brief but helpful look at VR’s applications in diagnosing mental illnesses (and in some cases treating certain conditions, like PTSD). The thinking, right now, is that through careful VR analysis, medical professionals can accurately determine what may be ailing a given patient.

Expand this concept beyond mental illness, however, and imagine applications with social components, and you can begin to see how the idea – like the examples listed above – could branch out significantly. VR examination apps with diagnostic components and social capabilities, specifically, would allow patients to broadcast their own injuries and ailments to people remotely, in order to receive diagnostic opinions. Ideally, that would mean physicians, but it could well mean others too: friends, family, online medical communities, or even social network groups dedicated to various medical purposes.

In short, while it may seem like a stretch now, there’s a certain logic to the idea of near-future crowdsourced diagnostics in VR.


A confession of an open innovation manager

Over the years, I’ve worked with many companies that tried to apply open innovation approaches to solving their problems. Some of them have succeeded, some have failed—with the rest falling in between. But there was one common feature that all successful companies shared: they had established internal innovation programs.

Please note that I’m not saying “centralized” or even “formal” but established, meaning that internal structures and processes were already in place to identify issues (“jobs-to-be-done”) to be resolved by innovation tools. And yes, in many cases the programs were formal, led by corporate R&D, Marketing, or a specialized innovation unit. However, I can’t remember any company that succeeded in open (“external”) innovation if it hadn’t first adopted internal innovation.

Why? Among many reasons, one stands out for me. Innovation requires extensive internal business development, a process by which members of the innovation team try to “sell” new ways of solving problems to other, often skeptical and reluctant, corporate functions and units. This isn’t easy by itself; it’s for a reason that innovation is called a change management process.

But with the added complexity that comes with open innovation, this internal business development may become a nightmare. A small open innovation unit (it’s always small because open innovation teams are routinely under-resourced) struggles to find internal “clients” willing to try things that sound complicated and often counterintuitive. It’s like going door-to-door around a neighborhood trying to sell a product no one has heard of.

When a newly founded open innovation team joins a larger corporate innovation function, selling its “products” becomes more organic and therefore more manageable. When the open innovation team arrives in an empty space, nothing works.

For someone who for the past 15+ years has been preaching the virtues of open innovation, this might be a strange confession to make. But I’ll make it nonetheless: there is no such thing as open corporate innovation.

There is innovation, a process deeply rooted in corporate strategy and operations that addresses the company’s most strategic issues. This innovation has a single body, one side of which represents tools utilizing the collective wisdom of the company’s employees. The other side of this body extends beyond the corporate walls, trying to reach the diverse pools of external talent.

Creating open innovation programs without establishing internal ones first looks to me like growing a tree without roots. Or, perhaps, building a house with a roof but no walls.

Image credit: Tatiana Ivanov


Measuring innovation, one patent at a time (or all of them at once)

Measuring innovation is tough. To begin with, innovation is rooted in creativity – and measuring creativity isn’t straightforward, to say the least. Besides, innovation is about transforming creativity into value – and measuring value isn’t easy, either, even if you measure it in dollars (or any other currency).

But you’re still expected to measure it. As someone great (whether it was Deming or Drucker) said, you can’t manage what you can’t measure.

Currently, the most common proxy for innovation is patents. This is the measure of innovation that I used myself to compile a list of specific socioeconomic factors that favor or obstruct innovation. More precisely, it’s a combination of the number of filed patents (innovation “quantity,” so to speak) and their citation index, which is supposed to reflect patent quality (on the assumption that the more a particular patent is cited, the more influential it is).

It’s easy to criticize patents as a measure of innovation. Patent quantity is obviously the less defensible of the two, as the number of frivolous and simply “cloned” patents keeps rising. Unfortunately, because the pattern of patent citation changes over time, comparing the quality of patents issued at different times becomes almost useless, too.

Is there anything better than the patent number and citation index? Prof. Dimitris Papanikolaou of the Kellogg School of Management thinks so. Prof. Papanikolaou and his colleagues have decided to analyze the patent text. They reasoned that if a patent was truly groundbreaking, then its text would be unique, i.e., dramatically different from that of any previous patent. However, as subsequent inventors started building on this patent, many follow-up patents would have similar text. The researchers, therefore, assigned higher quality scores to patents with text that did not resemble earlier patents but did resemble subsequent ones.
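As a rough sketch of the idea (my illustration; the researchers’ actual text-similarity measure is more sophisticated than plain TF-IDF), a patent would score high if its text diverges from earlier patents yet resembles subsequent ones:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def quality_score(target: str, earlier: list[str], later: list[str]) -> float:
    """Higher when `target` breaks with the past but shapes the future.
    Assumes both `earlier` and `later` are non-empty lists of patent texts."""
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(
        [target] + earlier + later
    )
    sims = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()
    past_sim = sims[:len(earlier)].mean()    # resemblance to prior art
    future_sim = sims[len(earlier):].mean()  # resemblance to follow-up patents
    return future_sim - past_sim
```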

By analyzing the texts of more than nine million patents filed with the U.S. Patent Office since 1836, the researchers showed that their measure of patent quality very closely correlated with the patent citation index. Moreover, their measurement performed even better than the citation index in predicting breakthrough inventions of the 19th and 20th centuries, including the telegraph, television, plastics, and genetic-engineering technologies.

It’s interesting that the new measure of innovation is still patent-based. I wonder if a measurement based on something else can be created.

Image credit: https://www.inventright.com/help/blog/do-you-really-need-a-patent
