Being an expert: traveling the same road again and again

There are several reasons for the slow adoption of crowdsourcing as a practical problem-solving tool.

One of them is a lack of trust in the intellectual power of the crowd – in its ability to tackle complex problems. Almost everyone would agree that the proverbial wisdom of crowds can be applied to a “simple” task, such as creating a corporate logo or naming a city landmark. However, when it comes to answering a question that requires specialized knowledge, organizations prefer to turn to experts.

This preference obviously sits well with the experts themselves. They’re often scornful of the idea that someone with no immediate experience in the field can solve a problem that they could not. This sentiment was eloquently summarized in a 2010 article: “Our trust in the expert appears to be increasingly supplanted by a willingness to rely on the knowledge derived from crowds of amateurs.”

“Crowds of amateurs.” Harsh words, huh?

Pitting experts against crowds is plain silly. Experts represent an essential part of any crowdsourcing campaign; in fact, crowdsourcing is impossible without experts. Only experts can identify and properly formulate problems facing organizations. Only experts can properly evaluate incoming external submissions to select those that make sense. Only experts can successfully integrate external information with the knowledge available in-house. It’s only at this midpoint of the problem-solving process – at the stage of generating potential solutions to the problem – that crowds are usually superior to experts.

Why? A recent study in the field of neurobiology provides useful insight. A team of scientists from Cold Spring Harbor Laboratory led by Dr. Anne Churchland analyzed neuronal activity in the brains of mice forced to learn new decision-making skills.

As the mice progressed through learning new tricks, more and more neurons in their brains got involved. However, the neuronal activity rapidly became very selective: the neurons responded only when the mice made one choice and not another. This pattern grew even stronger as the mice learned to perform a task better (i.e., became “experts” in the task). Moreover, once expertise was fully achieved, the mouse’s brain was primed for the expert decision even before the mouse began executing the task.

In other words, the “expert” mice knew how to solve the problem even before they started solving it!

In contrast, the neuronal activity in the brains of “non-expert” mice remained non-selective – meaning that these mice approached the task with an “open mind.”

If these findings hold for humans, the implication would be that experts approach a problem with patterns already pre-formed in their brains by prior experience. In contrast, amateurs may approach the same problem from a completely different angle – and the more amateurs are involved in solving the problem, the greater the chance that a completely novel, unorthodox solution will be found.

That means that when solving a problem requires prior experience (e.g., when the problem is similar to one solved in the past), organizations should engage experts. However, if the problem is novel and may require a fresh look, crowds would be a better choice.

There is no sense in debating which tool, experts or crowds, is better. They are different, complementary tools in the innovation management toolbox, and each should be used at its proper time and place.

Image provided by Tatiana Ivanov

Posted in Crowdsourcing, Innovation

“Fail often” but not too often

“Failing fast and often” has become an innovation mantra. Of course, not everyone takes this wisdom at face value. Even more tellingly, no one has taken the trouble to explain what “fast” and “often” precisely mean when applied to failure.

Now, some scientific data has emerged, thanks to a team led by Robert C. Wilson from the University of Arizona, Tucson. Dr. Wilson and his colleagues examined how the difficulty of training affects the rate of learning. They found that the rate of learning is maximized when the difficulty of training is adjusted to an optimal level: maximum learning takes place when training accuracy (a measure of difficulty) is about 84% or, conversely, when the rate of training error is around 16%. In other words, one should be roughly five times more right than wrong to learn successfully.
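As a back-of-the-envelope sketch of where those numbers come from: under the Gaussian-noise analysis in Wilson et al. (2019), the optimal training error rate works out to the standard normal CDF evaluated at −1, which is where the ~16% (and hence the ~84% accuracy and the roughly 5:1 right-to-wrong ratio) originates. The code below is my own illustration of that arithmetic, not the paper’s code.

```python
import math

def normal_cdf(x: float) -> float:
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Optimal training error rate under the paper's Gaussian-noise assumptions:
optimal_error = normal_cdf(-1.0)            # ≈ 0.1587, i.e. ~16%
optimal_accuracy = 1.0 - optimal_error      # ≈ 0.8413, i.e. ~84%
right_to_wrong = optimal_accuracy / optimal_error  # ≈ 5.3

print(f"optimal error rate:   {optimal_error:.2%}")
print(f"optimal accuracy:     {optimal_accuracy:.2%}")
print(f"right-to-wrong ratio: {right_to_wrong:.1f}")
```

Rounded up, this is the “85% rule” of the paper’s title.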

Sure, I understand the difference between innovation and the learning process “in case of binary classification tasks and stochastic gradient-descent-based learning rules” studied by Dr. Wilson’s team. Sure, I understand that innovation is a lot of experimentation, and experimentation implies a lot of failures.

What I don’t understand is our obsession with “failure” – with treating it as an end, not a means, of the innovation process. (And I definitely refuse to celebrate failures.) What I also don’t understand is our willingness to replace data-driven innovation discovery with primitive A/B testing.

To succeed in innovation, we need a few things to precede experimentation: an innovation strategy, innovation processes, and innovation metrics, training, and incentives. That’s what will make our experimentation more efficient and repeatable than buying lottery tickets.

Image taken from the article by Wilson et al. (2019)

Posted in Innovation

Crowdsourcing: two approaches, two objectives

In my previous post, I recalled Jeff Howe’s original definition of crowdsourcing: “the act of taking a job traditionally performed by a designated agent (usually an employee) and outsourcing it to an undefined, generally large group of people in the form of an open call.” I emphasized that crowdsourcing is not just about a crowd; it’s about outsourcing a job, a point that is often lost.

I further outlined two major jobs that can be outsourced via crowdsourcing, adding capacity and accessing expertise, and gave definitions of both. Some of the readers have asked me to elaborate on the difference between the two approaches to crowdsourcing. Here is what I came up with.

I define adding capacity as the process of splitting a large job into small, usually identical, pieces and then asking the crowd to deliver these small pieces. The members of the crowd usually don’t need any special training to perform the job. However, it’s the responsibility of the project sponsor to provide the crowd with a clear direction on how each piece of the job should be completed. It’s also the sponsor’s responsibility to design a protocol for assembling the whole job from its sub-components.
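The split-and-assemble pattern described above can be sketched in a few lines. This is purely illustrative – the function names and the majority-vote assembly protocol are my own assumptions, not any real platform’s API – but it shows the sponsor’s two responsibilities: splitting the job into identical pieces and reconciling the crowd’s redundant answers.

```python
from collections import Counter

def split_job(items, chunk_size):
    """Sponsor side: split a large job into small, identical microtasks."""
    return [items[i:i + chunk_size] for i in range(0, len(items), chunk_size)]

def assemble(answers_per_task):
    """Sponsor side: reconcile redundant crowd answers by majority vote."""
    return [Counter(answers).most_common(1)[0][0] for answers in answers_per_task]

# Hypothetical example: four images split into microtasks of two.
tasks = split_job(["img1", "img2", "img3", "img4"], 2)  # two microtasks

# Three crowd members independently label each of two images.
answers = [
    ["cat", "cat", "dog"],  # image 1: majority says "cat"
    ["dog", "dog", "dog"],  # image 2: unanimous "dog"
]
print(assemble(answers))  # ['cat', 'dog']
```

Redundancy (several workers per piece) plus a clear assembly rule is what lets untrained contributors produce a reliable whole.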

Organizations use adding capacity crowdsourcing when the desired job requires resources they don’t have. Take, for example, the Common Voice project by Mozilla. Common Voice is a dataset of about 1,400 hours of recorded human voice samples from more than 42,000 contributors in 18 different languages. Obviously, Mozilla couldn’t have compiled such a dataset using only its own 1,200 employees.

The very objective of adding capacity crowdsourcing poses a requirement with regard to the size of the crowd: in most cases, the larger the crowd, the better. For example, additional contributors to the Common Voice project would have allowed Mozilla to expand the dataset, both in terms of recorded hours of speech and the number of covered languages.

I define accessing expertise as the extraction of the proverbial “wisdom of crowds,” a process of collecting expertise, knowledge, experience, and skills originating anywhere outside an organization. (In the case of internal crowdsourcing, the accessed expertise will originate anywhere within the organization, but outside the unit that is sponsoring the crowdsourcing project.)

Organizations use accessing expertise crowdsourcing when they want to solve a problem – one that prevents the organization from achieving an important objective, such as designing a new product, completing a project, or optimizing performance. When launching an accessing expertise campaign, the sponsor must clearly define the problem and explicitly outline the requirements all successful solutions are expected to meet.

The members of the crowd should possess certain knowledge, expertise, and skills to be able to solve the problem – and the more complex the problem, the more experienced the members of the crowd should be.

Moreover, many complex technical and business problems require completely novel, unexpected, and even unorthodox solutions – meaning that the pool of incoming contributions should include many different ways of solving the problem. This objective of accessing expertise crowdsourcing poses a requirement unique to this approach: the crowd must be very diverse to provide the needed diversity of incoming solutions. Crowd size by itself is a secondary consideration here, although larger crowds are usually more diverse.

Understanding the difference between the two approaches to crowdsourcing – and the rules they are governed by – is very important because the lack of such understanding is a frequent cause of failure of crowdsourcing campaigns.

Image provided by Tatiana Ivanov

Posted in Crowdsourcing, Innovation

What is crowdsourcing?

In recent years, crowdsourcing has become a popular topic in business publications and social media. Yet, its acceptance as a practical problem-solving tool has been slow. Why? Because there is a widespread, often completely paralyzing, uncertainty over what crowdsourcing is and what it can (or can’t) do. As a result, crowdsourcing is often used in the wrong way, and when the outcome proves disappointing, it is crowdsourcing itself that gets the blame for being “ineffective.”

First of all, it’s important to resist the expansive use of the term “crowdsourcing” and to keep a clear distinction between crowdsourcing and other communication and problem-solving tools, such as online networking and brainstorming. Equally important is a clear explanation of what crowdsourcing can do to help organizations achieve their strategic innovation objectives.

Let me start with a definition of crowdsourcing – the original one, proposed by Jeff Howe in 2006, which I still consider the most comprehensive and precise. Howe defined crowdsourcing as “the act of taking a job traditionally performed by a designated agent (usually an employee) and outsourcing it to an undefined, generally large group of people in the form of an open call.”

What is very important in this definition is that crowdsourcing is not just about a crowd; it’s about outsourcing a job, a point that is often lost in our conversations about crowdsourcing.

I believe that there are two major types of “jobs” organizations can outsource using crowdsourcing: adding capacity and accessing expertise.

I define adding capacity as the process of splitting a large job into small, usually identical, pieces and then asking a crowd of contributors to perform the whole job by delivering smaller components. Another term for adding capacity is “microtasking,” with Mechanical Turk being the most prominent microtasking marketplace.

Organizations would use adding capacity crowdsourcing when the completion of a job requires more human resources than they can provide on their own. This type of crowdsourcing usually doesn’t require any substantial training of the crowd. However, organizations must provide the members of the crowd with clear directions on how, precisely, to accomplish the required “microtask.” Organizations must also develop a robust protocol for collecting, collating, and interpreting the combined results.

(A more sophisticated version of adding capacity crowdsourcing, a concept of a “flash organization,” has been developed to deal with complex, open-ended tasks that can’t easily be broken into smaller identical parts.)

I define accessing expertise crowdsourcing as a process of exploring the proverbial “wisdom of crowds,” a process of collecting expertise, knowledge, and skills from anywhere outside the organization (or anywhere outside a particular function or unit in an organization if we deal with internal crowdsourcing). In my opinion, there is no established academic term for accessing expertise crowdsourcing, although the term “crowdsourced innovation” comes very close.

Accessing expertise crowdsourcing can be further divided into idea generation and problem-solving, which I proposed calling the “bottom-up” and “top-down” crowdsourcing, respectively (and wrote about benefits and drawbacks of both here and here).

Both major types of crowdsourcing, adding capacity and accessing expertise, follow their own rules of engagement, which must not be confused if organizations want to use crowdsourcing effectively and efficiently. I’ll cover these rules in more detail in upcoming posts.

Images provided by Tatiana Ivanov

Posted in Crowdsourcing, Innovation

Is your shovel good enough to hit the nail?

Imagine you need to drive a nail into the wall to hang a picture. You select a nail of the correct size and then look around for an appropriate tool. You pick up a new, shiny shovel that you recently bought at the nearest Home Depot, aim it at the nail and – bang! – strike. The nail bends in response. You look disapprovingly at the shovel and say: “Such a fancy tool – and not a cheap one, either! – but completely useless.”

Sounds ridiculous, right? But that’s what often happens when people and organizations adopt a new “shiny” tool without first figuring out how to handle it properly. Facing disappointing results, frustrated operators conclude: “This tool doesn’t work for me/us.”

Crowdsourcing is one such new shiny tool. In recent years, its popularity has skyrocketed. Unfortunately, along the way, the term “crowdsourcing” has lost its original meaning. It has become synonymous with just about every event happening online, especially if the event engages a substantial number of people.

Mixing crowdsourcing with online networking is a frequent mistake – that’s why we have oxymorons like “crowdsourcing on Facebook” (or Twitter, or Yelp).

Another popular mistake is confusing crowdsourcing with brainstorming. In brainstorming, a question is presented to several people who are asked to come up with answers. As a brainstorming session progresses, people propose their own ideas, build on the ideas of others, or, perhaps, redefine the question itself. Many folks mistakenly believe that if you replace a group of eight to ten people (reportedly the optimal number for brainstorming) with a crowd of dozens or even hundreds, you’re no longer brainstorming but crowdsourcing.

But this is not crowdsourcing. Crowdsourcing differs from brainstorming in one very important respect: it requires independence of opinions, a feature highlighted in James Surowiecki’s classic book, “The Wisdom of Crowds.” In contrast to brainstorming, during a crowdsourcing campaign you must make sure that the members of your crowd, whether individuals or small teams, provide their input independently of the opinions of others. It’s this aspect of crowdsourcing that results in highly diversified, original, and often completely unexpected solutions to the problem – as opposed to brainstorming, which almost always ends with the group reaching a consensus.
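A toy simulation (mine, not Surowiecki’s) makes the statistical point concrete: independent guesses average out their individual errors, while guesses anchored to an early, visible opinion – as in a brainstorm or a “like”-driven forum – inherit its bias no matter how large the crowd gets. All the numbers below are arbitrary illustrations.

```python
import random

random.seed(42)
TRUE_VALUE = 100.0   # the quantity the crowd is trying to estimate
N = 500              # crowd size
ANCHOR = 130.0       # a loud early guess that everyone can see

# Independent crowd: each member guesses on their own (noisy but unbiased).
independent = [TRUE_VALUE + random.gauss(0, 20) for _ in range(N)]

# Anchored crowd: each member's guess is pulled 70% toward the early opinion.
anchored = [0.7 * ANCHOR + 0.3 * (TRUE_VALUE + random.gauss(0, 20))
            for _ in range(N)]

err_indep = abs(sum(independent) / N - TRUE_VALUE)
err_anchored = abs(sum(anchored) / N - TRUE_VALUE)

print(f"independent crowd error: {err_indep:.1f}")   # small: noise cancels out
print(f"anchored crowd error:    {err_anchored:.1f}")  # stuck near the anchor's bias
```

Adding more members shrinks the independent crowd’s error but does nothing for the anchored one – which is exactly why independence, not sheer size, is what makes a crowd wise.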

Why is it important to keep a crisp border between crowdsourcing and other problem-solving tools, such as brainstorming? Because if we want organizations to start using crowdsourcing in their innovation practices, we need to ensure that they know the basic rules of applying this technique.

Take, for example, a 2017 Harvard Business Review article titled “Rethinking Crowdsourcing.” The article described a review of 87 crowdsourcing projects aimed at generating new consumer product ideas. In the course of each project, managers allowed participants to “like” each other’s submissions – a feature that doesn’t belong in crowdsourcing.

The result? Some contributors began to “like” each other’s ideas, so the apparent value of their respective contributions became overinflated. No wonder that when the submitted proposals were assessed by independent evaluators, no correlation was found between the most “likable” ideas and those that led to successful products.

The conclusion of the article was even more troubling: “It can be unwise to rely on the crowd.” Not an encouraging statement for those who want to start exploring what crowdsourcing can do for their organizations!

I was equally puzzled by another, more recent HBR article, “Research: for crowdsourcing to work, everyone needs an equal voice.” Sure, the authors, two academic researchers, come to a correct conclusion: “In order for the wisdom of crowds to retain its accuracy for making predictions, every member of the group must be given an equal voice, without any one person dominating.” Yet their use of the generic term “the wisdom of crowds” – while describing a process that mixed both crowdsourcing and brainstorming – made me somewhat uneasy.

It’s impossible to overestimate the role that academic research could play in making crowdsourcing a mainstream problem-solving tool. There are two things that I, a crowdsourcing practitioner, expect from my academic colleagues. First, a solid classification of existing types of crowdsourcing. Second, a clear definition of what crowdsourcing is and what it is not. Muddying the “terminology waters” isn’t helpful.

Image provided by Tatiana Ivanov

Posted in Crowdsourcing, Innovation

Does “process” kill innovation?

Reading Steve Blank is always a pleasure. Not only is he among the world’s best scholars of corporate innovation; his ability to explain complex things in simple language is unparalleled.

Blank’s recent HBR piece, “Why Companies Do ‘Innovation Theater’ Instead of Actual Innovation,” is no exception. He persuasively argues that as large organizations face continuous disruption, their ability to innovate is no longer an “add-on”; it’s their means of survival. And yet, they consistently fail to innovate.

The reason, as Blank sees it, is that while transforming from ambitious startups bubbling with innovative ideas into mature commercial entities, organizations build “processes.” Although processes diminish the overall risk of organizational malfunction, each layer of process reduces organizational agility and responsiveness to new threats and opportunities. Eventually, organizations begin to value “process” over “product” – and that kills innovation.

At this point, corporate innovation becomes “innovation theater,” a set of “activities” that may build and shape culture but fail to come up with viable products. (This idea is very close to my heart: back in 2015, I fretted that organizations were faking innovation.)

While Blank’s explanation of what is wrong with innovation is, as usual, right on point, I was surprised by his uncharacteristic reluctance to propose ways to address the problem. Sure, Blank argues that innovation activities and processes should be part of an overall plan, and his idea of an Innovation Doctrine is intriguing, however vaguely articulated.

At the same time, I’d disagree with Blank that processes as such hurt innovation. In my opinion, corporate innovation suffers not from the overabundance of processes but, quite to the contrary, from the paucity of them. We still don’t have a sustainable process to handle the proverbial “innovation funnel,” to move promising inventions and discoveries all the way from the front end of innovation to its back end.

That’s what we need to focus on. And we must hurry up, as the United States is losing its place at the top of the global innovation indexes. There is no time to waste.

Image created by Tatiana Ivanov

Posted in Innovation

Are We Heading For Crowdsourced VR Health Care?

Every now and then you see a headline just like the one above, indicating that virtual reality is moving into some unexpected new industry or enterprise. It can all be a little dizzying, even if you’ve been following VR since its humble modern-era beginnings in early Oculus demos. But what’s really interesting – particularly with regard to health care, one might argue – is exploring how a simple VR application in an unexpected area could, in theory, evolve or branch out. Here are a few examples of what I mean.

VR in Museums – Lots of museums offer virtual tours online, through which you can click through galleries and exhibitions. But now, venues as prestigious as the British Museum in London are partnering with VR companies to design full-fledged VR tours as well. It’s a whole new way for people to explore remotely. But think of the implications for tourism more broadly. Could more experiences like these lead to full VR city walking tours, incorporating multiple attractions at once?

VR in Casinos – Various components of casinos have been reimagined in VR. Naturally, a few simple poker experiences led the way. We’ve since seen some of the popular free slots displayed at international SlotSource platforms adapted to VR as well. But what if these slots, poker games, and other casino experiences weren’t one-off VR games? Could this lead to entire virtual casinos within which gamers could mingle and stop off at games of their choice?

VR in Cars – Racing was one of the early genres to really realize the potential of VR gaming. As a result, there are plenty of different VR driving experiences. Some are more realistic than others, but what if developers focused more on the realistic? Could VR driving be used to test young drivers? Or help people test drive cars they might want to buy in rapid succession? Or even help city planners do practice runs of new traffic patterns?

The examples could carry on, but you get the idea. An individual VR experience that works well can easily hint at a whole category ripe for development. And that logic, when applied to VR in health care – and specifically diagnostics – is fascinating.

VR as a diagnostic tool has been buzzed about for years, dating back to the days before the technology’s commercial availability. As VR has become better known, though, the idea has taken clearer shape. Earlier this year, Wired published a relatively brief but helpful look at VR’s applications in diagnosing mental illnesses (and, in some cases, treating certain conditions, like PTSD). The thinking, right now, is that through careful VR analysis, medical professionals can accurately determine what may be ailing a given patient.

Expand this concept beyond mental illness, however, and imagine applications with social components, and you can begin to see how the idea – like those examples listed above – could branch out significantly. VR examination apps with diagnostic components and social capabilities, specifically, would allow patients to broadcast their own injuries and ailments to people remotely, in order to receive diagnostic opinions. Ideally, that would mean physicians, but it’s highly possible it could mean other things too: friends, family, medical communities online, or even social network groups dedicated to various medical purposes.

In short, while it may seem like a stretch now, there’s a certain logic to the idea of near-future crowdsourced diagnostics in VR.

Posted in Uncategorized