Detecting cancer in an (artificially) intelligent way

Good news from the front lines of the War on Cancer. The American Cancer Society reported the sharpest drop in cancer death rates in the United States between 2016 and 2017. The 2.2% drop, the biggest single-year drop on record, seems to be driven by accelerating declines in mortality from lung cancer, the leading cause of cancer death in the U.S.

The decline is attributed to two major causes: reduction in smoking, the biggest risk factor for lung cancer, and the development of new cancer treatments.

The ACS report also touches upon one of the bumpiest corners of the cancer field: cancer screening. Many diagnostic tests do identify cancers early when treatment is usually more effective. But they also identify growths that would never turn deadly – a phenomenon called “overdiagnosis.”

Many experts hope that further improvements in the efficiency of cancer screening can be achieved using advances in Artificial Intelligence. Two recent studies lend credence to this hope.

In the first study, a joint team of British and U.S. researchers trained an AI system to identify breast cancers using a set of ~29,000 mammograms. The authors then showed that the AI system identified cancers with accuracy at least comparable to – and in a separate experiment, higher than – that of expert radiologists. Moreover, using the AI system reduced the number of false-positive and false-negative results by a few percentage points.

In the second study, conducted by a diverse team of U.S. scientists and clinicians, an AI system was fed 2.5 million labeled samples of brain cancer biopsies and taught to identify 13 different types of brain cancer. In a side-by-side comparison, the AI system needed less than 150 seconds to make a diagnosis (compared to about 30 minutes for a neuropathologist), with accuracy at least equal to that of the human testers (94.6% vs. 93.9%, respectively). Interestingly, the human testers correctly identified samples the AI could not – and, vice versa, the AI identified samples the human testers diagnosed incorrectly.

Both studies imply that combining human testing with AI-assisted screening can dramatically speed up the process of cancer screening while, at the very minimum, preserving the accuracy of testing.
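The value of those complementary errors can be made concrete with a back-of-the-envelope calculation. Assuming, purely for illustration, that the AI's and the pathologist's mistakes were statistically independent at the accuracy levels reported in the second study, a workflow that escalates disagreements for extra review would fail only when both miss the same sample:

```python
# Illustrative sketch only – the independence assumption is mine, not the study's.
ai_error = 1 - 0.946      # AI accuracy reported in the second study
human_error = 1 - 0.939   # neuropathologist accuracy reported in the study

# Probability that both miss the same sample, assuming independent errors.
both_wrong = ai_error * human_error
print(f"Combined error ceiling: {both_wrong:.4f}")  # ~0.0033, i.e. ~99.7% accuracy
```

Real errors are surely correlated to some degree, so the true ceiling would be lower; still, the arithmetic shows why complementary strengths make the human-plus-AI combination attractive.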

Do we have bad news? Yes, we do. It will take years until AI diagnostic systems become mainstream in medical practice, as large-scale clinical trials are required to get the green light from regulatory authorities.

Image: https://learningenglish.voanews.com/a/google-ai-system-could-improve-breast-cancer-detection/5231018.html

Posted in AI, Health Care, Innovation

Computational propaganda: another dark corner of the net

Sir Tim Berners-Lee has every reason to be proud of his life’s crowning achievement: the World Wide Web. But he is not. In a series of interviews last fall, Berners-Lee complained that the internet today isn’t what he imagined almost 30 years ago when he invented it. In a long list of specific concerns, Berners-Lee mentioned the pervasiveness of ads, privacy breaches, hate speech, and fake news.

A recent report documents the rise and rapid maturation of yet another troublesome net tool: organized social media manipulation or computational propaganda, in the words of the report’s authors, Samantha Bradshaw and Philip N. Howard of the University of Oxford.

Bradshaw and Howard argue that computational propaganda, which they define as “the use of algorithms, automation, and big data to shape public life,” is becoming a pervasive part of everyday life. Its presence can be spotted in 70 countries, up from 48 in 2018 and 28 in 2017.

The most troubling techniques of computational propaganda include the use of “political bots” to amplify hate speech or other forms of manipulated content, the illegal collection of data and micro-targeting, and the deployment of trolls to bully or harass political opponents or journalists. Seven countries – China, India, Iran, Pakistan, Russia, Saudi Arabia, and Venezuela – have also used computational propaganda to influence political events in foreign countries.

Even nations not known for the aggressive weaponizing of the net take advantage of some “mild” forms of computational propaganda. For example, in Germany and Sweden, political parties and/or non-government entities use social media manipulation to advance political and social causes.

Facebook remains the most popular platform for social media manipulation, with some evidence of computational propaganda campaigns on Facebook found in 56 countries. At the same time, the increased use of YouTube, Instagram, and WhatsApp has also been reported.

The use of human-operated social media accounts is still the most popular way of conducting computational propaganda: 60 of the 70 countries use them. Bot accounts come next, employed in 50 of the 70 countries. Of special concern is the use of stolen or hacked accounts to conduct social media manipulation campaigns. Five countries – Guatemala, Iran, North Korea, Russia, and Uzbekistan – have been flagged for this type of behavior.

In the United States, social media manipulation is actively used by government agencies, private contractors, and to a lesser extent, political parties. Human, bot, and cyborg (a blend of automation with human curation) accounts are being employed. There is no evidence of the United States’ use of computational propaganda in foreign countries; however, it was reported that a fake social network in Cuba had been created by the USAID.

The current spread of computational propaganda, as troubling as it already appears, is obviously just the beginning. Its further growth and maturation will be augmented by new technologies, such as AI, VR, and IoT. Eventually, countries and societies will have to deal with this phenomenon. Unfortunately, there are no signs that it’ll happen any time soon.

Image: https://comprop.oii.ox.ac.uk/research/cybertroops2019/

Posted in Internet

Don’t blame crowdsourcing for “bad ideas”

As I mentioned a couple of years ago, I try to follow what academic researchers write about crowdsourcing. As a crowdsourcing practitioner, I welcome the clarity, holistic approach, and intellectual rigor academic research brings to the table. On occasion, however, I come across a paper that, instead of clarifying the crowdsourcing waters, muddies them.

Unfortunately, a recent HBR article (“Why Crowdsourcing Often Leads to Bad Ideas”) by Oguz A. Acar of the University of London’s Cass Business School falls in the latter category. Prof. Acar complains – and not unreasonably, I must add – that “most crowdsourcing initiatives end up with an overwhelming amount of useless ideas.” Prof. Acar believes he has identified a reason for this tsunami of “bad ideas”: our inattention to what motivates the crowd producing these ideas.

To make his point, Prof. Acar studied the motivation of the crowd members of InnoCentive, a popular crowdsourcing platform. He found that top-quality solutions usually come from crowd members driven by intrinsic and extrinsic motivation, whereas learning and social motivation have no positive effects on the quality of solutions. Prof. Acar then recommends that in order to improve the quality of submitted ideas, crowdsourcing campaigns should be designed in a way that would encourage the participation of crowd members with intrinsic and extrinsic motivations.

This is where I agree with Prof. Acar: crowd motivation does matter. I wrote about that back in 2014, and the study that Doug Williams and I conducted the same year added some field data to this conclusion.

My problem with Prof. Acar’s reasoning is that he seems to ignore the fact that the term “crowdsourcing” may mean many different things – and one mustn’t confuse them. In a nutshell, crowdsourcing consists of two major types of activities: adding capacity (“microtasking”) and accessing expertise (“crowdsourced innovation”). The second type, accessing-expertise crowdsourcing, can be further divided into idea generation and problem-solving, which I propose calling “bottom-up” and “top-down” crowdsourcing, respectively (I wrote about the benefits and drawbacks of both here and here).

It’s the idea-generation (“bottom-up”) version of crowdsourcing that routinely produces a large number of useless ideas. But InnoCentive, the platform Prof. Acar invoked, is a problem-solving (“top-down”) platform, not an idea-generation one. Learning what motivates people trying to solve precisely defined technical and business problems, and then applying this knowledge to the motivation of folks asked to generate vaguely defined “ideas,” doesn’t make much sense.

But Prof. Acar’s arguments are also misleading for another, much more important reason. As other academic researchers repeatedly pointed out (see, for example, here) – and most crowdsourcing practitioners would confirm – the key factor defining the ultimate success or failure of a crowdsourcing campaign is not a crowd (its size, composition, motivation, etc.), but the question this crowd is presented with.

I call it the “80:20 rule”: 80% of unsuccessful crowdsourcing campaigns I’m aware of have failed because of the inability to properly formulate the question to be presented to a crowd; only 20% have done so because of the poor performance of the crowd.

Crowdsourcing campaigns mentioned by Prof. Acar generate a lot of useless ideas not because the crowd was badly motivated, but because the parameters of “ideas” were poorly defined by the campaign managers (“crowdmasters”). So, my recommendation to them would be this: master first the art and science of formulating a question that you’ll ask your crowd—by both properly defining the problem and describing a “perfect” solution to it.

And if the outcome of the crowdsourcing campaign falls below your expectations, don’t automatically blame the crowd – or crowdsourcing in general. Start by taking a closer look at yourself.

Posted in Crowdsourcing

Innovation and inequality

High-tech innovation has been a powerful driver of the U.S. economy – and as such can take full credit for the country’s prosperity since World War II. Yet, as a recent report by the Brookings Institution suggests, it has also led to what the report calls a “crisis of regional imbalance.”

The report draws attention to the fact that five top innovation metro areas – Boston, San Francisco, San Jose, Seattle, and San Diego – accounted for more than 90% of the nation’s innovation-sector growth between 2005 and 2017.

Over the same period, these metro areas increased their share of total U.S. innovation employment from 18% to 23% – all at the expense of the bottom 90% of metro areas. As a result, fully one-third of the nation’s high-paying innovation jobs now reside in just 16 counties (and more than half in 41 counties), mostly on the West Coast and in the Northeast.

Such an excessive concentration of tech (and wealth) has serious negative consequences. For superstar hubs, it “only” means skyrocketing home prices and traffic gridlock; for the areas at the bottom, the situation is much worse. Deprived of top tech talent and investment, whole portions of the country are falling into traps of perennial technological and economic underdevelopment.

Economic inequality inevitably raises social justice issues. Political backlash follows, as became apparent during the 2016 presidential election.

The report argues that markets alone won’t solve the problem and that a new nationwide program is needed. More specifically, the report proposes creating eight to 10 new regional “growth centers” across the heartland to catalyze innovation-driven economic development in these areas.

The report estimates that the cost of such a program for the federal government would be on the order of $100 billion over 10 years. (For comparison, this is less than the cost of U.S. fossil fuel subsidies).

The report concludes by identifying 35 metro areas in 19 states (in the Great Lakes, Upper South, and Intermountain West regions) as possible candidates for growth-center designation.

A recent report by the Council on Foreign Relations, a think tank specializing in U.S. foreign policy and international affairs, highlighted the crucial role innovation plays for the American national security (I wrote about it here). Like the Brookings’ report, the CFR report calls for a national security innovation strategy to ensure the U.S. leadership in foundational and emerging technologies over the next 20 years.

Innovation needs a serious conversation in Washington, DC. Unfortunately, this is not a conversation we’re having now. Worse, given the political realities, this is a conversation we’re not likely to have any time soon.

Image: https://www.brookings.edu/research/growth-centers-how-to-spread-tech-innovation-across-america/?utm_source=morning_brew

Posted in Global Innovation

If not Google, then who?

Is Jeff Bezos upset with the U.S. Department of Defense’s decision to award a lucrative $10 billion contract not to Amazon but to Microsoft instead? You bet. But he still firmly believes that U.S. tech companies must work with the Pentagon.

Addressing the annual Reagan National Defense Forum in Simi Valley, California, Bezos said, “If big tech is going to turn their backs on the Department of Defense, this country is in trouble.”

Bezos’ comment calls to mind the controversy surrounding the DoD’s contract with Google (dubbed Project Maven) to analyze drone videos using AI. Following Google’s announcement of the partnership with the DoD, in March 2018, more than 3,000 Google employees signed a letter to Sundar Pichai, the company’s CEO, demanding that Google pull out of Project Maven.

“We believe that Google should not be in the business of war,” said the letter. Google decided to not renew the contract upon its expiry in March 2019.

I do admire Google employees’ willingness to speak their minds on this and other controversial topics. I certainly respect their stand on Project Maven specifically. And yet, to fully understand their position with regard to the “business of war,” I’d love to ask them a few questions.

Do the Google employees who signed the Project Maven letter believe that their colleagues in China, Russia, Iran, and North Korea will reciprocate and abandon working on military applications of AI? Do the Googlers think that the United States should develop a proper defense against AI-driven attacks launched by its enemies? If the answer to the prior question is yes, then who, in the Googlers’ opinion, should be conducting research aimed at this goal? And why, specifically, should Google be excluded from this research?

A recent report by the Council on Foreign Relations, a think tank specializing in U.S. foreign policy and international affairs, highlighted the crucial role innovation plays for the American national security (I wrote about it here). The report specifically mentioned China as a formidable strategic competitor challenging the U.S. leadership in the area of AI and data science.

The just-released Global AI Index confirms the warning issued by the CFR report. While the United States still leads the pack of 54 countries included in the Index, China comes a close second. Characteristically, China ranks #1 in the “Government Strategy” sub-section, which focuses on the depth of national governments’ commitment to AI (in terms of strategy and spending). The United States ranks only #13 in this category.

The United States urgently needs a strategy guiding the AI-related R&D efforts, including in the area of national security and defense. Part of this strategy should identify appropriate entities charged with leading the R&D efforts. And if it’s not Google, then who?

 Image: https://en.wikipedia.org/wiki/File:Googlelogo.png

Posted in Global Innovation, Innovation

What Can Crowds Do?

Since the 2004 publication of James Surowiecki’s highly influential book, The Wisdom of Crowds, the idea that large groups of people are smarter than a few individuals, no matter how brilliant, has been gradually gaining prominence in academic circles, business communities and, most importantly, public opinion.

Crowdsourcing is one of the practical applications of this idea. Numerous organizations, including corporations, governmental agencies, and nonprofits, are now using crowdsourcing as a problem-solving, product development, operational improvement, and marketing tool. Crowdsourcing has also been successfully applied to public policymaking: from writing state constitutions to creating “smart cities.”

Other approaches to engaging crowds in important socioeconomic activities also exist. One of them is crowdfunding, something crowdsourcing is often confused with. Although the idea of raising money from the public (e.g., for charitable causes or disaster relief) isn’t new, the invention of online crowdfunding platforms, such as Kickstarter and Indiegogo, has made the process more streamlined and cost-effective. Equally important, crowdfunding has democratized the process of raising capital to start new businesses or launch new products. Crowdfunding is so effective because it allows entrepreneurs to present their cases to large audiences of potentially interested parties, in addition to a limited number of professional investors.

A few years ago, a group of New York City-based entrepreneurs proposed an interesting derivative of crowdfunding: crowdraising, an approach that allows crowds to pledge time instead of money to support causes or projects they care about. Any organization with a worthy goal would be able to use crowdraising to hire a crowd to perform business-related activities. These activities could be as simple as taking part in a survey, conducting beta testing, or giving feedback. But they could also involve more complex tasks, such as coding, design work, or strategic advice. After completing their work on the project, the members of the crowd would be rewarded: from an honorable mention or a free product for simpler tasks to cash or equity for more complex activities.

As far as I know, the concept of crowdraising has never been realized in practice. However, I consider crowdraising a promising idea with the potential to create a new paradigm of finding and hiring employees in the gig economy. Taken together with new ways of problem-solving (provided by crowdsourcing) and raising money (provided by crowdfunding), all three approaches may profoundly shape the future of work.

And there is something else I strongly believe in: new ways of capitalizing on the wisdom of crowds will emerge.

 Image provided by Tatiana Ivanov

Posted in Crowdsourcing, Global Innovation

Innovation and U.S. National Security

The important role innovation plays in the economic growth and prosperity of the world’s nations is well documented. A recent report by the Council on Foreign Relations, a think tank specializing in U.S. foreign policy and international affairs, highlights the crucial role innovation plays in an area that doesn’t normally attract much public attention: American national security.

Composed by a diverse group of 20 experts and titled “Innovation and National Security: Keeping Our Edge,” the report argues that after leading the world in technological innovation for the past three-quarters of a century, the United States is now at risk of falling behind its competitors. This may have profound negative consequences for U.S. national security.

The report points to the following troubling trends in the U.S. innovation policies:

  • Federal investment in R&D as a percentage of the GDP is declining: from a peak of above 2% in the 1970s to about 1% in 2001 to 0.7% in 2018. In 2015, for the first time since World War II, the federal government provided less than half of all funding for basic research.
  • U.S. current trade policies needlessly alienate the country’s long-term partners, resulting in rising costs for American tech firms and impeding the adoption of U.S. technology in foreign markets.
  • A lack of strong educational initiatives at home has hurt the development of domestic STEM talent. At the same time, new immigration barriers diminish the country’s ability to attract highly educated foreigners. The number of new international students enrolling at American institutions fell by 6.6% during the 2017-2018 academic year. Further limiting the number of H-1B visas has hampered tech firms that rely on top global talent to staff their operations.
  • A persistent divide between the technology and policymaking communities makes it more difficult for the Department of Defense and intelligence community to acquire advanced technologies from the private sector and to draw on technical talent.
  • China has become a formidable strategic competitor challenging the U.S. leadership in a range of emerging technologies, such as AI and data science, advanced battery storage, advanced semiconductor technologies, 5G, quantum computing, robotics, genomics, and synthetic biology.

The report’s major recommendations are:

  • Restore federal funding for R&D to its historic average, from 0.7% to 1.1% of GDP (or from $146 billion to $230 billion in 2018 dollars).
  • Make an additional strategic investment in universities to the tune of $20 billion a year of federal and state monies for five years.
  • Adopt moonshot approaches to society-wide national security problems that would support innovation in the key emerging technologies mentioned above. Encourage and support American startups working in this space.
  • Make it easy for foreign graduates of U.S. universities in scientific and technical fields to remain and work in the United States. Automatically grant lawful permanent residence (a “green card”) to those who earn a STEM master’s or doctoral degree.
  • While continuing to confront China on cyber espionage and IP theft, stop over-weaponizing trade policy. The best way to answer the China challenge is to compete more effectively. (“Slowing China down is not as effective as outpacing it.”)

The report makes it very clear that the United States urgently needs a national security innovation strategy to ensure its leadership in foundational and emerging technologies over the next 20 years. Actions are needed over the next five years. Although it does not say so explicitly, the report leaves no doubt that the consequences of inaction will be dire.

 Image: https://www.cfr.org/blog/keeping-our-edge-overview-innovation-and-national-security-task-force-report

Posted in Global Innovation, Innovation

Being an expert: traveling the same road again and again

There are several reasons for the slow adoption of crowdsourcing as a practical problem-solving tool.

One of them is the lack of trust in the intellectual power of the crowd, its ability to tackle complex problems. Almost everyone would agree that the proverbial wisdom of crowds can be applied to a “simple” task, such as creating a corporate logo or naming a city landmark. However, when it comes to answering a question that requires specialized knowledge, organizations prefer to turn to experts.

This preference obviously sits well with the experts themselves. They’re often scornful of the idea that someone with no immediate experience in the field can solve a problem that they could not. This sentiment was eloquently summarized in a 2010 article: “Our trust in the expert appears to be increasingly supplanted by a willingness to rely on the knowledge derived from crowds of amateurs.”

“Crowds of amateurs.” Harsh words, huh?

Pitting experts against crowds is plain silly. Experts represent an essential part of any crowdsourcing campaign; in fact, crowdsourcing is impossible without experts. Only experts can identify and properly formulate problems facing organizations. Only experts can properly evaluate incoming external submissions to select those that make sense. Only experts can successfully integrate external information with the knowledge available in-house. It’s only at this midpoint of the problem-solving process – at the stage of generating potential solutions to the problem – that crowds are usually superior to experts.

Why? A recent study in the field of neurobiology provides useful insight. A team of scientists from Cold Spring Harbor Laboratory led by Dr. Anne Churchland analyzed neuronal activity in the brains of mice forced to learn new decision-making skills.

As the mice progressed through learning new tricks, more and more neurons in their brains got involved. However, the neuron activity rapidly became very selective: the neurons only responded when the mice made one choice and not another. This pattern became even stronger as the mice learned how to do a task better (i.e., became an “expert” in this task). Moreover, when the expertise was fully achieved, the mouse’s brain was ready for that expert decision even before the mouse began executing the task.

In other words, the “expert” mice knew how to solve the problem even before they started solving it!

In contrast, the neuronal activity in the brains of “non-expert” mice remained non-selective – meaning that these mice would approach the task with an “open mind.”

If these findings hold for humans, the implication is that experts approach a problem with patterns already pre-formed in their brains by prior experience. Amateurs, in contrast, may approach the same problem from a completely different angle – and the more amateurs are involved in solving the problem, the greater the chance that a completely novel, unorthodox solution will be found.

That means that when solving a problem requires prior experience (e.g., when solving a similar problem as in the past), organizations should engage experts. However, if the problem is novel and may require a fresh look at it, crowds would be a better choice.

There is no sense in debating which tool, experts or crowds, is better. They are different, complementary tools in the innovation management toolbox. Each should be used at its proper time and place.

 Image provided by Tatiana Ivanov

Posted in Crowdsourcing, Innovation

“Fail often” but not too often

“Failing fast and often” has become an innovation mantra. Of course, not everyone takes this wisdom at face value. Even more tellingly, no one has taken the trouble to explain what “fast” and “often” precisely mean when applied to failure.

Now, some scientific data seems to have emerged, thanks to a team led by Robert C. Wilson from the University of Arizona, Tucson. Dr. Wilson and his colleagues examined the effect of training difficulty on the rate of learning. They found that the rate of learning is maximized when the difficulty of training is adjusted to an optimal level. They further found that maximum learning takes place when the optimal training accuracy (a measure of difficulty) is about 84% or, equivalently, when the optimal training error rate is around 16%. In other words, one should be five times more right than wrong to learn successfully.
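For the mathematically curious: in the class of models Wilson’s team studied (binary classification with Gaussian noise), the optimal error rate has a closed form – the standard normal CDF evaluated at -1 – which is where the 84%/16% figures come from. A quick sketch that just evaluates the paper’s result, rather than re-deriving it:

```python
import math

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function (no external libraries)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Wilson et al.'s closed-form optimum for Gaussian noise: error rate = Phi(-1).
optimal_error = normal_cdf(-1.0)
optimal_accuracy = 1.0 - optimal_error

print(f"optimal training error:    {optimal_error:.1%}")    # ~15.9%
print(f"optimal training accuracy: {optimal_accuracy:.1%}") # ~84.1%
```

Evaluated exactly, the optimum is closer to 15.9% error and 84.1% accuracy – hence the paper’s nickname, the “Eighty Five Percent Rule.”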

Sure, I understand the difference between innovation and the learning process “in case of binary classification tasks and stochastic gradient-descent-based learning rules” studied by Dr. Wilson’s team. Sure, I understand that innovation is a lot of experimentation, and experimentation implies a lot of failures.

What I don’t understand is our obsession with “failure” – our treating it as an end, not a means, of the innovation process. (And I definitely refuse to celebrate failures.) What I don’t understand is our willingness to replace data-driven innovation discovery with primitive A/B testing.

In order to succeed in innovation, we need a few things preceding experimentation. We need an innovation strategy; we need innovation processes; we need innovation metrics, training, and incentives. That’s what will make our experimentation more efficient – and more repeatable – than winning the lottery.

Image is taken from the article by Wilson et al. (2019)

Posted in Innovation

Crowdsourcing: two approaches, two objectives

In my previous post, I recalled Jeff Howe’s original definition of crowdsourcing: “the act of taking a job traditionally performed by a designated agent (usually an employee) and outsourcing it to an undefined, generally large group of people in the form of an open call.” I emphasized that crowdsourcing is not just about a crowd; it’s about outsourcing a job – a point that is often lost.

I further outlined two major jobs that can be outsourced via crowdsourcing, adding capacity and accessing expertise, and gave definitions of both. Some of the readers have asked me to elaborate on the difference between the two approaches to crowdsourcing. Here is what I came up with.

I define adding capacity as the process of splitting a large job into small, usually identical, pieces and then asking the crowd to deliver these small pieces. The members of the crowd usually don’t need any special training to perform the job. However, it’s the responsibility of the project sponsor to provide the crowd with a clear direction on how each piece of the job should be completed. It’s also the sponsor’s responsibility to design a protocol for assembling the whole job from its sub-components.
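The split/dispatch/assemble mechanics described above can be sketched in a few lines. This is a purely hypothetical illustration of the pattern; the function names are mine, not any real platform’s API:

```python
# Hypothetical sketch of adding-capacity crowdsourcing: the sponsor splits a
# large job into small, identical pieces, crowd members complete the pieces,
# and a sponsor-designed protocol reassembles the whole job.

def split_job(items, piece_size):
    """Split a large job into small, identical pieces for the crowd."""
    return [items[i:i + piece_size] for i in range(0, len(items), piece_size)]

def assemble(completed_pieces):
    """Sponsor-designed protocol for rebuilding the job from its pieces."""
    result = []
    for piece in completed_pieces:
        result.extend(piece)
    return result

# Toy example: 10 work items handed out in pieces of 3.
job = [f"sentence-{n}" for n in range(10)]
pieces = split_job(job, piece_size=3)

# Stand-in for crowd workers completing each piece.
completed = [[s.upper() for s in piece] for piece in pieces]

whole = assemble(completed)
assert len(whole) == len(job)  # nothing lost in the split/assemble round trip
```

The interesting design work in practice lives in the two sponsor responsibilities the paragraph above names: the instructions given to workers for each piece, and the assembly protocol (including quality checks such as redundant assignments).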

Organizations use adding-capacity crowdsourcing when the desired job requires resources the organization doesn’t have. Take, for example, the Common Voice project by Mozilla. Common Voice is a dataset consisting of about 1,400 hours of recorded human voice samples from more than 42,000 contributors in 18 different languages. Obviously, Mozilla couldn’t have composed such a dataset using only its own 1,200 employees.

The very objective of adding-capacity crowdsourcing imposes a requirement on the size of the crowd: in most cases, the larger the crowd, the better. For example, additional contributors to the Common Voice project would allow Mozilla to expand the dataset, both in terms of recorded hours of speech and the number of covered languages.

I define accessing expertise as the extraction of the proverbial “wisdom of crowds” – a process of collecting expertise, knowledge, experience, and skills originating anywhere outside an organization. (In the case of internal crowdsourcing, the accessed expertise originates anywhere within the organization, but outside the unit sponsoring the crowdsourcing project.)

Organizations use accessing-expertise crowdsourcing when they want to solve a problem that prevents them from achieving an important objective, like designing a new product, completing a project, or optimizing performance. When launching an accessing-expertise crowdsourcing campaign, the sponsor must clearly define the problem and explicitly outline the requirements all successful solutions are expected to meet.

The members of the crowd should possess certain knowledge, expertise, and skills to be able to solve the problem – and the more complex the problem, the more experienced the members of the crowd should be.

Moreover, many complex technical and business problems require completely novel, unexpected, even unorthodox solutions – meaning that the pool of incoming contributions should include many different ways of solving the problem. This objective of accessing-expertise crowdsourcing imposes a specific requirement, unique to this approach, on the crowd: it must be very diverse to provide the needed diversity of incoming solutions. Crowd size by itself is, perhaps, a secondary consideration here, although larger crowds are usually more diverse.

Understanding the difference between the two approaches to crowdsourcing – and the rules they are governed by – is very important because the lack of such understanding is a frequent cause of failure of crowdsourcing campaigns.

 Image provided by Tatiana Ivanov

Posted in Crowdsourcing, Innovation