If not Google, then who?

Is Jeff Bezos upset with the U.S. Department of Defense’s decision to award a lucrative $10 billion contract not to Amazon but to Microsoft? You bet. But he still firmly believes that U.S. tech companies must work with the Pentagon.

Addressing the annual Reagan National Defense Forum in Simi Valley, California, Bezos said, “If big tech is going to turn their backs on the Department of Defense, this country is in trouble.”

Bezos’ comment calls to mind the controversy surrounding the DoD’s contract with Google (dubbed Project Maven) to analyze drone videos using AI. Following Google’s announcement of the partnership with the DoD in March 2018, more than 3,000 Google employees signed a letter to Sundar Pichai, the company’s CEO, demanding that Google pull out of Project Maven.

“We believe that Google should not be in the business of war,” said the letter. Google decided to not renew the contract upon its expiry in March 2019.

I do admire Google employees’ willingness to speak their minds on this and other controversial topics. I certainly respect their stand specifically on Project Maven. And yet, to fully understand their position with regard to the “business of war,” I’d love to ask them a few questions.

Do the Google employees who signed the Project Maven letter believe that their colleagues in China, Russia, Iran, and North Korea will reciprocate and abandon work on military applications of AI? Do the Googlers think that the United States should develop proper defenses against AI-driven attacks launched by its enemies? If the answer is yes, then who, in the Googlers’ opinion, should conduct the research aimed at this goal? And why, specifically, should Google be excluded from it?

A recent report by the Council on Foreign Relations, a think tank specializing in U.S. foreign policy and international affairs, highlighted the crucial role innovation plays in American national security (I wrote about it here). The report specifically mentioned China as a formidable strategic competitor challenging U.S. leadership in AI and data science.

The just-released Global AI Index confirms the warning issued by the CFR report. While the United States still leads the pack of 54 countries included in the Index, China comes a close second. Characteristically, China ranks #1 in the sub-section “Government Strategy,” which focuses on the depth of national governments’ commitment to AI (in terms of strategy and spending). The United States ranks only #13 in this category.

The United States urgently needs a strategy to guide its AI-related R&D efforts, including in the area of national security and defense. Part of this strategy should identify the entities charged with leading these efforts. And if it’s not Google, then who?

 Image: https://en.wikipedia.org/wiki/File:Googlelogo.png

Posted in Global Innovation, Innovation

What Can Crowds Do?

Since the 2004 publication of James Surowiecki’s highly influential book, The Wisdom of Crowds, the idea that large groups of people are smarter than a few individuals, no matter how brilliant, has been gradually gaining prominence in academic circles, business communities and, most importantly, public opinion.

Crowdsourcing is one of the practical applications of this idea. Numerous organizations, including corporations, governmental agencies, and nonprofits, are now using crowdsourcing as a problem-solving, product development, operational improvement, and marketing tool. Crowdsourcing has also been successfully applied to public policymaking: from writing state constitutions to creating “smart cities.”

Other approaches to engaging crowds in important socioeconomic activities also exist. One of them is crowdfunding, something crowdsourcing is often confused with. Although the idea of raising money from the public (e.g., for charitable causes or disaster relief) isn’t new, the invention of online crowdfunding platforms, such as Kickstarter and Indiegogo, has made the process more streamlined and cost-effective. Equally important, crowdfunding has democratized the process of raising capital to start new businesses or launch new products. Crowdfunding is effective because it allows entrepreneurs to present their cases to large audiences of potentially interested parties, in addition to a limited number of professional investors.

A few years ago, a group of New York City-based entrepreneurs proposed an interesting derivative of crowdfunding: crowdraising, an approach that allows crowds to pledge time instead of money to support causes or projects they care about. Any organization with a worthy goal could use crowdraising to hire a crowd to perform business-related activities. These activities could be as simple as taking part in a survey, conducting beta testing, or giving feedback. But they could also involve more complex tasks, such as coding, design work, or strategic advice. After completing their work on the project, the members of the crowd would be rewarded: from an honorable mention or a free product for simpler tasks to cash or equity for more complex activities.

As far as I know, the concept of crowdraising has never been realized in practice. However, I consider crowdraising a promising idea with the potential to create a new paradigm of finding and hiring employees in the gig economy. Taken together with new ways of problem-solving (provided by crowdsourcing) and raising money (provided by crowdfunding), all three approaches may profoundly shape the future of work.

And there is something else I strongly believe in: new ways of capitalizing on the wisdom of crowds will emerge.

 Image provided by Tatiana Ivanov

Posted in Crowdsourcing, Global Innovation

Innovation and U.S. National Security

The important role innovation plays in the economic growth and prosperity of the world’s nations is well documented. A recent report by the Council on Foreign Relations, a think tank specializing in U.S. foreign policy and international affairs, highlights the crucial role innovation plays in an area that doesn’t normally attract public attention: American national security.

Composed by a diverse group of 20 experts and titled “Innovation and National Security: Keeping Our Edge,” the report argues that, after leading the world in technological innovation for the past three-quarters of a century, the United States is now at risk of falling behind its competitors. This could have profound negative consequences for U.S. national security.

The report points to the following troubling trends in the U.S. innovation policies:

  • Federal investment in R&D as a percentage of GDP is declining: from a peak of above 2% in the 1970s to about 1% in 2001 to 0.7% in 2018. In 2015, for the first time since World War II, the federal government provided less than half of all funding for basic research.
  • Current U.S. trade policies needlessly alienate the country’s long-term partners, raising costs for American tech firms and impeding the adoption of U.S. technology in foreign markets.
  • A lack of strong educational initiatives at home has hurt the development of domestic STEM talent. At the same time, new immigration barriers diminish the country’s ability to attract highly educated foreigners. The number of new international students enrolling at American institutions fell by 6.6% during the 2017–2018 academic year. Limits on the number of H-1B visas have further hampered tech firms that rely on top global talent to staff their operations.
  • A persistent divide between the technology and policymaking communities makes it more difficult for the Department of Defense and intelligence community to acquire advanced technologies from the private sector and to draw on technical talent.
  • China has become a formidable strategic competitor challenging the U.S. leadership in a range of emerging technologies, such as AI and data science, advanced battery storage, advanced semiconductor technologies, 5G, quantum computing, robotics, genomics, and synthetic biology.

The report’s major recommendations are:

  • Restore federal funding for R&D to its historic average, from 0.7% to 1.1% of GDP (or from $146 billion to $230 billion in 2018 dollars).
  • Make an additional strategic investment in universities to the tune of $20 billion a year of federal and state monies for five years.
  • Adopt moonshot approaches to society-wide national security problems that would support innovation in the key emerging technologies mentioned above. Encourage and support American startups working in this space.
  • Make it easy for foreign graduates of U.S. universities in scientific and technical fields to remain and work in the United States. Automatically grant lawful permanent residence (a “green card”) to those who earn a STEM master’s or doctoral degree.
  • While continuing to confront China on cyber espionage and IP theft, stop over-weaponizing trade policy. The best way to answer the China challenge is to compete more effectively. (“Slowing China down is not as effective as outpacing it.”)

The report makes it very clear that the United States urgently needs a national security innovation strategy to ensure its leadership in foundational and emerging technologies over the next 20 years, with action required over the next five years. Although it doesn’t say so explicitly, the report leaves no doubt that the consequences of inaction would be dire.

 Image: https://www.cfr.org/blog/keeping-our-edge-overview-innovation-and-national-security-task-force-report

Posted in Global Innovation, Innovation

Being an expert: traveling the same road again and again

There are several reasons for the slow adoption of crowdsourcing as a practical problem-solving tool.

One of them is a lack of trust in the intellectual power of the crowd and its ability to tackle complex problems. Almost everyone would agree that the proverbial wisdom of crowds can be applied to a “simple” task, such as creating a corporate logo or naming a city landmark. However, when it comes to answering a question that requires specialized knowledge, organizations prefer to turn to experts.

This preference obviously sits well with the experts themselves. They’re often scornful of the idea that someone with no immediate experience in the field can solve a problem that they could not. This sentiment was eloquently summarized in a 2010 article: “Our trust in the expert appears to be increasingly supplanted by a willingness to rely on the knowledge derived from crowds of amateurs.”

“Crowds of amateurs.” Harsh words, huh?

Pitting experts against crowds is plain silly. Experts represent an essential part of any crowdsourcing campaign; in fact, crowdsourcing is impossible without experts. Only experts can identify and properly formulate problems facing organizations. Only experts can properly evaluate incoming external submissions to select those that make sense. Only experts can successfully integrate external information with the knowledge available in-house. It’s only at this midpoint of the problem-solving process – at the stage of generating potential solutions to the problem – that crowds are usually superior to experts.

Why? A recent study in the field of neurobiology provides useful insight. A team of scientists from Cold Spring Harbor Laboratory led by Dr. Anne Churchland analyzed neuronal activity in the brains of mice forced to learn new decision-making skills.

As the mice progressed through learning new tricks, more and more neurons in their brains got involved. However, the neuronal activity rapidly became very selective: the neurons responded only when the mice made one choice and not another. This pattern grew even stronger as the mice learned to do a task better (i.e., became “experts” in the task). Moreover, once expertise was fully achieved, a mouse’s brain was ready for the expert decision even before the mouse began executing the task.

In other words, the “expert” mice knew how to solve the problem before they even started solving it!

In contrast, the neuronal activity in the brains of “non-expert” mice remained non-selective, meaning that these mice approached the task with an “open mind.”

If these findings hold for humans, the implication is that experts approach a problem with patterns already pre-formed in their brains by prior experience. Amateurs, in contrast, may approach the same problem from a completely different angle; and the more amateurs are involved in solving the problem, the greater the chance that a completely novel, unorthodox solution will be found.

That means that when solving a problem requires prior experience (e.g., when the problem resembles ones solved in the past), organizations should engage experts. However, if the problem is novel and requires a fresh look, crowds would be a better choice.

There is no sense in debating which tool, experts or crowds, is better. They are different, complementary tools in the innovation management toolbox. Each should be used at its proper time and place.

 Image provided by Tatiana Ivanov

Posted in Crowdsourcing, Innovation

“Fail often” but not too often

“Failing fast and often” has become an innovation mantra. Of course, not everyone takes this wisdom at face value. More tellingly, no one has taken the trouble to explain precisely what “fast” and “often” mean when applied to failure.

Now, some scientific data has emerged, thanks to a team led by Robert C. Wilson from the University of Arizona, Tucson. Dr. Wilson and his colleagues examined the effect of training difficulty on the rate of learning. They found that the rate of learning is maximized when the difficulty of training is adjusted to an optimal level. Specifically, learning is fastest when training accuracy (a measure of difficulty) is about 84% or, conversely, when the rate of training error is around 16%. In other words, one should be roughly five times more right than wrong to learn successfully.
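If I read the paper correctly, the headline figure for binary classification under Gaussian decision noise works out to Φ(−1), the standard normal CDF evaluated at −1. A minimal sketch reproducing the number (the derivation itself is in Wilson et al.’s paper; treat this as an illustration, not their code):

```python
import math

def normal_cdf(x: float) -> float:
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Under Gaussian decision noise, the optimal training error rate
# derived in the paper corresponds to Phi(-1).
optimal_error_rate = normal_cdf(-1.0)
optimal_accuracy = 1.0 - optimal_error_rate

print(f"optimal error rate: {optimal_error_rate:.4f}")  # ~0.1587
print(f"optimal accuracy:   {optimal_accuracy:.4f}")    # ~0.8413
```

The result, roughly 15.87% error and 84.13% accuracy, matches the “about 84% right” figure cited above.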

Sure, I understand the difference between innovation and the learning process “in case of binary classification tasks and stochastic gradient-descent-based learning rules” studied by Dr. Wilson’s team. Sure, I understand that innovation is a lot of experimentation, and experimentation implies a lot of failures.

What I don’t understand is our obsession with “failure,” our treatment of it as an end, rather than a means, of the innovation process. (And I definitely refuse to celebrate failures.) What I don’t understand is our willingness to replace data-driven innovation discovery with primitive A/B testing.

To succeed in innovation, we need a few things that precede experimentation. We need an innovation strategy; we need innovation processes; we need innovation metrics, training, and incentives. That’s what will make our experimentation more efficient and repeatable than winning a lottery.

Image is taken from the article by Wilson et al. (2019)

Posted in Innovation

Crowdsourcing: two approaches, two objectives

In my previous post, I recalled Jeff Howe’s original definition of crowdsourcing: “the act of taking a job traditionally performed by a designated agent (usually an employee) and outsourcing it to an undefined, generally large group of people in the form of an open call.” I emphasized that crowdsourcing is not just about a crowd; it’s about outsourcing a job, a point that is often lost.

I further outlined two major jobs that can be outsourced via crowdsourcing, adding capacity and accessing expertise, and gave definitions of both. Some readers have asked me to elaborate on the difference between the two approaches. Here is what I came up with.

I define adding capacity as the process of splitting a large job into small, usually identical, pieces and then asking the crowd to deliver these small pieces. The members of the crowd usually don’t need any special training to perform the job. However, it’s the responsibility of the project sponsor to provide the crowd with a clear direction on how each piece of the job should be completed. It’s also the sponsor’s responsibility to design a protocol for assembling the whole job from its sub-components.
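The split-then-assemble pattern described above can be sketched in a few lines of Python. All names here (the sentence batches, the chunk size) are hypothetical, chosen purely for illustration:

```python
from typing import List

def split_job(items: List[str], chunk_size: int) -> List[List[str]]:
    """The sponsor's first duty: split a large job into small,
    identical pieces (microtasks)."""
    return [items[i:i + chunk_size] for i in range(0, len(items), chunk_size)]

def assemble(pieces: List[List[str]]) -> List[str]:
    """The sponsor's second duty: a protocol for reassembling the
    whole job from the pieces the crowd delivers."""
    return [item for piece in pieces for item in piece]

# Hypothetical job: 10,000 sentences to be recorded, handed out in
# batches of 50, one batch per contributor.
sentences = [f"sentence {i}" for i in range(10_000)]
batches = split_job(sentences, chunk_size=50)
print(len(batches))  # 200 contributor-sized tasks

# After the crowd delivers its pieces, the whole job is reassembled.
assert assemble(batches) == sentences
```

The point of the sketch is that both the splitting rule and the assembly protocol live with the project sponsor, not with the crowd.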

Organizations use adding capacity crowdsourcing when the desired job requires resources they don’t have. Take, for example, the Common Voice project by Mozilla. Common Voice is a dataset consisting of about 1,400 hours of recorded human voice samples from more than 42,000 contributors in 18 different languages. Obviously, Mozilla couldn’t have compiled such a dataset using only its own 1,200 employees.

The very objective of adding capacity crowdsourcing imposes a requirement on the size of the crowd: in most cases, the larger the crowd, the better. For example, additional contributors to the Common Voice project would allow Mozilla to expand the dataset, both in recorded hours of speech and in the number of covered languages.

I define accessing expertise as the extraction of the proverbial “wisdom of crowds,” a process of collecting expertise, knowledge, experience, and skills originating anywhere outside an organization. (In the case of internal crowdsourcing, the accessed expertise will originate anywhere within the organization, but outside the unit sponsoring the crowdsourcing project.)

Organizations use accessing expertise crowdsourcing when they want to solve a problem that prevents them from achieving an important objective, such as designing a new product, completing a project, or optimizing performance. When launching an accessing expertise campaign, the sponsor must clearly define the problem and explicitly outline the requirements all successful solutions are expected to meet.

The members of the crowd should possess certain knowledge, expertise, and skills to be able to solve the problem – and the more complex the problem, the more experienced the members of the crowd should be.

Moreover, many complex technical and business problems require completely novel, unexpected, even unorthodox solutions, meaning that the pool of incoming contributions should include many different ways of solving the problem. This objective imposes a requirement unique to the accessing expertise approach: the crowd must be very diverse in order to provide the needed diversity of incoming solutions. Crowd size by itself is, perhaps, a secondary consideration here, although larger crowds are usually more diverse.

Understanding the difference between the two approaches to crowdsourcing, and the rules that govern them, is very important: the lack of such understanding is a frequent cause of failed crowdsourcing campaigns.

 Image provided by Tatiana Ivanov

Posted in Crowdsourcing, Innovation

What is crowdsourcing?

In recent years, crowdsourcing has become a popular topic in business publications and social media. Yet, its acceptance as a practical problem-solving tool has been slow. Why? Because there is a widespread, often completely paralyzing, uncertainty over what crowdsourcing is and what it can (or can’t) do. As a result, crowdsourcing is often used in the wrong way, and when the outcome proves disappointing, it is crowdsourcing itself that gets the blame for being “ineffective.”

First of all, it’s important to resist the expansive use of the term “crowdsourcing” and to keep a clear distinction between crowdsourcing and other communication and problem-solving tools, such as online networking and brainstorming. Equally important is a clear explanation of what crowdsourcing can do to help organizations achieve their strategic innovation objectives.

Let me start with a definition of crowdsourcing – the original proposed by Jeff Howe in 2006 – which I still consider the most comprehensive and precise. Howe defined crowdsourcing as “the act of taking a job traditionally performed by a designated agent (usually an employee) and outsourcing it to an undefined, generally large group of people in the form of an open call.”

What is very important in this definition is that crowdsourcing is not just about a crowd; it’s about outsourcing a job, a point that is often lost in our conversations about crowdsourcing.

I believe that there are two major types of “jobs” organizations can outsource using crowdsourcing: adding capacity and accessing expertise.

I define adding capacity as the process of splitting a large job into small, usually identical, pieces and then asking a crowd of contributors to perform the whole job by delivering its smaller components. Another term for adding capacity is “microtasking,” with Amazon’s Mechanical Turk being the most prominent microtasking marketplace.

Organizations use adding capacity crowdsourcing when completing a job requires more human resources than they can provide on their own. This type of crowdsourcing usually doesn’t require any substantial training of the crowd. However, organizations must give the members of the crowd clear directions on how, precisely, to accomplish the required “microtask.” Organizations also must develop a robust protocol for collecting, collating, and interpreting the combined results.
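One common collation protocol in microtasking is redundancy plus majority voting: each microtask is assigned to several contributors, and the most frequent answer wins. A minimal sketch, with a hypothetical image-labeling job as the example:

```python
from collections import Counter
from typing import Dict, List

def collate(answers: Dict[str, List[str]]) -> Dict[str, str]:
    """Collate redundant crowd answers: for each microtask, keep the
    answer given by the majority of contributors."""
    return {task: Counter(votes).most_common(1)[0][0]
            for task, votes in answers.items()}

# Hypothetical job: each image is labeled independently by three workers.
raw = {
    "img-001": ["cat", "cat", "dog"],
    "img-002": ["dog", "dog", "dog"],
}
print(collate(raw))  # {'img-001': 'cat', 'img-002': 'dog'}
```

Redundant assignment costs more per item, but it lets the sponsor tolerate individual mistakes without retraining the crowd, which is exactly why this type of crowdsourcing needs little up-front training.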

(A more sophisticated version of adding capacity crowdsourcing, a concept of a “flash organization,” has been developed to deal with complex, open-ended tasks that can’t easily be broken into smaller identical parts.)

I define accessing expertise crowdsourcing as a process of exploring the proverbial “wisdom of crowds,” a process of collecting expertise, knowledge, and skills from anywhere outside the organization (or anywhere outside a particular function or unit within an organization, in the case of internal crowdsourcing). In my opinion, there is no established academic term for accessing expertise crowdsourcing, although “crowdsourced innovation” comes very close.

Accessing expertise crowdsourcing can be further divided into idea generation and problem-solving, which I have proposed calling “bottom-up” and “top-down” crowdsourcing, respectively (I wrote about the benefits and drawbacks of both here and here).

Both major types of crowdsourcing, adding capacity and accessing expertise, follow their own rules of engagement, which must not be confused if organizations want to use crowdsourcing effectively and efficiently. I’ll cover these rules in more detail in upcoming posts.

Images provided by Tatiana Ivanov

Posted in Crowdsourcing, Innovation