Innovation and U.S. National Security

The important role innovation plays in the economic growth and prosperity of the world’s nations is well documented. A recent report by the Council on Foreign Relations, a think tank specializing in U.S. foreign policy and international affairs, highlights the crucial role innovation plays in an area that doesn’t normally attract public attention: American national security.

Composed by a diverse group of 20 experts and titled “Innovation and National Security: Keeping Our Edge,” the report argues that after leading the world in technological innovation for the past three-quarters of a century, the United States is now at risk of falling behind its competitors. This could have profound negative consequences for U.S. national security.

The report points to the following troubling trends in U.S. innovation policy:

  • Federal investment in R&D as a percentage of GDP is declining: from a peak of above 2% in the 1970s to about 1% in 2001 to 0.7% in 2018. In 2015, for the first time since World War II, the federal government provided less than half of all funding for basic research.
  • Current U.S. trade policies needlessly alienate the country’s long-term partners, resulting in rising costs for American tech firms and impeding the adoption of U.S. technology in foreign markets.
  • A lack of strong educational initiatives at home has hurt the development of domestic STEM talent. At the same time, new immigration barriers diminish the country’s ability to attract highly educated foreigners. The number of new international students enrolling at American institutions fell by 6.6% during the 2017-2018 academic year. Further limiting the number of H-1B visas has hampered tech firms that rely on top global talent to staff their operations.
  • A persistent divide between the technology and policymaking communities makes it more difficult for the Department of Defense and intelligence community to acquire advanced technologies from the private sector and to draw on technical talent.
  • China has become a formidable strategic competitor challenging the U.S. leadership in a range of emerging technologies, such as AI and data science, advanced battery storage, advanced semiconductor technologies, 5G, quantum computing, robotics, genomics, and synthetic biology.

The report’s major recommendations are:

  • Restore federal funding for R&D to its historic average, from 0.7% to 1.1% of GDP (or from $146 billion to $230 billion in 2018 dollars).
  • Make an additional strategic investment in universities to the tune of $20 billion a year of federal and state monies for five years.
  • Adopt moonshot approaches to society-wide national security problems that would support innovation in the key emerging technologies mentioned above. Encourage and support American startups working in this space.
  • Make it easy for foreign graduates of U.S. universities in scientific and technical fields to remain and work in the United States. Automatically grant lawful permanent residence (a “green card”) to those who earn a master’s or doctoral degree in a STEM field.
  • While continuing to confront China on cyber espionage and IP theft, stop over-weaponizing trade policy. The best way to answer the China challenge is to compete more effectively. (“Slowing China down is not as effective as outpacing it.”)
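As a quick arithmetic check (my own, not the report’s), the two dollar figures in the first recommendation are mutually consistent: both imply a 2018 U.S. GDP of roughly $21 trillion, which matches the actual figure.

```python
# Back out the implied 2018 U.S. GDP from the report's R&D figures.
gdp_from_low = 146e9 / 0.007    # $146 billion at 0.7% of GDP
gdp_from_high = 230e9 / 0.011   # $230 billion at 1.1% of GDP

# Both come out near $20.9 trillion.
print(f"implied GDP: ${gdp_from_low / 1e12:.1f}T and ${gdp_from_high / 1e12:.1f}T")
```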

The report makes it very clear that the United States urgently needs a national security innovation strategy to ensure its leadership in foundational and emerging technologies over the next 20 years. Actions are needed over the next five years. Although it doesn’t say so explicitly, the report leaves no doubt that the consequences of inaction will be dire.

 Image: https://www.cfr.org/blog/keeping-our-edge-overview-innovation-and-national-security-task-force-report

Posted in Global Innovation, Innovation | 4 Comments

Being an expert: traveling the same road again and again

There are several reasons for the slow adoption of crowdsourcing as a practical problem-solving tool.

One of them is a lack of trust in the intellectual power of the crowd and its ability to tackle complex problems. Almost everyone would agree that the proverbial wisdom of crowds can be applied to a “simple” task, such as creating a corporate logo or naming a city landmark. However, when it comes to answering a question that requires specialized knowledge, organizations prefer to turn to experts.

This preference obviously sits well with the experts themselves. They’re often scornful of the idea that someone with no immediate experience in the field can solve a problem that they could not. This sentiment was eloquently summarized in a 2010 article: “Our trust in the expert appears to be increasingly supplanted by a willingness to rely on the knowledge derived from crowds of amateurs.”

“Crowds of amateurs.” Harsh words, huh?

Pitting experts against crowds is plain silly. Experts represent an essential part of any crowdsourcing campaign; in fact, crowdsourcing is impossible without experts. Only experts can identify and properly formulate problems facing organizations. Only experts can properly evaluate incoming external submissions to select those that make sense. Only experts can successfully integrate external information with the knowledge available in-house. It’s only at this midpoint of the problem-solving process – at the stage of generating potential solutions to the problem – that crowds are usually superior to experts.

Why? A recent study in the field of neurobiology provides useful insight. A team of scientists from Cold Spring Harbor Laboratory led by Dr. Anne Churchland analyzed neuronal activity in the brains of mice forced to learn new decision-making skills.

As the mice progressed through learning new tricks, more and more neurons in their brains got involved. However, the neuronal activity rapidly became very selective: the neurons responded only when the mice made one choice and not another. This pattern became even stronger as the mice learned to do a task better (i.e., became “experts” at the task). Moreover, once expertise was fully achieved, the mouse’s brain was ready for the expert decision even before the mouse began executing the task.

In other words, the “expert” mice knew how to solve the problem even before they started solving it!

In contrast, the neuronal activity in the brains of “non-expert” mice remained non-selective – meaning that these mice approached the task with an “open mind.”

If these findings hold for humans, the implication would be that experts approach a problem with patterns already pre-formed in their brains by prior experience. In contrast, amateurs may approach the same problem from a completely different angle – and the more amateurs are involved in solving the problem, the greater the chance that a completely novel, unorthodox solution will be found.

That means that when solving a problem requires prior experience (e.g., the problem is similar to one solved in the past), organizations should engage experts. However, if the problem is novel and requires a fresh look, crowds would be a better choice.

There is no sense in debating which tool, experts or crowds, is better. They are different, complementary tools in the innovation management toolbox. Each should be used at its proper time and place.

 Image provided by Tatiana Ivanov

Posted in Crowdsourcing, Innovation | 1 Comment

“Fail often” but not too often

“Failing fast and often” has become an innovation mantra. Of course, not everyone takes this wisdom at face value. Even more tellingly, no one has taken the trouble to explain what “fast” and “often” precisely mean when applied to failure.

Now, some scientific data seems to have emerged, thanks to the team led by Robert C. Wilson from the University of Arizona, Tucson. Dr. Wilson and his colleagues examined the effect of training difficulty on the rate of learning. They found that the rate of learning is maximized when the difficulty of training is adjusted to an optimal level. Specifically, maximum learning takes place when the training accuracy (a measure of difficulty) is about 84% or, conversely, when the rate of training error is around 16%. In other words, one should be roughly five times more right than wrong to learn successfully.
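If my reading of the paper is right, the headline number falls out of the authors’ Gaussian-noise model as the standard normal CDF evaluated at -1; a minimal sketch, with the Φ(-1) formula being my assumption rather than a quote from the paper:

```python
import math

def std_normal_cdf(x):
    """Standard normal CDF, computed via the complementary error function."""
    return 0.5 * math.erfc(-x / math.sqrt(2))

optimal_error = std_normal_cdf(-1.0)
optimal_accuracy = 1.0 - optimal_error

print(f"optimal error rate:        {optimal_error:.4f}")    # ~0.1587, i.e. ~16%
print(f"optimal training accuracy: {optimal_accuracy:.4f}")  # ~0.8413, i.e. ~84%
```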

Sure, I understand the difference between innovation and the learning process “in case of binary classification tasks and stochastic gradient-descent-based learning rules” studied by Dr. Wilson’s team. Sure, I understand that innovation is a lot of experimentation, and experimentation implies a lot of failures.

What I don’t understand is our obsession with “failure” – treating it as an end, not a means, of the innovation process. (And I definitely refuse to celebrate failures.) What I don’t understand is our willingness to replace data-driven innovation discovery with primitive A/B testing.

To succeed in innovation, we need a few things to precede experimentation. We need an innovation strategy; we need innovation processes; we need innovation metrics, training, and incentives. That’s what will make our experimentation more efficient and repeatable than playing a lottery.

Image is taken from the article by Wilson et al. (2019)

Posted in Innovation | Leave a comment

Crowdsourcing: two approaches, two objectives

In my previous post, I recalled the original definition of crowdsourcing by Jeff Howe: “the act of taking a job traditionally performed by a designated agent (usually an employee) and outsourcing it to an undefined, generally large group of people in the form of an open call.” I emphasized that crowdsourcing is not just about a crowd; it’s about outsourcing a job, a point that is often lost.

I further outlined the two major jobs that can be outsourced via crowdsourcing, adding capacity and accessing expertise, and gave definitions of both. Some readers have asked me to elaborate on the difference between the two approaches. Here is what I came up with.

I define adding capacity as the process of splitting a large job into small, usually identical, pieces and then asking the crowd to deliver these small pieces. The members of the crowd usually don’t need any special training to perform the job. However, it’s the responsibility of the project sponsor to provide the crowd with a clear direction on how each piece of the job should be completed. It’s also the sponsor’s responsibility to design a protocol for assembling the whole job from its sub-components.

Organizations use adding capacity crowdsourcing when the desired job requires more resources than they have. Take, for example, the Common Voice project by Mozilla. Common Voice is a dataset consisting of about 1,400 hours of recorded human voice samples from more than 42,000 contributors in 18 different languages. Obviously, Mozilla couldn’t have compiled such a dataset using only its own 1,200 employees.

The very objective of adding capacity crowdsourcing imposes a requirement on the size of the crowd: in most cases, the larger the crowd, the better. For example, adding contributors to the Common Voice project would allow Mozilla to expand the dataset, both in terms of recorded hours of speech and the number of covered languages.

I define accessing expertise as the extraction of the proverbial “wisdom of crowds”: a process of collecting expertise, knowledge, experience, and skills originating anywhere outside an organization. (In the case of internal crowdsourcing, the accessed expertise originates anywhere within the organization but outside the unit sponsoring the project.)

Organizations use accessing expertise crowdsourcing when they want to solve a problem that prevents them from achieving an important objective, such as designing a new product, completing a project, or optimizing performance. When launching an accessing expertise campaign, the sponsor must clearly define the problem and explicitly outline the requirements all successful solutions are expected to meet.

The members of the crowd should possess certain knowledge, expertise, and skills to be able to solve the problem – and the more complex the problem, the more experienced the members of the crowd should be.

Moreover, many complex technical and business problems require completely novel, unexpected, even unorthodox solutions, meaning that the pool of incoming contributions should include many different ways of solving the problem. This objective imposes a requirement specific to accessing expertise crowdsourcing: the crowd must be very diverse to provide the needed diversity of incoming solutions. Crowd size by itself is, perhaps, a secondary consideration here, although larger crowds are usually more diverse.

Understanding the difference between the two approaches to crowdsourcing – and the rules they are governed by – is very important because the lack of such understanding is a frequent cause of failure of crowdsourcing campaigns.

 Image provided by Tatiana Ivanov

Posted in Crowdsourcing, Innovation | Leave a comment

What is crowdsourcing?

In recent years, crowdsourcing has become a popular topic in business publications and social media. Yet, its acceptance as a practical problem-solving tool has been slow. Why? Because there is a widespread, often completely paralyzing, uncertainty over what crowdsourcing is and what it can (or can’t) do. As a result, crowdsourcing is often used in the wrong way, and when the outcome proves disappointing, it is crowdsourcing itself that gets the blame for being “ineffective.”

First of all, it’s important to resist the expansive use of the term “crowdsourcing” and keep a clear distinction between crowdsourcing and other communication and problem-solving tools, such as online networking and brainstorming. Equally important is a clear explanation of what crowdsourcing can do to help organizations achieve their strategic innovation objectives.

Let me start with a definition of crowdsourcing, the original one proposed by Jeff Howe in 2006, which I still consider the most comprehensive and precise. Howe defined crowdsourcing as “the act of taking a job traditionally performed by a designated agent (usually an employee) and outsourcing it to an undefined, generally large group of people in the form of an open call.”

What is very important in this definition is that crowdsourcing is not just about a crowd; it’s about outsourcing a job, a point that is often lost in our conversations about crowdsourcing.

I believe that there are two major types of “jobs” organizations can outsource using crowdsourcing: adding capacity and accessing expertise.

I define adding capacity as the process of splitting a large job into small, usually identical, pieces and then asking a crowd of contributors to perform the whole job by delivering smaller components. Another term for adding capacity is “microtasking,” with Mechanical Turk being the most prominent microtasking marketplace.

Organizations would use adding capacity crowdsourcing when the completion of a job requires more human resources than they can provide on their own. This type of crowdsourcing usually doesn’t require any substantial training of the crowd. However, organizations must provide the members of the crowd with clear directions on how exactly to accomplish the required “microtask.” Organizations must also develop a robust protocol for collecting, collating, and interpreting the combined results.
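The split-distribute-assemble workflow behind adding capacity can be sketched in a few lines. The function names and the majority-vote assembly protocol below are my own illustration, not any particular platform’s API; redundancy (several contributors answering the same microtask) is a common way to guard against individual mistakes.

```python
from collections import Counter

def split_job(items, chunk_size):
    """Split a large job into small, identical microtasks."""
    return [items[i:i + chunk_size] for i in range(0, len(items), chunk_size)]

def assemble(answers_per_item):
    """Assembly protocol: majority vote over redundant crowd answers."""
    return [Counter(answers).most_common(1)[0][0] for answers in answers_per_item]

# Example: 5 images to label, sent out in chunks of 2.
images = ["img1", "img2", "img3", "img4", "img5"]
microtasks = split_job(images, 2)  # [['img1','img2'], ['img3','img4'], ['img5']]

# Three contributors label the same two images; the sponsor merges the results.
crowd_answers = [["cat", "cat", "dog"], ["dog", "dog", "dog"]]
print(assemble(crowd_answers))  # ['cat', 'dog']
```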

(A more sophisticated version of adding capacity crowdsourcing, a concept of a “flash organization,” has been developed to deal with complex, open-ended tasks that can’t easily be broken into smaller identical parts.)

I define accessing expertise crowdsourcing as a process of exploring the proverbial “wisdom of crowds”: collecting expertise, knowledge, and skills from anywhere outside the organization (or, in the case of internal crowdsourcing, from anywhere outside the particular function or unit sponsoring the project). In my opinion, there is no established academic term for accessing expertise crowdsourcing, although the term “crowdsourced innovation” comes very close.

Accessing expertise crowdsourcing can be further divided into idea generation and problem-solving, which I have proposed calling “bottom-up” and “top-down” crowdsourcing, respectively (and wrote about the benefits and drawbacks of both here and here).

Both major types of crowdsourcing, adding capacity and accessing expertise, follow their own rules of engagement, which must not be confused if organizations want to use crowdsourcing effectively and efficiently. I’ll cover these rules in more detail in upcoming posts.

Images provided by Tatiana Ivanov

Posted in Crowdsourcing, Innovation | 3 Comments

Is your shovel good enough to hit the nail?

Imagine you need to drive a nail into the wall to hang a picture. You select a nail of the correct size and then look around for an appropriate hitting tool. You pick up the new, shiny shovel you recently bought at the nearest Home Depot, aim it at the nail and – bang! – take a swing. The nail bends in response. You look disapprovingly at the shovel and say: “Such a fancy tool – and not a cheap one, too! – but completely useless.”

Sounds ridiculous, right? But that’s what often happens when people and organizations decide to use a shiny new tool without first figuring out how to handle it properly. Facing disappointing results, frustrated operators conclude: “This tool doesn’t work for me/us.”

Crowdsourcing is one such shiny new tool. In recent years, its popularity has skyrocketed. Unfortunately, along the way, the term “crowdsourcing” has lost its original meaning. It has become synonymous with just about any event happening online, especially one engaging a substantial number of people.

Mixing crowdsourcing with online networking is a frequent mistake – that’s why we have oxymorons like “crowdsourcing on Facebook” (or Twitter, or Yelp).

Another popular mistake is confusing crowdsourcing with brainstorming. In brainstorming, a question is presented to several people who are asked to come up with answers. As the brainstorming session progresses, people propose their own ideas, build on the ideas of others, or, perhaps, redefine the question itself. Many folks mistakenly believe that if you replace a group of eight to ten people (reportedly the optimal number for brainstorming) with a crowd of dozens or even hundreds, you’re no longer brainstorming but crowdsourcing.

But this is not crowdsourcing. Crowdsourcing differs from brainstorming in one very important aspect: it requires independence of opinions, a feature highlighted in James Surowiecki’s classic book, “The Wisdom of Crowds.” In contrast to brainstorming, during a crowdsourcing campaign you must make sure that the members of your crowd, whether individuals or small teams, provide their input independently of the opinions of others. It’s this aspect of crowdsourcing that results in highly diversified, original, and often completely unexpected solutions to the problem – as opposed to brainstorming, which almost always ends with the group reaching a consensus.
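Surowiecki’s independence requirement can be illustrated with a toy simulation (my own sketch, with made-up noise parameters): when members estimate independently, their individual errors cancel out in the average; when everyone anchors on the first opinion voiced, as tends to happen in a brainstorming session, the shared error survives no matter how large the group is.

```python
import random

random.seed(0)
TRUE_VALUE = 100.0
NOISE = 20.0  # spread of individual guesses (an assumed parameter)

def crowd_estimate(n, independent):
    """Average guess of n members; non-independent members anchor on the
    first opinion voiced, brainstorming-style."""
    if independent:
        guesses = [TRUE_VALUE + random.gauss(0, NOISE) for _ in range(n)]
    else:
        anchor = TRUE_VALUE + random.gauss(0, NOISE)
        guesses = [0.8 * anchor + 0.2 * (TRUE_VALUE + random.gauss(0, NOISE))
                   for _ in range(n)]
    return sum(guesses) / n

def avg_error(independent, trials=300, n=100):
    """Average absolute error of the crowd's mean estimate."""
    return sum(abs(crowd_estimate(n, independent) - TRUE_VALUE)
               for _ in range(trials)) / trials

print(f"independent crowd: {avg_error(True):.1f}")   # small: errors cancel
print(f"anchored crowd:    {avg_error(False):.1f}")  # large: shared bias persists
```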

Why is it important to keep a crisp border between crowdsourcing and other problem-solving tools, such as brainstorming? Because if we want organizations to start using crowdsourcing in their innovation practices, we need to ensure that they know the basic rules of applying the technique.

Take, for example, a 2017 Harvard Business Review article titled “Rethinking Crowdsourcing.” The article described a review of 87 crowdsourcing projects aimed at generating new consumer product ideas. In the course of each project, managers allowed participants to “like” each other’s submissions, a feature that doesn’t belong in crowdsourcing.

The result? Some contributors began to “like” each other’s ideas, so the apparent value of their contributions became overinflated. No wonder that when the submitted proposals were assessed by independent evaluators, no correlation was found between the most “liked” ideas and those that led to successful products.

The conclusion of the article was even more troubling: “It can be unwise to rely on the crowd.” Not an encouraging statement for those who want to start exploring what crowdsourcing can do for their organizations!

I was equally puzzled by another, more recent HBR article, “Research: For Crowdsourcing to Work, Everyone Needs an Equal Voice.” Sure, the authors, two academic researchers, came to a correct conclusion: “In order for the wisdom of crowds to retain its accuracy for making predictions, every member of the group must be given an equal voice, without any one person dominating.” Yet their use of the generic term “the wisdom of crowds” while describing a process that mixed crowdsourcing and brainstorming made me somewhat uneasy.

It’s impossible to overestimate the role that academic research could play in making crowdsourcing a mainstream problem-solving tool. There are two things that I, a crowdsourcing practitioner, expect from my academic colleagues. First, a solid classification of existing types of crowdsourcing. Second, a clear definition of what crowdsourcing is and what it is not. Muddying the “terminology waters” isn’t helpful.

 Image provided by Tatiana Ivanov

Posted in Crowdsourcing, Innovation | 1 Comment

Does “process” kill innovation?

Reading Steve Blank is always a pleasure. Not only is he among the world’s best scholars of corporate innovation; his ability to explain complex things in simple language is unparalleled.

Blank’s recent HBR piece, “Why Companies Do ‘Innovation Theater’ Instead of Actual Innovation,” is no exception. He persuasively argues that as large organizations face continuous disruption, their ability to innovate is no longer an “add-on”; it’s their means of survival. And yet, they consistently fail to innovate.

The reason, as Blank sees it, is that while transforming from ambitious startups bubbling with innovative ideas into mature commercial entities, organizations build “processes.” Although processes diminish the overall risk of organizational malfunction, each layer of process reduces organizational agility and responsiveness to new threats and opportunities. Eventually, organizations begin to value “process” over “product,” and that kills innovation.

At this point, corporate innovation becomes “innovation theater,” a set of “activities” that may build and shape culture but fail to come up with viable products. (This idea is very close to my heart: back in 2015, I fretted that organizations were faking innovation.)

While Blank’s explanation of what is wrong with corporate innovation is, as usual, right on point, I was surprised by his uncharacteristic reluctance to propose ways to address the problem. Sure, Blank argues that innovation activities and processes should be part of an overall plan, and his idea of an Innovation Doctrine is intriguing, however vaguely articulated.

At the same time, I’d disagree with Blank that processes as such hurt innovation. In my opinion, corporate innovation suffers not from an overabundance of processes but, quite the contrary, from a paucity of them. We still don’t have a sustainable process for handling the proverbial “innovation funnel,” for moving promising inventions and discoveries all the way from the front end of innovation to its back end.

That’s what we need to focus on. And we must hurry up, as the United States is losing its place at the top of the global innovation indexes. There is no time to waste.

 Image created by Tatiana Ivanov

Posted in Innovation | 3 Comments

Are We Heading For Crowdsourced VR Health Care?

Every now and then you see a headline just like the one above, indicating that virtual reality is moving into some unexpected new industry or enterprise. It can all be a little dizzying, even if you’ve been following VR since its humble modern-era beginnings in early Oculus demos. But what’s really interesting, particularly with regard to health care, is exploring how a simple VR application in an unexpected area could, in theory, evolve or branch out. Here are a few examples of what I mean.

VR in Museums – Lots of museums offer virtual tours online, through which you can click through galleries and exhibitions. But now, venues as prestigious as the British Museum in London are partnering with VR companies to design full-fledged VR tours as well. It’s a whole new way for people to explore remotely. But think of the implications for tourism more broadly. Could more experiences like these lead to full VR city walking tours, incorporating multiple attractions at once?

VR in Casinos – Various components of casinos have been reimagined in VR. Naturally, a few simple poker experiences led the way. We’ve since seen some of the popular free slots displayed at international SlotSource platforms adapted to VR as well. But what if these slots, poker games, and other casino experiences weren’t one-off VR games? Could this lead to entire virtual casinos within which gamers could mingle and stop off at games of their choice?

VR in Cars – Racing was one of the early genres to really realize the potential of VR gaming. As a result, there are plenty of different VR driving experiences. Some are more realistic than others, but what if developers focused more on realism? Could VR driving be used to test young drivers? Or help people test drive cars they might want to buy in rapid succession? Or even help city planners do practice runs of new traffic patterns?

The examples could carry on, but you get the idea. An individual VR experience that works well can hint easily at a whole category ripe for development. And that logic, when applied to VR in health care – and specifically diagnostics – is fascinating.

VR as a diagnostic tool has actually been buzzed about for years, dating back to the days before the technology’s commercial availability. As VR has become better known, though, the idea has taken clearer shape. Earlier this year, Wired published a relatively brief but helpful look at VR’s applications in diagnosing mental illnesses (and in some cases treating certain conditions, like PTSD). The thinking, right now, is that through careful VR analysis, medical professionals can accurately determine what may be ailing a given patient.

Expand this concept beyond mental illness, however, and imagine applications with social components, and you can begin to see how the idea – like those examples listed above – could branch out significantly. VR examination apps with diagnostic components and social capabilities, specifically, would allow patients to broadcast their own injuries and ailments to people remotely, in order to receive diagnostic opinions. Ideally, that would mean physicians, but it’s highly possible it could mean other things too: friends, family, medical communities online, or even social network groups dedicated to various medical purposes.

In short, while it may seem like a stretch now, there’s a certain logic to the idea of near-future crowdsourced diagnostics in VR.

Posted in Crowdsourcing, Health Care | Leave a comment

A confession of an open innovation manager

Over the years, I’ve worked with many companies that tried to apply open innovation approaches to solving their problems. Some of them succeeded, some failed, with the rest falling somewhere in between. But all the successful companies shared one common feature: they had established internal innovation programs.

Please note that I’m not saying “centralized” or even “formal” but established, meaning that internal structures and processes were already in place to identify issues (“jobs-to-be-done”) to be resolved with innovation tools. And yes, in many cases the programs were formal, led by corporate R&D, Marketing, or a specialized innovation unit. However, I can’t remember a single company that succeeded in open (“external”) innovation without first adopting internal innovation.

Why? Among many reasons, one stands out for me. Innovation requires extensive internal business development, a process by which members of the innovation team try to “sell” new ways of solving problems to other, often skeptical and reluctant, corporate functions and units. This isn’t easy by itself; there is a reason innovation is called a change management process.

But with the added complexity that comes with open innovation, this internal business development can become a nightmare. A small open innovation unit (it’s always small because open innovation teams are routinely under-resourced) struggles to find internal “clients” for things that sound complicated and often counterintuitive. It’s like going door-to-door around the neighborhood trying to sell a product no one has ever heard of.

When a newly founded open innovation team joins a larger corporate innovation function, selling its “products” becomes more organic and therefore more manageable. When the open innovation team arrives in an empty space, nothing works.

For someone who for the past 15+ years has been preaching the virtues of open innovation, this might be a strange confession to make. But I’ll make it nonetheless: there is no such thing as open corporate innovation.

There is innovation, a process deeply rooted in corporate strategy and operations that addresses the company’s most strategic issues. This innovation is a single body: one side of it draws on the collective wisdom of the company’s employees, while the other side extends beyond the corporate walls, reaching out to diverse pools of external talent.

Creating open innovation programs without establishing internal ones first is like growing a tree without roots. Or, perhaps, building a house with a roof but no walls.

Image credit: Tatiana Ivanov

Posted in Innovation | 5 Comments

Measuring innovation, one patent at a time (or all of them at once)

Measuring innovation is tough. To begin with, innovation is rooted in creativity—and measuring creativity isn’t straightforward, to say the very least. Besides, innovation is about transforming creativity into value—and measuring value isn’t easy, either, even if you measure it in dollars (or any other currency).

But you’re still expected to measure it. As someone great (whether it’s Deming or Drucker) has said, you can’t manage what you can’t measure.

Currently, the most common proxy for innovation is patents. This is the measure of innovation I myself used to compile a list of specific socioeconomic factors that favor or obstruct innovation. More precisely, it’s a combination of the number of filed patents (innovation “quantity,” so to speak) and their citation index, which is supposed to reflect patent quality (on the assumption that the more a particular patent is cited, the more influential it is).

It’s easy to criticize patents as a measure of innovation. Patent quantity is obviously the less defensible of the two, as the number of frivolous (or simply “cloned”) patents keeps rising. Unfortunately, because the pattern of patent citation changes over time, comparing the quality of patents issued at different times becomes almost meaningless, too.

Is there anything better than the patent number and citation index? Prof. Dimitris Papanikolaou of the Kellogg School of Management thinks so. Prof. Papanikolaou and his colleagues have decided to analyze the patent text. They reasoned that if a patent was truly groundbreaking, then its text would be unique, i.e., dramatically different from any previous patent. However, as subsequent inventors would start building on this patent, many follow-up patents would have similar text. The researchers, therefore, assigned higher quality scores to patents with text that did not resemble earlier patents but did resemble subsequent ones.
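The intuition can be captured in a toy sketch. This is emphatically not the researchers’ method (they applied far more sophisticated text-similarity measures to millions of patents); here, word-set overlap stands in for textual similarity, and the score rewards resemblance to later patents while penalizing resemblance to earlier ones. All the example “patent texts” are invented:

```python
def jaccard(text_a, text_b):
    """Crude similarity proxy: overlap between the two texts' word sets."""
    a, b = set(text_a.lower().split()), set(text_b.lower().split())
    return len(a & b) / len(a | b)

def novelty_score(patent, earlier, later):
    """High when the text resembles later patents but not earlier ones."""
    backward = max((jaccard(patent, p) for p in earlier), default=0.0)
    forward = max((jaccard(patent, p) for p in later), default=0.0)
    return forward - backward

earlier = ["mechanical telegraph lever apparatus"]
later = ["electric signal transmission over wire",
         "improved electric signal relay over wire"]

breakthrough = "electric signal transmission apparatus over wire"
incremental = "improved mechanical telegraph lever apparatus"

print(novelty_score(breakthrough, earlier, later))  # positive: ahead of its time
print(novelty_score(incremental, earlier, later))   # negative: rehashes the past
```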

By analyzing the texts of more than nine million patents filed with the U.S. Patent Office since 1836, the researchers showed that their measure of patent quality very closely correlated with the patent citation index. Moreover, their measurement performed even better than the citation index in predicting breakthrough inventions of the 19th and 20th centuries, including the telegraph, television, plastics, and genetic-engineering technologies.

It’s interesting that the new measure of innovation is still patent-based. I wonder if a measurement based on something else can be created.

Image credit: https://www.inventright.com/help/blog/do-you-really-need-a-patent

Posted in Innovation | Leave a comment