Working a Crowd

If crowdsourcing has not yet become a mainstream innovation tool, it is definitely not for lack of attention. Crowdsourcing remains a topic of intense academic study, and a recent paper by researchers from Simon Fraser University in Canada is a case in point. Written by John Prpic and co-authors and titled “How to Work a Crowd: Developing Crowd Capital through Crowdsourcing,” the article provides theoretical background and practical guidance for managers wanting to use crowdsourcing to advance their business objectives. Presented below are a few key points I’ve extracted from the paper, along with my comments.

  1. The authors begin with a crowdsourcing typology. They define crowdsourcing as “an on-line, distributed problem solving model…[that allows]…approaching crowds and asking for contributions [that] can help organization develop solutions to a variety of business challenges.” Using a number of criteria, Prpic et al. divide crowdsourcing into four categories: crowd-voting, idea crowdsourcing, micro-task crowdsourcing and solution crowdsourcing. Naturally, the four categories differ in the ways crowdsourced information is collected and processed; consequently, Prpic et al. suggest that managers clearly understand their business needs in order to match them to a specific crowdsourcing category.

My comment: Defining and categorizing things is what academic researchers are especially good at. I like their definition of crowdsourcing, and their typology looks solid. My only concern is that for managers who have no prior experience with crowdsourcing, choosing between different crowdsourcing types might be too complicated, if not outright intimidating. My advice to them would be this: remember that above all other things, crowdsourcing is a question. And it doesn’t really matter what this question is about, for as long as it is well-thought-out, properly defined and clearly articulated. So, forget for now about definitions and typology; focus instead on problem definition and try to understand what kind of responses will represent a solution to your problem. If this understanding is expressed in a simple and coherent problem statement followed by a set of clear-cut requirements for a successful solution, your crowdsourcing campaign will be just fine.

  2. The other indispensable part of any crowdsourcing campaign, in addition to a problem, is a crowd, and Prpic et al. correctly point out the need to “construct” a crowd. Two aspects are especially important in this context: the size of the crowd and its composition. The authors seem to favor the idea that larger crowds are generally more advantageous than smaller ones; however, they also see benefits in working with smaller (“closed”) crowds. At the same time, they state that “[d]ifferent crowds possess different knowledge, skills and other resources, and accordingly, can bring different types of value to an organization.” Taken to its logical conclusion, that would mean that an organization should try to construct a customized crowd for each crowdsourcing campaign.

My comment: Constructing crowds of meaningful size and diversity is a long and expensive process. Sure, once “constructed,” crowds can be custom-modified, and available technology makes this task feasible. Yet I’d strongly advise against playing too much with crowd size and composition. First, selecting the “correct” participants for your crowdsourcing campaign only makes sense if you know them all. This is possible if you work with crowds composed of your own employees and/or a pool of trusted collaborators (academic partners, suppliers, selected customers, etc.). However, if you go outside your company–when using external innovation portals or innovation intermediaries–such a selection becomes cost-ineffective at best and outright impossible at worst. Second, and even more importantly, the popular belief that only people with “relevant knowledge and expertise” can solve your problem is plain wrong. The experience of crowdsourcing experts, such as InnoCentive, clearly shows that innovation can come from completely unexpected–and therefore unpredictable–sources; moreover, it has been shown time and again that a solver’s likelihood of solving a problem actually increases with the distance between the solver’s own field of technical expertise and the problem’s domain. Instead of trying to select the correct “solvers” for their problem, managers would do better to spend time describing what a correct “solution” to this problem should be. As pointed out above, composing a powerful problem statement followed by a set of clear-cut requirements for the requested solution is a better way to make your crowdsourcing campaign successful.

  3. The authors emphasize that simply engaging a crowd and successfully acquiring the desired contributions is not enough for the ultimate success of a crowdsourcing campaign. Equally important is the process of internally assimilating the acquired crowdsourced information. To achieve this goal, organizations “need to institute internal organizational processes to organize and purpose the incoming knowledge and information.”

My comment: Here I completely agree with Prpic et al., for I’ve witnessed first-hand multiple examples of nicely designed and skillfully implemented crowdsourcing campaigns that eventually failed just because the campaign organizers had no established internal structure to “marry” outside knowledge with the knowledge produced inside. First, more often than not, outside knowledge, especially that collected in the course of solution crowdsourcing, comes only “half-baked,” i.e. in need of further processing using internal resources. Second, there is a cultural aspect: the notorious “not invented here” syndrome is alive and well and is capable of preventing external knowledge from taking hold in any organization. Managers who dream of using crowdsourcing should therefore start with “internal crowdsourcing” that brings together their own employees first. Such internal crowdsourcing could be run through internal innovation networks (INNs). INNs not only foster the very culture of collaboration, bringing together corporate units (R&D, business development, marketing, etc.) that in many organizations have no institutional platform to communicate on strategic issues; they also provide intellectual and operational support for the company’s external innovation programs. Once an organization has mastered the process of internal crowdsourcing, going outside often means just expanding its technological capabilities (the existence of other important issues, such as IP protection, notwithstanding).

In conclusion, academic literature keeps producing useful examples of “best practices” in using crowdsourcing to solve various technological and business problems. Yet, as we all know, the best practices are those that work specifically for our organization. So managers aspiring to become masters of crowdsourcing should not feel paralyzed by the growing body of (often conflicting) crowdsourcing literature; they should start running their own crowdsourcing campaigns. After all, the best–and the only–way to learn to swim is to get into the water.

Posted in Crowdsourcing

Not All Innovation Models Are Created Equal

It’s one of the most popular topics in innovation discussions: why does innovation fail? How many times have you heard the following narrative? With great fanfare, the XYZ Company launches an innovation initiative. Employees are urged to submit ideas, and a great number of them are generated in a short time. And then…nothing happens. The vast majority of the ideas are simply useless, and for the few that make sense, there is no budget. As unwanted ideas pile up, employee enthusiasm wanes. The initiative quietly dies, and innovation becomes “a four-letter word.”

The usual suspects have long been identified: poor customer insight, lack of executive leadership, lack of innovation strategy, weak innovation culture–take your pick. Yet no one seems to ask a simple question: does the model of innovation described above actually work?

In organizations, innovation can flow in two major directions: bottom-up and top-down. In the bottom-up model, the one the XYZ Company has chosen, the focus of innovation is on ideas. Ideas are collected on the ground and then channeled up to the organization’s leadership. The popularity of this model is fueled by a widespread belief that innovation must harness the collective wisdom of the whole organization. Unfortunately, the bottom-up model has a number of serious flaws. First, employees, especially those on the lower rungs of the organizational ladder, usually have only a vague understanding of the strategic goals of the organization; consequently, their ideas are often completely misaligned with the organization’s real needs. Second, the burden of evaluating and implementing submitted ideas usually falls on business units that already have a full load of their own work. To make room for “newcomers,” business units have to kill existing projects, and killing projects is not something organizations are good at. Third, in most large organizations, R&D budgets for the next year are usually drafted no later than Q3 of the prior year. That means that new projects receive no immediate funding and have to wait at least a few months to get funded, at which point their utility is often highly questionable (not to mention how detrimental this delay is to employee morale). One can go on with the criticism, but the bottom line is that for organizations with a shorter history of innovation programs, the bottom-up model of innovation doesn’t work.

Is there a plausible alternative? Yes: the top-down approach. In the top-down model of innovation, the focus is on problems. The organization’s leadership formulates problems that are crucial for the organization and then moves them down the ladder for employees to suggest solutions. The ways the problems are presented to employees may vary and can include innovation jams, innovation contests or challenges posted to internal innovation networks. The benefits of this approach are numerous. First, as long as the problem addresses an organization’s strategic need, submitted solutions will always “make sense,” regardless of whether they’re successful. Second, coming from the top of the organization, the proposed problem has, from the very beginning, its own executive sponsor whose responsibility is to ensure proper funding for effective solutions. Third, the organization remains in full control of the number of innovative initiatives it pursues: if it can afford multiple initiatives launched in parallel, fine; if it struggles with a lack of resources, initiatives can be handled one at a time. Finally, giving employees specific feedback about the value of their solutions is much easier and more productive than explaining to them why their ideas were ignored. Regardless of whether an employee “wins” or “loses” an innovation contest, he or she feels engaged and appreciated.

No, I’m not saying that the bottom-up model of innovation has no right to exist. In mature organizations–where employees know what kind of innovation the organization needs; where a robust protocol for evaluating unsolicited ideas exists; where a special fund to pay for “spontaneous” projects is set aside–it can be remarkably successful. But if your organization is at the very beginning of a long, bumpy road to innovation glory, going top-down is the way to go.

Posted in Innovation | 14 Comments

Innovativeness=Competitiveness

The Global Competitiveness Report 2014-15 is out. It’s described as an assessment of the competitiveness of 144 economies based on 12 “pillars,” which include institutions, infrastructure, health and education, labor market efficiency, technological readiness, innovation and business sophistication. Here is the list of countries that make up the top 10 of the Global Competitiveness Index (GCI):

  1. Switzerland
  2. Singapore
  3. United States
  4. Finland
  5. Germany
  6. Japan
  7. Hong Kong
  8. Netherlands
  9. United Kingdom
  10. Sweden

Look familiar? It should. The top of the Global Competitiveness Index looks remarkably similar to the top of the Global Innovation Index 2014 (GII) (I wrote about it recently). Here are the top 10 countries from this year’s GII:

  1. Switzerland
  2. United Kingdom
  3. Sweden
  4. Finland
  5. Netherlands
  6. United States
  7. Singapore
  8. Denmark
  9. Luxembourg
  10. Hong Kong

Admittedly, two countries made the first list but not the second (Germany and Japan), and two countries made the second but not the first (Denmark and Luxembourg); other than that, the usual suspects are just trading places a bit, with Switzerland number one on both lists.
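For the curious, the overlap between the two top-10 lists is easy to verify with a few lines of Python (country names copied from the lists above, with “Netherlands” spelled in full):

```python
# Sanity-checking the overlap between the GCI and GII top-10 lists.
gci_top10 = {"Switzerland", "Singapore", "United States", "Finland", "Germany",
             "Japan", "Hong Kong", "Netherlands", "United Kingdom", "Sweden"}
gii_top10 = {"Switzerland", "United Kingdom", "Sweden", "Finland", "Netherlands",
             "United States", "Singapore", "Denmark", "Luxembourg", "Hong Kong"}

print(sorted(gci_top10 - gii_top10))  # ['Germany', 'Japan']
print(sorted(gii_top10 - gci_top10))  # ['Denmark', 'Luxembourg']
print(len(gci_top10 & gii_top10))     # 8 countries appear on both lists
```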

Do we need any extended comments? No. The conclusion is clear: to be competitive, countries must innovate. A shorter version of the same: innovativeness=competitiveness.

Posted in Global Innovation

What Can Crowdsourcing Do?

I’m often asked questions about crowdsourcing. Usually, they revolve around this central theme: what can crowdsourcing do? Can crowdsourcing solve this problem? Can crowdsourcing solve that problem? On occasion, a more perceptive question is posed: can crowdsourcing define a problem?

My answer to all these questions is the same: yes, it can. What I want my interlocutors to understand is that crowdsourcing is first and foremost a question, a question that you ask a carefully selected crowd of people. And it doesn’t really matter what this question is about, for as long as it is well-thought-out, properly defined and clearly articulated. Yes, it can be a question about a solution to a problem. Yes, it can be a question about the problem itself.

Two examples of using crowdsourcing in both incarnations come from the same organization, Harvard Medical School. The first shows how HMS scientists used crowdsourcing to solve a problem. This particular problem was how to improve the capacity of a DNA sequencing algorithm employed in one of the HMS projects. (Let me skip the technical details here because I wrote about this case only a few weeks ago.) HMS first decided to solve the problem in-house and indeed made a significant (5.5-fold) improvement in the algorithm’s capacity. But this wasn’t enough, and they launched a two-week crowdsourcing campaign. 122 algorithms from outside HMS were submitted, and the winning solution provided a 1,000-fold improvement over the initial algorithm and a 180-fold improvement over the internal solution.

But a few years earlier, HMS had put the crowdsourcing approach in reverse: they used it to define a problem. Specifically, they asked the question: what do we not know to cure Type 1 diabetes? The idea behind the question was that, as with every prominent scientific topic, Type 1 diabetes research was following a limited number of popular directions, chasing essentially the same set of problems. HMS decided to ask members of the Harvard community, as well as the general public, to identify “neglected” problems, problems that for whatever reason were off the radar of existing labs involved in Type 1 diabetes research. Essentially, HMS wanted the crowd to come up with different, better problems, regardless of whether the crowd had the expertise or resources to solve them.

The results were quite impressive. Of a total of about 190 entries to the contest, 12 were chosen as the most “out-of-the-box.” (Interestingly enough, among the people submitting winning proposals were a diabetes patient, an undergraduate student, an HR representative and a researcher with no immediate expertise in the diabetes field.) Some of the most promising problems were later converted into bona fide research projects.

So when asked what one needs to run a successful crowdsourcing campaign, my answer is, only two things: a question and a crowd.

Posted in Crowdsourcing | 3 Comments

New Survey: Incentivizing Employees to Innovate

(This post originally appeared on Innovation Excellence)

More and more businesses view innovation as a new paradigm for achieving competitive advantage. Now businesses must focus on how to make the innovation process more effective. Experts and innovation practitioners agree that innovation programs thrive in organizations that have established a culture of innovation. They also agree that a culture of innovation is first and foremost a culture of employee engagement: for innovation programs to succeed, organizations must ensure employee participation in these programs.

A question then arises: should organizations incentivize employee participation in innovation programs? Opinions diverge on this issue. Some argue that innovation is based on creativity, and creativity relies mostly on intrinsic motivators, such as natural curiosity or the thrill of solving a difficult problem. Extrinsic motivators, including financial incentives, can therefore do little to make a person more innovative. Others insist that innovation is no different from other business processes; consequently, established corporate incentives (formal and informal, monetary and non-monetary) should be used to reward and recognize innovation activities.

To gain more insight into this issue, Doug Williams of IX Research and I are conducting a study to evaluate whether and how employee engagement increases the efficiency of innovation programs. The research aims to answer the following key questions:

  • Does employee engagement have a positive impact on the success of innovation programs?
  • Do organizations provide incentives to employees to encourage participation in innovation programs?
  • What specific forms of recognition or reward do organizations use to encourage employee participation in innovation programs?

Whether you’re a seasoned innovation practitioner or just thinking about establishing innovation programs in your organization, we want to know your opinion. You can participate in the study by using the following link. Those who complete the survey will receive a copy of the aggregate survey results. The data we gather will be used to develop an IX Research report that provides guidance and recommendations to corporate innovation teams, human resources departments, and C-suite executives about how best to engage employees in the innovation process.

Posted in Rewards and Recognition

All Innovation Is Local

I like this phrase: all politics is local. Ascribed to the late Speaker of the U.S. House of Representatives Tip O’Neill, the phrase means that all political decisions, regardless of their purpose and scale, must take into account the interests of local constituents, the folks who send politicians into office.

In a sense, innovation is local too. What I mean is that no matter how many “general rules” of innovation we might invent, every industry and every field will modify these rules to fit its specific needs and circumstances. Take, for example, rapid prototyping. Everyone would agree that the efficiency of the innovation process can be dramatically improved by creating a quick-and-dirty version of a new product or service, allowing immediate assessment of its feasibility, scale-up potential and, most importantly, attractiveness to end-users. Such rapid prototyping not only speeds up the innovation pace; it also saves a lot of money by weeding out failed options in a timely manner (remember the “fail fast, fail cheap” mantra?). Rapid prototyping has proven its worth in many cases, most notably in the software and consumer goods markets.

But is rapid prototyping feasible in every field? Unfortunately, no. Consider drug development. There are a number of research tools–in vitro and in vivo systems, animal models of human diseases, etc.–that could conceivably serve as “quick-and-dirty” prototypes for testing the efficacy of candidate drugs. However, in such a tightly regulated business as drug development, this isn’t enough. The proof that a candidate drug works–and this is the only proof that matters to the FDA–comes as late as the Phase III clinical trial. As I pointed out in my previous post, there are two extremely troubling aspects of Phase III clinical trials: first, they are shamefully expensive, costing up to a billion dollars, and, second, horribly failure-prone, with roughly 30-40% of Phase III clinical trials ending in failure. Is it then surprising that while investment in pharmaceutical R&D has doubled since 1997, the number of approved drugs over the same period has been largely stagnant? Any approach that would allow a reliable estimate of drug efficacy before the Phase III clinical trial (rapid prototyping of sorts) would have a dramatic positive effect on the whole process of drug development.

Or take the celebrated Voice of the Customer (VOC) approach. It’s commonplace now to say that any successful innovation must start with identifying unmet customer needs. Or, to put it more practically: when starting to design a new product or service, one must make sure that in the end there will be consumers and customers willing to pay for it. Consequently, a great number of techniques aimed at identifying what customers do or may want have been developed: market surveys, focus groups and ethnographic methods, to name just a few of the most popular.

The pharmaceutical industry uses the VOC approach too, but in a very specific way. In a sense, there is no need to identify unmet medical needs in drug development: diseases for which there is no cure are well known, and their list is, unfortunately, depressingly long. And, of course, there is no need to ask sick people whether they want to be cured. So the decision about which specific disease should be attacked by a new drug development campaign isn’t made based on customer wishes. Instead, other factors are considered: the presence of a suitable (“druggable”) molecular target, available technology to reach this target, the size of the patient population expected to respond to the proposed drug, the anticipated willingness of insurers to pay for the future treatment, etc. (Often, such decisions are clouded by interference from external forces, such as patient advocacy groups; I wrote about this earlier.)

This is not to say that pharmaceutical companies completely refuse to listen to their customers. Over the past few years, companies have begun actively soliciting patient input into the design of clinical trials–to make sure that the treatment protocols are patient-friendly enough to ensure high patient compliance. Transparency Life Sciences, which calls itself “the world’s first drug development company based on open innovation,” went a step further: it uses crowdsourcing to design clinical trial protocols for a number of indications.

So what is my point? My point is that as innovation management matures, we should start creating customized sets of innovation tools appropriate for specific industries, disciplines and business functions. In other words, for innovation to realize its global appeal, it must go local.

Posted in Health Care

Should We Celebrate Failure Worth A Billion?

Like every popular topic, innovation is a powerful magnet for clichés. Some of them obscure more than illuminate. For example, I’m not sure that mixing innovation with DNA is a good idea. Though I kind of understand what Clayton Christensen and his co-authors had in mind when writing about “the innovator’s DNA” (“…each individual…ha[s] a unique innovator’s DNA for generating breakthrough business ideas”), I involuntarily cringe when reading that “successful innovation programs have a DNA consisting of seven elements.” Ouch, these days even kindergarteners know that DNA consists of only four elements!

Another innovation cliché that really rubs me the wrong way is “celebrating failure.” Sure, we all know that innovation requires a lot of experimentation, and experimentation ends in failure more often than in success. Absolutely, we must accept failures, learn from them and try again and again, until we succeed. But why do we need to celebrate failures?

In every language, in every culture, the word “failure” carries a distinctly negative connotation, and placing it in the same sentence with “innovation” makes no difference. By calling for the celebration of innovation failures, we might be announcing our membership in the Sacred Society of Innovators (i.e. those with a unique innovator’s DNA), but we do very little to advance innovation in places, still depressingly numerous, where the fear of failure keeps nipping innovation in the bud.

Besides, some innovation failures are so expensive that they give more reason to mourn than to celebrate. Take, for instance, drug development. Even empowered by recent scientific breakthroughs, modern drug development remains a highly unpredictable business. The ultimate proof that a candidate drug has clinical benefits (meaning that it may be approved by the FDA for therapeutic treatment) comes as late as the Phase III clinical trial. It has been calculated that it costs about $1.3 billion to develop a new drug and that 90% of these expenses represent the cost of Phase III clinical trials. Do we have any reason to celebrate a failure worth a billion dollars, given that the failure rate of Phase III clinical trials reaches a mind-boggling 30-40% (and even higher in some therapeutic areas, such as cancer)?
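To make the scale of such a failure concrete, here is a back-of-the-envelope calculation based solely on the figures cited above (a ~$1.3 billion development cost, ~90% of it sunk in Phase III, a 30-40% Phase III failure rate); this is an illustration, not an industry cost model:

```python
# Expected loss per drug candidate entering Phase III, using the cited figures.
total_cost = 1.3e9       # ~$1.3 billion to develop a new drug
phase3_share = 0.90      # ~90% of expenses fall on Phase III trials
phase3_cost = total_cost * phase3_share   # ~$1.17 billion at stake in Phase III

for failure_rate in (0.30, 0.40):         # cited Phase III failure rates
    expected_loss = phase3_cost * failure_rate
    print(f"failure rate {failure_rate:.0%}: "
          f"expected loss ≈ ${expected_loss / 1e6:.0f}M per candidate")
```

In other words, every candidate drug entering Phase III carries an expected loss of roughly $350-470 million before a single result is known.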

We shouldn’t treat innovation as if it’s any different from other activities. We live in a success-driven society. We should strive for success, and success only, be it an innovation project, a manufacturing process or the safety of our borders. We should work very hard on decreasing the rate of failure in all of these endeavors–and, sure, it’s time to address the question of why drug development has become so inefficient and expensive. And we should reserve celebration for those occasions, however rare, when we succeed.

I’m even ready to consider this attitude an element of our innovator’s DNA.

Posted in Portfolio Management | 11 Comments

No Change On The Top Of Innovation Olympus

Recently, I wrote about the annual 2013 Global Innovation Index (GII 2013), compiled by Cornell University, the European Institute of Business Administration and the World Intellectual Property Organization, which ranked the innovation capabilities and achievements of 142 countries. I made a curious observation: the top of the GII 2013 list was almost identical to the top of the 2013 Freedom of the World report ranking the democratic credentials of the world’s countries. That means that a country’s ability to innovate strongly correlates with the presence of developed democratic institutions. Or, putting this differently: to be innovative, you have to live in a free country.

Time flies, and the 2014 Global Innovation Index is out. This year, it profiles 143 countries using 81 indicators gauging elements of the national economies related to innovation activities (institutions, human capital and research, infrastructure, etc.).

So, what has changed on the Innovation Olympus since last year? Not much. As a year before, Switzerland tops the list of the world’s innovation leaders, a position this country of only eight million people has held since 2011. Four countries that were in the top six last year–the United Kingdom, Sweden, Finland and the Netherlands–are still there, having just swapped places with each other a bit.

The United States occupies the 6th position, dropping from 5th in 2013. Does this support the often-expressed opinion that the U.S. is losing its innovation edge? Not yet, at least as far as the GII is concerned: in 2011 the U.S. was in 7th place, and in 2012 even in 10th.

Let’s wait and see, and not only for the GII 2015, but for other indexes and indicators as well. But for now, it seems that the state of our innovation is strong.

Posted in Global Innovation | 1 Comment

Show Me The Money!

(This post originally appeared on the Danish Crowdsourcing Association website)

I strongly believe that as an open innovation tool, crowdsourcing has a bright future, but only if it proves its economic worth. In other words, when properly designed and executed, a crowdsourcing campaign should be able to solve a problem in a more cost-effective way than other tools.

This proof, however, doesn’t come easy, as economic analyses of crowdsourcing campaigns, whether successful or not, usually aren’t publicly available. Yet, fortunately, such data do appear in the open from time to time–and they’re nothing short of spectacular.

In 2010, Forrester Consulting published a case study describing the open innovation program at Syngenta, a large multinational agricultural company. The study analyzed the total economic impact and return on investment (ROI) Syngenta had realized by using a crowdsourcing platform provided by InnoCentive, an open innovation intermediary. Forrester’s analysis identified a number of benefits Syngenta gained from this cooperation, including cost savings from finding solutions to difficult R&D problems, productivity savings for Syngenta’s researchers and a reduction in intellectual property transfer time. The total value of these benefits was estimated at $11,861,688 over three years. Given that the total cost of using InnoCentive’s services over the same period amounted to $4,200,567, Forrester calculated that the three-year, risk-adjusted ROI for Syngenta was 182%, with a payback period of fewer than two months. Isn’t that cool?
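For the record, the quoted ROI follows directly from the two dollar figures above via the standard formula, ROI = (benefits − costs) / costs:

```python
# Reproducing the Forrester ROI figure from the cited numbers.
benefits = 11_861_688   # three-year total benefits (USD)
costs = 4_200_567       # three-year cost of InnoCentive services (USD)

roi = (benefits - costs) / costs
print(f"ROI ≈ {roi:.0%}")   # ≈ 182%, matching the risk-adjusted figure in the study
```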

More recently, a piece of data that would make crowdsourcing fans really happy came out of Harvard Medical School. In one of their research projects, HMS scientists employed a DNA sequencing algorithm capable of processing 100,000 sequences in 260 minutes. This was way too slow, and in order to improve the efficiency of the algorithm, HMS hired a full-time developer with an annual salary of $120,000. The developer did lower the processing time to 47 minutes, a 5.5-fold improvement, but this was still not fast enough. HMS then launched an open innovation contest that ran for two weeks and offered $6,000 in prize money. 733 participants took part in the competition, and 122 of them (representing 69 countries) submitted algorithms. The winning solution was capable of doing the desired job in 16(!) seconds, a 1,000-fold improvement over the MegaBLAST algorithm and a 180-fold improvement over the internal solution. Given the 20-fold difference in costs ($120,000 vs. $6,000), the HMS crowdsourcing campaign was overall 3,600-fold more cost-effective than the internal solution. Let me repeat: 3,600-fold more cost-effective!
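The fold-improvement figures quoted above are easy to reproduce from the raw numbers (260 minutes, 47 minutes, 16 seconds, $120,000 vs. $6,000); small differences from the quoted values come from rounding in the original write-up:

```python
# Reproducing the arithmetic behind the HMS figures cited above.
BASELINE_S = 260 * 60   # original algorithm: 260 minutes, in seconds
INTERNAL_S = 47 * 60    # in-house developer's version: 47 minutes
CROWD_S = 16            # winning crowdsourced solution: 16 seconds

internal_speedup = BASELINE_S / INTERNAL_S    # ≈ 5.5x, as quoted
crowd_vs_baseline = BASELINE_S / CROWD_S      # 975x, quoted as ~1,000x
crowd_vs_internal = INTERNAL_S / CROWD_S      # 176.25x, quoted as ~180x

cost_ratio = 120_000 / 6_000                  # 20x cheaper
cost_effectiveness = crowd_vs_internal * cost_ratio  # ≈ 3,500x, quoted as ~3,600x

print(f"{internal_speedup:.1f}x / {crowd_vs_baseline:.0f}x / "
      f"{crowd_vs_internal:.0f}x / {cost_effectiveness:.0f}x")
```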

However, as I said at the beginning, good things happen only when a crowdsourcing campaign is well designed and skillfully executed. And here is where the problem with crowdsourcing seems to reside: many organizations simply lack the appropriate expertise. But that is a topic for another conversation.

Posted in Crowdsourcing | 2 Comments

A silver bullet that was not

There are lies, damned lies and statistics

Alzheimer’s disease (AD) is a horrible illness. A progressive neurodegenerative disorder that destroys memory, abstract thinking and cognitive function, it’s the most common cause of dementia in humans, affecting as many as five million Americans and ranking as the sixth leading cause of death. The risk of developing AD doubles every five years after the age of 65, meaning that with 10,000 baby boomers turning 65 every day, the number of Americans with AD may reach 14 million by 2050. The havoc AD wreaks on the lives of affected people is enormous, and because AD patients require 24/7 care, it also profoundly changes the lives of their families. What’s more, AD is damned expensive, adding more than $220 billion to the U.S. healthcare bill in 2013 alone.

There is no real cure for AD. The five drugs approved in the U.S. for AD treatment (the last in 2004) can only offer a brief respite from some symptoms in some people. Even more worrisome, the development of new AD therapies has so far been a total disaster: over the period 2002-2012, 244 drug candidates were assessed in 413 clinical trials, but only one was approved by the FDA. That is a 0.4% success rate or, if you prefer, a 99.6% failure rate.

The main problem is AD diagnosis: similar to cancer, AD is often detected only after it has already done irreparable damage to the patient. For this reason, many experts believe that the key to finding an effective AD cure is identifying reliable biomarkers, molecules that could signal the imminent onset of the disease before its pathological symptoms become evident. Obviously, to be useful in a clinical setting, such molecules should be present in easily accessible body fluids, such as blood. Not surprisingly, therefore, the quest for a “blood test for Alzheimer’s disease” has become one of the holy grails of the medical field.

This explains the close attention paid to a recent article published in the journal “Alzheimer’s & Dementia.” A team of 27 researchers claims to have identified 10 plasma proteins that could predict the disease’s progression from a pre-AD condition to full-blown AD within a year of blood sampling. Promptly, the BBC jumped in and declared the study a “major step” towards an AD blood test.

It now appears that the jubilation was somewhat premature. DrugBaron, an influential blog covering the biopharmaceutical industry, has posted a piece that tears the above study apart. Having pointed out some minor deficiencies in the study design, DrugBaron focuses its criticism on the statistical treatment of the study results, arguing that the authors’ use of multivariate statistical analysis is questionable at best and outright wrong at worst. The piece’s conclusion is that the proposed cohort of protein biomarkers has no predictive power whatsoever.

Given the complexity of multivariate statistics, DrugBaron doesn’t blame the BBC or other news organizations for running “breakthrough” stories based on shaky science. But it reserves harsh words for the authors of the study, the “Alzheimer’s & Dementia” reviewers and the “AD experts” interviewed by the BBC, who should have known better and yet chose to ignore the obvious deficiencies of the study.

As for those waiting for a “blood test for Alzheimer’s disease,” the wait continues. And the wait for an AD cure continues too.

Posted in Health Care