In one of my previous posts I wrote that, when facing a problem, the majority of organizations have a natural inclination to begin the problem-solving process by engaging experts. Such an approach makes sense when an organization has dealt with a similar problem in the past and knows people who could potentially solve the new one too. (In fact, many large organizations maintain a roster of pre-selected consultants for each area of strategic interest.)
But what if you face a problem you haven't encountered before? In our fast-changing environment, this happens more and more often to more and more organizations. Where will you go to find experts in this uncharted territory? How will you know that the experts you're about to hire are really good? And an even more basic question: how do you know whether someone is an expert in this particular field at all?
The answer to the last question may look deceptively simple. Well, all experts are supposed to be present on LinkedIn or one of a plethora of similar professional networks. You go there, type in the appropriate keywords and, bingo, here is a list of everyone who might be considered an expert.
As a curious example of the perils of such a "targeted" search, here is the story of the 2006 ALS Biomarker Grand Challenge sponsored by Prize4Life, a nonprofit organization dedicated to finding a cure for Amyotrophic Lateral Sclerosis (a.k.a. Lou Gehrig's Disease). The purpose of this crowdsourcing campaign (run on the online platform provided by InnoCentive) was to find an effective biomarker that could measure the disease's progression (or regression, in the case of clinical trials) in ALS patients.
In 2009, Prize4Life awarded two “progress prizes” for solutions that had made the most significant progress towards meeting the final criteria of the challenge. One winner was Dr. Seward Rutkove, a neurologist at Beth Israel Deaconess Medical Center in Boston and a prominent researcher in the field of neuromuscular disorders, such as ALS. (Dr. Rutkove went on to win the $1 million Grand Prize in 2011.) But the other “progress prize” was awarded to Dr. Harvey Arbesman, a doctor in private practice in a suburb of Buffalo and someone virtually unknown in the ALS community. And why is that? Because Dr. Arbesman was a…dermatologist with no formal ties to the field of neuromuscular diseases. Although Dr. Arbesman’s biomarker did not fully meet the Challenge criteria, the sponsor of the Challenge immediately appreciated the potential of this biomarker in providing valuable insight into the fundamental mysteries of the disease.
Was there any chance that Dr. Arbesman would have been selected as an expert by an organization setting out to find an ALS cure? No. But Prize4Life didn't start by selecting experts; it started by formulating a problem to be solved and then talking to everyone who showed a proven capability to solve this particular problem, regardless of their formal expertise, medical certification, or place of employment. Prize4Life didn't go around looking for a solution. Instead, it announced that it had a problem and then waited for the right solution to find it.
As a result, Prize4Life went from literally nothing to a fully validated ALS biomarker in a matter of 3-5 years, a feat that normally takes at least twice as long. And this is no small matter, given that most ALS patients die within 2-5 years of diagnosis.
That is what I mean by saying that a properly designed crowdsourcing campaign is very cost-effective.
Image credit: Viktor Vasnetsov, “Knight at the Crossroads” (1878)
A simple and reasonably reliable way to check whether an expert is actually qualified is to apply the basic principles of information science. Suppose you need a doctor with a high level of expertise in a certain branch of medicine. You write to five doctors in the field whom you are confident of, based on whatever initial screening seems reasonable to you (the non-expert), and ask each for written recommendations. When you get their responses, you write the same letter to each of the doctors who received an initial recommendation. With three to seven iterations, you can vet just about anyone in any field of knowledge in a reasonably objective manner. At the very least, you can identify a large enough pool of professionals to be confident that most of them are board certified, experienced, and not pulling the wool over your eyes. This is essentially snowball sampling: the more independent referral chains converge on the same names, the more confidence you can place in those names.
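The iterative referral process described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration: the referral graph below is invented, and in practice the recommendations would come from the letters the commenter describes. Experts who are recommended by many independent chains accumulate the most endorsements.

```python
from collections import Counter

# Hypothetical referral graph: who each doctor would recommend.
# In practice, these answers would come from written recommendations.
referrals = {
    "dr_a": ["dr_c", "dr_d"],
    "dr_b": ["dr_c", "dr_e"],
    "dr_c": ["dr_d", "dr_e"],
    "dr_d": ["dr_c"],
    "dr_e": ["dr_c", "dr_d"],
}

def vet_experts(seed, referrals, rounds=3):
    """Snowball sampling: ask each contacted expert for recommendations,
    then contact the newly recommended experts in the next round.
    Returns a count of how often each name was endorsed."""
    contacted = set(seed)
    to_ask = list(seed)
    endorsements = Counter()
    for _ in range(rounds):
        next_round = []
        for expert in to_ask:
            for rec in referrals.get(expert, []):
                endorsements[rec] += 1
                if rec not in contacted:
                    contacted.add(rec)
                    next_round.append(rec)
        to_ask = next_round
    return endorsements

counts = vet_experts(["dr_a", "dr_b"], referrals)
# Experts endorsed most often across independent chains rise to the top.
print(counts.most_common())
```

In this toy graph, "dr_c" ends up with the most endorsements because every other doctor recommends them, which is exactly the convergence the commenter relies on. The choice of three rounds mirrors the "three to seven iterations" suggested above.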