“Failing fast and often” has become an innovation mantra. Not everyone, of course, takes this wisdom at face value. More tellingly, no one has taken the trouble to explain what “fast” and “often” actually mean when applied to failure.
Now some scientific data has emerged, thanks to a team led by Robert C. Wilson of the University of Arizona, Tucson. Dr. Wilson and his colleagues examined how the difficulty of training affects the rate of learning. They found that learning proceeds fastest when the difficulty of training is tuned to an optimal level, and that this maximum occurs at an optimal training accuracy (a measure of difficulty) of about 84% or, equivalently, at an optimal training error rate of around 16%. In other words, to learn successfully, one should be right roughly five times more often than wrong.
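For the curious, the headline numbers are easy to reproduce. Under the Gaussian decision-noise model the authors analyze, the reported optimal error rate of about 15.87% coincides with Φ(−1), the standard normal CDF evaluated at −1. Here is a minimal Python sketch of that arithmetic (reading the paper's Gaussian-noise result as Φ(−1) is my interpretation, not a quote from the paper):

```python
import math

# Phi(-1), the standard normal CDF at -1, expressed via the
# complementary error function: Phi(x) = 0.5 * erfc(-x / sqrt(2)).
# Per Wilson et al. (2019), this is the optimal training error rate
# under their Gaussian decision-noise model (an assumption on my part).
optimal_error = 0.5 * math.erfc(1 / math.sqrt(2))  # ~0.1587
optimal_accuracy = 1 - optimal_error               # ~0.8413
ratio = optimal_accuracy / optimal_error           # ~5.3 "rights" per "wrong"

print(f"optimal training error:    {optimal_error:.2%}")
print(f"optimal training accuracy: {optimal_accuracy:.2%}")
print(f"right-to-wrong ratio:      {ratio:.1f} : 1")
```

Rounded, that is the 84%/16% split quoted above, and a right-to-wrong ratio of roughly five to one.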
Sure, I understand the difference between innovation and the learning process that Dr. Wilson's team studied “in case of binary classification tasks and stochastic gradient-descent-based learning rules.” Sure, I understand that innovation involves a lot of experimentation, and that experimentation implies a lot of failure.
What I don’t understand is our obsession with “failure,” our treating it as an end rather than a means of the innovation process. (And I definitely refuse to celebrate failures.) What I also don’t understand is our willingness to replace data-driven innovation discovery with primitive A/B testing.
To succeed in innovation, we need a few things in place before experimentation begins: an innovation strategy, innovation processes, and innovation metrics, training, and incentives. That is what will make our experimentation more efficient and repeatable than winning the lottery.
Image taken from Wilson et al. (2019).