My colleague Ben Gilad posted a terrific essay on the shortcomings of “data-driven” decision-making.
I love data. I spent 15 years playing with the PIMS database alongside luminaries including Michael Porter, Sidney Schoeffler, and Robert Buzzell. Many years later, I can run a few hundred million simulations before breakfast and tell you what they mean before the coffee’s temperature drops to 80 degrees F. I told you, I love data.
But data isn’t learning, and learning isn’t just about the amount of data. Is one data point enough for learning? How about a trillion?
When smart people fail, it’s often (not always) because what they think they know is wrong. The deeper failure is one of knowledge, learning, and framing, rather than one of execution or data.
To see why, let’s look at rats.1
Imagine a T-shaped maze. A rat enters at one end of the T. It scurries along the T and soon faces a choice: go one way, or go the other.
If the rat goes one way, it gets a piece of delicious cheese. If it goes the other way, it encounters an irrevocably fatal electric shock. See illustration. (Note to the nervous: this is a thought experiment. No real rat was harmed in the writing of this essay.)
That mythical experiment parodies a serious idea, one-trial (one data point) learning, proposed by Edwin Ray Guthrie (1886-1959), a psychologist at the University of Washington.2 The joke, of course, is that a surviving rat hasn’t really learned. It just chose the right path by chance.
That’s all very amusing and/or academic, except not for the rat. As it turns out, though, one-trial learning can teach us something about decision-making.
If the rat survives, what has it learned? The answer is simple: nothing. The rat has no idea what would have happened otherwise. (For a different reason, neither does a rat that turns the other way.) The rat has no idea how the maze works, let alone that there is a maze.
After only one trip through the T, the rat hasn’t even had the opportunity to learn. The rat has only an anecdote, and perhaps a fine future career giving squeaky inspirational speeches. (Careful: “who moved my cheese” is already taken.) It — or we, when we’re the ones in the maze — only thinks it’s learned.
If one trial isn’t enough, however, how many does it take for us to know something? Too many. We don’t usually have enough time, let alone money or appetite for risk, to run clinical trials on our businesses. Exception: test markets, but even then, we never really know.
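To put rough numbers on “too many,” here’s a back-of-envelope sketch (my illustration, not the essay’s): how many yes/no trials it takes before a genuine edge reliably separates itself from coin-flipping, using the normal approximation to the binomial. The function name and the 95% confidence threshold are my own choices.

```python
import math

def trials_needed(p, z=1.645):
    # How many yes/no trials before a true success rate p reliably
    # looks better than a 50/50 guess (one-sided, ~95% confidence),
    # via the normal approximation to the binomial: we need
    # (p - 0.5) to exceed z * sqrt(0.25 / n), so n > (z * 0.5 / (p - 0.5))^2.
    return math.ceil((z * 0.5 / (p - 0.5)) ** 2)

for p in (0.55, 0.60, 0.75):
    print(f"true rate {p:.0%}: about {trials_needed(p)} trials")
```

Under these assumptions, a 55% edge needs on the order of 270 trials to show itself, a 60% edge about 68, and even a 75% edge about 11: far more than one trip through the maze.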
Fortunately, there is a middle path between one-trial learning and too-many-trials knowing: competing as a skill, as opposed to competing by counting on luck or competing by looking for certainty. We can greatly improve the quality of our strategy decisions at costs in time and risk that we can bear. Or, as a data-lover might put it:
For two examples, see Triple Sales! Go Upscale!, by Ben and me. That essay tells the stories of two companies that shunned one-trial learning. They opted instead for competing as a skill.
Where the stakes are high and the possibilities are plentiful, there is simulation, computer-based and/or role-playing (e.g., war games, which Ben and I have both applied many times). Those techniques can explore scenarios too risky to test in real life, uncover surprises in a safe environment, and offer rigor and impartiality. They’re not perfect, but 1) nothing is, and 2) they’re a lot better than squeaky inspirational speeches.3
By contrast, there’s the frenzied world of apps that I discussed in Strategy in the Modular Economy. In that world there’s no such thing as a test market; the test is the same thing as a launch. Companies jostle to enter the one-trial maze first. After we see which companies have succeeded and which have failed — mostly the former, since most of the latter don’t achieve visibility — we believe the stories of their inevitable success in finding the cheese.
Sometimes time is too short for simulation or war-gaming, and the question is simply whether we should take advice from anecdotes. For those situations, try a variant on the headline test: turn the advice around and see if the opposite is possible too. For example, consider “be first to market if you want to win.” Can you win if you’re not first? (In some markets, arguably yes; in others, arguably no.) If you can win without being first, then “be first” is a strategy option, not an obligation. And when you spot options, you spot opportunities for competitive advantage. By definition, competitive advantage comes from doing something valuable that your competitors are not doing.
You can be a furry hero telling squeaky stories, or you can understand the maze. Don’t worry about who moved your cheese yesterday. Your job is to find the cheese today.
1 You think people don’t find rats exciting? Put a rat in a room full of people and see what happens.
2 For more, see AlleyDog.com, “psychology students’ best friend.”
3 I came across an essay on LinkedIn that, while otherwise thoughtful, said a 60% accuracy rate in forecasting is poor. True, we’d like to do better than 60%. On the other hand, if the alternative is 50/50 guessing, then 60% is an advantage you can take to the bank. For comparison, casinos win a few percentage points over 50% when a gambler spins the roulette wheel (the house edge is between 2.7% and 5.3%).
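For the curious, the footnote’s arithmetic in a few lines. This is my toy comparison, not the essay’s; the roulette figures are the standard single-zero and double-zero house edges it cites.

```python
from fractions import Fraction

def edge(p_win):
    # Expected profit per $1 staked on an even-money bet:
    # win $1 with probability p_win, lose $1 otherwise.
    return p_win - (1 - p_win)

forecaster = edge(Fraction(6, 10))   # 60% accuracy: +20 cents per dollar
european   = edge(Fraction(18, 37))  # single-zero roulette: -1/37, about -2.7%
american   = edge(Fraction(18, 38))  # double-zero roulette: -2/38, about -5.3%
```

A 60% forecaster making even-money bets expects to earn 20 cents per dollar; the casino’s famous edge is a fraction of that. That is the sense in which 60% is an advantage you can take to the bank.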