If you wanted to determine who won a footrace, the easiest measurement would be to rank the runners by the time it took them to cross the finish line. That measurement may seem strikingly obvious, but it doesn’t capture the variety of standards you could use to evaluate the same event. In fact, most runners would consider beating their “personal record” a great accomplishment. Or, if the footrace were a fundraiser for a particular cause, it seems only fair to consider which participants actually brought in the most money. Interestingly enough, in one wacky race called the Krispy Kreme Challenge, any runner who can stomach a dozen doughnuts and then run 2.5 miles is considered a winner!
Measuring success may seem basic, but unfortunately many organizations implementing analytics are so fixated on the implementation that they either forget to measure the results or measure them inaccurately.
I worked with a large retailer that made a huge investment in analytics tools and technology. It planned to build “incremental lift” models. These models don’t predict who will buy; they predict who you can most influence to buy. The team developed a good methodology for selecting the customers it could most influence with mailings.
Prior to implementing the incremental lift models, the retailer had used standard “propensity to shop” models. When it moved to the new methodology, the team didn’t update the way it measured conversions. Within the first few months of implementing the new modeling methodology, conversion rates plummeted! The analytics team knew why – people who would have shopped regardless of the mailer were no longer being counted as conversions. But the management team didn’t understand this concept. The incremental lift method (and the team’s investment) was abandoned because two critical pieces were missing:
- A strong champion in place to explain the difference to the management team
- A change in the measurement that would have cleared up the problem
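To see why the old measurement made the new models look like a failure, here is a minimal sketch in Python. All of the numbers are invented for illustration; the point is that an incremental lift model should be judged on conversions *above the baseline* of a holdout group, not on the raw conversion rate alone:

```python
# Illustrative sketch with hypothetical numbers: raw conversion rate vs.
# incremental lift measured against a non-mailed holdout group.

def conversion_rate(conversions: int, total: int) -> float:
    """Fraction of a group that converted."""
    return conversions / total

# Customers the incremental lift model chose to mail
mailed_total, mailed_conversions = 10_000, 800    # 8.0% raw rate

# Comparable holdout group that received no mailer
holdout_total, holdout_conversions = 10_000, 500  # 5.0% would shop anyway

raw_rate = conversion_rate(mailed_conversions, mailed_total)
baseline = conversion_rate(holdout_conversions, holdout_total)

# The metric that matters: conversions the mailer actually caused
incremental_lift = raw_rate - baseline

print(f"Raw conversion rate:  {raw_rate:.1%}")
print(f"Baseline (no mailer): {baseline:.1%}")
print(f"Incremental lift:     {incremental_lift:.1%}")
```

A raw rate of 8% might look worse than the old propensity models delivered, while the 3-point incremental lift could still be the better business outcome – which is exactly the conversation the management team never had.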
Also keep in mind that a measured change in performance doesn’t necessarily mean the models did what you wanted them to. Sometimes models have a deterrence effect. Other times models pick up on things people would have done anyway. And finally, omitting explanatory variables can ruin otherwise good analysis. For example, student performance on standardized tests is best measured by comparing the same student’s scores over time, not by looking at a single point in time.
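The student-testing point can be made concrete with a toy comparison (the students and scores below are invented):

```python
# Hypothetical test scores: a point-in-time snapshot vs. change over time.
before = {"student_a": 60, "student_b": 85}
after = {"student_a": 75, "student_b": 86}

# Point-in-time view: student_b (86) "outperforms" student_a (75).
# Change-over-time view tells a very different story:
gains = {student: after[student] - before[student] for student in before}
print(gains)  # student_a gained 15 points; student_b gained only 1
```

The snapshot rewards whoever started strongest; the within-student gain is what actually measures progress.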
Finding the right measurement standard may be a bit of a challenge, but it’s essential for unlocking the full value of your results. After all, the last thing we would want to do as analysts is discount the success of an athlete who can run a couple of miles fueled by a dozen glazed doughnuts.
Next week I’ll address the potential need to change processes and incentives in the Myths and Realities of Successful Analytics series. Want to know more about incremental lift models? Check out this webinar or white paper on the topic.