Thursday, July 16, 2009

In defense of the big picture

"Confusion of goals and perfection of means seems, in my opinion, to characterize our age" -- Albert Einstein

Someone needs to defend the wisdom of looking at the big picture, so I am going to do just that. How many times have people set out to look at the forest, only to start looking at the trees, then at the leaves of the trees, then at the veins on the leaves? The problem is that while leaves and veins may be fascinating, the forest may be shrinking while you are looking at them, maybe even due to logging. Well, hopefully not that severe.

My analysis du jour was looking at a very clever and nicely sampled test that I had devised several months ago. The test did not survive the latest iteration of never-ending organizational change, and had to be ended prematurely after a few months in the market. I decided to take a closer look at the results anyway - running tests but never analyzing them or drawing conclusions is one of my pet peeves.

In the test, the target customer universe is randomly split into several groups, and each one of them is delivered a certain dose of our marketing poison (kidding, it's of course marketing manna): basically, a full dose, a half dose, and a quarter dose. A few months later, I am looking at the results to understand what happened. What we really want to look at first is whether consumer sales grew during that period of time - and by how much and for how long - not the details of how they grew. That's because at the end of the day, if you are not growing your subscriber/product/sales base, and getting more money out of it than you are putting in, nothing else matters.

Obviously, the first question I get is how the subscriber base grew - was it an increase in connects, a drop in disconnects, or a combination of the two - because anyone in marketing automatically thinks they only need to care about connects. Well, plainly speaking, that's wrong. Higher connects usually lead to higher disconnects, as a certain (and actually surprisingly high) percentage of customers are going to disconnect within the first month or two after connecting. Those disconnects are a direct result of the connects you are driving, and it would be incorrect to count all of the connects in. On the other hand, if a higher marketing dose results in lower churn, I will still take it - I really don't care why applying marketing reduces churn; what I care about is being able to experimentally confirm that it does, and by how much.
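To make that concrete, here is a minimal sketch of the kind of roll-up I mean: connects and disconnects collapsed into a single net-adds number per dose group before any decomposition. The group names and all the figures are made up purely for illustration, not results from the actual test.

```python
# Hypothetical per-group totals over the in-market period.
# All numbers are illustrative placeholders, not real test results.
results = {
    "full_dose":    {"connects": 1200, "disconnects": 950, "base": 50000},
    "half_dose":    {"connects": 1050, "disconnects": 900, "base": 50000},
    "quarter_dose": {"connects":  980, "disconnects": 880, "base": 50000},
}

for group, r in results.items():
    net_adds = r["connects"] - r["disconnects"]   # the "big picture" number
    growth_pct = 100.0 * net_adds / r["base"]     # growth relative to the starting base
    print(f"{group:>12}: net adds = {net_adds:+d} ({growth_pct:+.2f}% of base)")
```

Only after the net-adds comparison across doses is settled does it make sense to split the difference into connect lift versus churn reduction.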

Now, I should admit that knowing a certain amount of detail may help you chisel out some helpful insight; however, many times it is hard to nip the tendency to evaluate the end result of a program based on that detail. If the bottom-line question about a program is whether it worked (aka paid for itself), then the conclusion should be drawn from the bottom-line, most "big picture" number. In our particular case: after all the connects, disconnects, upgrades, downgrades, and all sorts of other moves, what difference are we left with, and for how long? The "what" comes first. The "how long" comes second, and let's not kid ourselves - that "how long" is usually not the lifetime value. LTV and its [mis]use for campaign evaluation is a totally different topic, which I hope to write about pretty soon.
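For what it's worth, the "did it pay for itself" check can be a back-of-the-envelope calculation like the one below. Every figure in it is an assumption I made up for the sketch (per-subscriber margin, how many months the lift persists, campaign cost); the point is only that the verdict comes from the net difference and its duration, not from the underlying detail.

```python
# Back-of-the-envelope payback check for one dose group.
# All figures are hypothetical placeholders.
net_adds = 250            # incremental subscribers attributable to the dose
monthly_margin = 30.0     # assumed margin per subscriber per month
months_retained = 6       # assumed persistence of the lift -- deliberately NOT lifetime value
campaign_cost = 40000.0   # assumed cost of delivering the marketing dose

incremental_margin = net_adds * monthly_margin * months_retained
print(f"Incremental margin: {incremental_margin:,.0f}")
print(f"Campaign cost:      {campaign_cost:,.0f}")
print("Paid for itself" if incremental_margin > campaign_cost else "Did not pay for itself")
```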
