Wednesday, July 29, 2009

There is nothing more misleading than facts with no context

I read an excellent article the other day about the value of good analysis and the pitfalls of bad analysis. It examines the WSJ's claim that raising taxes on people in the highest income bracket in Maryland caused them to flee to other places within a year. The WSJ piece did not take into account any national trends in wealth and income, or any other measures "the rich" may take to lower their tax base (munis, anyone?), and simply attributed the 50% decline in returns filed in the corresponding bracket to this fictional flight. However, a simple national trend analysis showed that Maryland was sitting close to the middle of the pack in terms of the dynamics of wealthy households.
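For illustration, here is a toy version of that sanity check in Python (the numbers and column names are entirely made up; the point is the comparison itself): put the state's decline next to everyone else's before crying "flight".

```python
# A minimal sketch of the "national context" check: compare one state's
# change in top-bracket returns to the distribution across other states.
# All figures below are invented for illustration.
import pandas as pd

returns_by_state = pd.DataFrame({
    "state": ["MD", "VA", "NY", "CA", "TX"],
    "returns_2007": [3000, 2800, 12000, 15000, 9000],
    "returns_2008": [1600, 1500, 6500, 7800, 5000],
})

returns_by_state["pct_change"] = (
    returns_by_state["returns_2008"] / returns_by_state["returns_2007"] - 1
)

# If the state's decline sits near the median, "tax flight" is a poor
# explanation: the decline is a national trend, not a local exodus.
md_change = returns_by_state.loc[
    returns_by_state["state"] == "MD", "pct_change"
].iloc[0]
print(f"MD change: {md_change:.0%}, "
      f"median across states: {returns_by_state['pct_change'].median():.0%}")
```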

P.S. I put quotes around "the rich" because I believe the whole rich-vs-poor battle is pretty much made up, especially on taxation. We have all heard Warren Buffett advocating higher taxes for "the rich", and $40K-a-year Joe the Plumber being outraged by the out-of-control taxes Democrats were allegedly ready to write into law. My take: saying things like that makes the Joes feel like they are one of the rich. Much like buying a Louis Vuitton bag on credit - not a particularly rational economic behavior, but what the heck; there was a time when I worked for a company that successfully translated this behavior into a nice chunk of change, and studying it made for a fascinating subject of research. But I digress.

So, do you think our CSRs are hot?

"If we knew what it was we were doing, it would not be called research, would it?" -- Albert Einstein

Let's talk about customer satisfaction research, and in particular, drivers of customer satisfaction. This is probably one of the most sacred grounds of satisfaction analysis, and every company that offers customer sat research is usually pitching some sort of proprietary procedure or knowledge for getting those key drivers out of the survey data. The same story applies to NPS, loyalty, and pretty much any number of more expensive measures of the same thing.

Usually, these procedures are based on some sort of correlation between overall satisfaction (NPS, loyalty) and the drivers. Some use fancier math, some use simpler math, but the idea is pretty much the same. To prove that the idea works, your MR vendor will create a bunch of pretty charts, show you statistically significant p-values, and whatnot. Now, every time I see a chart that is a bit too perfect, I get a nagging feeling of suspicion - is it really happening, or are we dealing with a self-fulfilling prophecy again?

Fortunately, one day I got my answer. As I mentioned before, at some point in my career I was in charge of a customer sat survey, and it had one of those drivers that makes you sigh - satisfaction with store hours. Whenever I ran the correlation between overall sat and the drivers, I would always see that nice positive correlation with store hours. Must be one of those important factors, right? Well, it turns out the correlation held even for the flagship stores, which were open 24 hours. I don't know how one can be dissatisfied with the hours of a 24-hour store, but apparently, if you piss the customers off enough, they will be. They will also think your signage colors are hideous, your parking spaces are too small, and your CSRs are ugly. Obviously, it is not the store hours that drive overall satisfaction, but the other way around. If anything, those bogus questions are going to correlate with overall sat very well, as they don't reflect anything but the overall satisfaction itself.
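If you want to see this halo effect with your own eyes, here is a toy simulation (all numbers invented): one latent "overall attitude" drives every item rating, including hours at a 24-hour store, and the correlations still look like key drivers.

```python
# A toy simulation of the halo effect: a latent overall attitude drives
# every item rating, including a "store hours" rating at a 24-hour store
# where hours cannot possibly be a real driver.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

overall = rng.normal(0, 1, n)                  # latent overall attitude
store_hours = overall + rng.normal(0, 0.7, n)  # pure halo: no real hours effect
signage = overall + rng.normal(0, 0.7, n)      # another content-free "driver"

print(np.corrcoef(overall, store_hours)[0, 1])  # ~0.8, "key driver"!
print(np.corrcoef(overall, signage)[0, 1])      # ~0.8 too: halo, not causation
```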

Now, every time I answer a customer sat survey (yes, I take other companies' surveys - guilty as charged), I always laugh when faced with a million dimensions, half of which I have absolutely no opinion on, except... well, they are pretty good, so I guess I am "satisfied" with the advice and information they give me.

What's the conclusion? I guess the conclusion is that in the context of customer sat, those drivers are not of much help. There are other ways to understand what's important to your customers, and by all means you should employ them in an intelligent manner. Should your satisfaction really grow if you change those signage colors? There is a sure way to find out - change them and see if satisfaction budges, as sketched below. If not, move on to another variable.
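A minimal sketch of that "change it and see" approach, assuming a randomized set of test stores and made-up counts (statsmodels does the two-proportion test):

```python
# Change the signage in a random set of test stores, keep control stores
# as-is, and compare top-box satisfaction rates. Counts are invented.
from statsmodels.stats.proportion import proportions_ztest

top_box = [620, 600]    # top-box responses: test stores, control stores
surveys = [1000, 1000]  # surveys collected in each group

z, p = proportions_ztest(top_box, surveys)
print(f"z = {z:.2f}, p = {p:.3f}")
# If p is large, the new colors did not budge satisfaction;
# move on to the next variable.
```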

Thursday, July 16, 2009

Noteworthy recent HBR articles

"Any man who reads too much and uses his own brain too little falls into lazy habits of thinking." -- Albert Einstein

This is not exactly news, but there was a new article by Thomas Davenport (the author of "Competing on Analytics") in the February 2009 issue of HBR called "How to Design Smart Business Experiments". I actually read it, and I liked it better than his iconic article that was eventually turned into the book. The recent article shows a very practical approach, and it is executive-proof. I made copies and distributed them at work with the obvious goal of educating my co-workers about smart testing.

The July-August 2009 issue of HBR features an interesting article by Dan Ariely, "The End of Rational Economics", where he gives interesting examples of irrational economic behavior (another one I actually read). Not surprisingly, Ariely also has a book on the topic, "Predictably Irrational", and a website with a good amount of information and interviews. Obviously, I am going to recommend checking out the website first.

P.S. I am not a great reader (of books), so unless it is explicitly stated that I read something, you can assume that I skimmed the ideas in reviews and thought they were worthy of notice.

In defense of the big picture

"Confusion of goals and perfection of means seems, in my opinion, to characterize our age" -- Albert Einstein

Someone needs to defend the wisdom of looking at the big picture, so I am going to do just that. How many times do people set out to look at the forest, but start looking at the trees, then at the leaves of the trees, then at the veins on the leaves? The problem is that while leaves and veins may be fascinating, the forest may be shrinking while you are looking at them, maybe even due to logging. Well, hopefully nothing that severe.

My analysis du jour was looking at a very clever and nicely sampled test that I had devised several months ago. The test did not survive the latest iteration of never-ending organizational change and had to be ended prematurely after a few months in the market. I decided to take a closer look at the results anyway - testing but never analyzing or drawing conclusions is one of my pet peeves.

In the test, the target customer universe is randomly split into several groups, and each one is delivered a certain dose of our marketing poison (kidding, it's of course marketing manna): a full dose, a half dose, and a quarter dose. A few months later, I am looking at the results to understand what happened. What we really want to look at first is whether consumer sales grew during that period of time - and by how much and for how long - not the details of how they grew. That's because at the end of the day, if you are not growing your subscriber/product/sales base and getting more money out of it than you are putting in, nothing else matters.

Obviously, the first question I get is how the subscriber base grew - was it an increase in connects, a drop in disconnects, or a combination of the two - because anyone in marketing automatically thinks they only need to care about connects. Plainly speaking, that's wrong. Higher connects usually lead to higher disconnects, as a certain (and actually surprisingly high) percentage of customers will disconnect within the first month or two of connecting. Those disconnects are a direct result of the connects you are driving, and it would be incorrect to count all the connects in. On the other side, if a higher marketing dose results in lower churn, I will still take it - I really don't care why applying marketing reduces churn; what I care about is being able to experimentally confirm that it does, and by how much.
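To make the point concrete, here is a minimal sketch of the big-picture read-out, with hypothetical group-level numbers: net adds first, dissection later.

```python
# The headline read-out of a dose test: net growth per group, before
# dissecting connects vs. disconnects. All numbers are hypothetical.
import pandas as pd

results = pd.DataFrame({
    "group": ["full dose", "half dose", "quarter dose"],
    "starting_subs": [100_000, 100_000, 100_000],
    "connects": [5_200, 4_600, 4_100],
    "disconnects": [3_900, 3_700, 3_600],
})

# Net adds, not connects alone: early disconnects are a direct
# byproduct of the connects the marketing dose is driving.
results["net_adds"] = results["connects"] - results["disconnects"]
results["growth_pct"] = results["net_adds"] / results["starting_subs"]
print(results[["group", "net_adds", "growth_pct"]])
```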

Now, I should admit that knowing a certain amount of detail may help you chisel out some helpful insight; however, many times it is hard to nip the tendency to evaluate the end result of a program based on that detail. If the bottom-line question about a program is whether it worked (aka paid for itself), then the conclusion should be drawn from the bottom-line, most "big picture" number: in our particular case, after all the connects, disconnects, upgrades, downgrades, and all sorts of other moves, what difference we are left with, and for how long. The "what" comes first. The "for how long" comes second, and let's not kid ourselves, that "how long" is usually not the lifetime value. LTV and its [mis]use for campaign evaluation is a totally different topic, which I hope to write about pretty soon.

Tuesday, July 14, 2009

Score!

"Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius -- and a lot of courage -- to move in the opposite direction." -- Albert Einstein

We have all heard of NPS, the Net Promoter Score. It is supposed to be the holy grail of loyalty, and a great alternative to your regular old tired Satisfaction Score. Maybe.

I guess it is time to share my experience and call it as I have seen it.

Numero uno - the absolute NPS score. I have worked in marketing analytics in a couple of consumer industries, retail and telecommunications. If you pulled any research done... ever, you would see that retail generally has high satisfaction scores, and telecom - not so much. In fact, retail often gets over 60% in the top two boxes (on a 10-point scale) or the top box (on a 5-point scale). Now, if you ever look at cross-shopping patterns in retail, you will see how fickle the customers are. I worked for a retail company where around 90% of its best customers cross-shopped with competitors. Yet it had an NPS of well over 50. I have also worked for a telecom company that had an NPS way down at the bottom of the scale. I should admit that it had not always treated customers perfectly; however, customers were surprisingly loyal to its services. So, yes, it's all relative.

Numero dos - NPS and other scores. As part of my former job I was in charge of the company's satisfaction survey. I hated the things I had to do to maintain it, but I loved the results, especially given a sample size well into the hundreds of thousands. Obviously, at some point it came to measuring NPS, and as a sucker for a general understanding of the nature of things, I did test that NPS against overall satisfaction (top box on a 5-point scale). It does not matter how you cut it - by weeks, by months, by stores, by regions... the NPS was 97% correlated with the Top Box. I do not know how much groundbreaking insight was packed into the remaining 3% of the information (OK, it's more like 6% in terms of unexplained variance), but I highly doubt it is going to change my view of what's going on with the customers.
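Here is a toy reconstruction of that exercise (simulated ratings, not the original data, and assuming the 5-point answer tracks the 0-10 answer): compute weekly NPS and weekly top box from the same respondents and correlate the two series.

```python
# Simulate weekly survey batches where NPS (0-10 scale) and top box
# (5-point scale) are computed from the same underlying respondents,
# then correlate the two weekly series.
import numpy as np

rng = np.random.default_rng(0)
weeks = 104
nps_scores, top_box_scores = [], []

for _ in range(weeks):
    mood = rng.normal(0, 0.3)  # week-to-week drift in sentiment
    r10 = np.clip(np.round(rng.normal(7.5 + mood, 2, 2000)), 0, 10)
    r5 = np.clip(np.round((r10 / 10) * 4 + 1), 1, 5)  # same folks on 5-pt
    nps_scores.append((r10 >= 9).mean() - (r10 <= 6).mean())
    top_box_scores.append((r5 == 5).mean())

print(np.corrcoef(nps_scores, top_box_scores)[0, 1])  # close to 1
```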

So, my conclusion, basically, is that NPS is the same as the good ole Satisfaction Score, freshly repackaged and, obviously, more expensive.

P.S. Next time let's talk about "drivers" of satisfaction.

Customers who switched from [...], saved...

"We can't solve problems by using the same kind of thinking we used when we created them" -- Albert Einstein

We have all seen them, the ads that promise to save you money on your car insurance. Geico does it, Allstate does it, 21st Century does it. We look at them and think that maybe a better deal is around the corner. OK, maybe we are not that gullible, so let's dig into the numbers.

The claim is that customers who switched from [another insurance company] saved, on average, $X, and those who switched from [the other insurance company] saved even more, $Y. Sounds like a good deal; sounds like everybody is saving. But is it really everybody? Those who switched, by definition, must have gotten a lower rate with the new company; otherwise, they... would not have switched. This is a typical case of not looking at the total picture, but using a qualifying condition to isolate the part of the picture we will be looking at. In this particular case, it is caused by self-selection, since the customers self-select to switch.

So, basically, if we have an insurance company A that charges 90% of the insurance-seeking population more than insurance company B does, but charges the remaining 10% less than B does, then on average A will be the higher-priced alternative. However, A will still be able to make a low-price claim against company B, because, yes, it is correct: when people in the 10% group switch from B to A, they do indeed save money.
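A quick simulation makes the arithmetic obvious (all prices invented): A is pricier on average, yet every switcher to A genuinely saves.

```python
# Toy model of the switcher math: company A is pricier for 90% of the
# market and cheaper for 10%, yet "customers who switched to A saved
# money" is literally true, because only the 10% ever switch.
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

price_b = rng.normal(1000, 150, n)
cheaper_at_a = rng.random(n) < 0.10  # the 10% for whom A is the better deal
price_a = np.where(cheaper_at_a, price_b * 0.8, price_b * 1.2)

print(f"A's average premium over B: ${np.mean(price_a - price_b):.0f}")
switcher_savings = (price_b - price_a)[price_a < price_b]
print(f"Average savings of switchers to A: ${switcher_savings.mean():.0f}")
```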

Now, this was kind of a silly case. We all know advertisers will say anything and everything to get prospects interested. However, this type of self-fulfilling prophecy is used every day in the workplace to justify programs - and justify them with what appears to an untrained eye to be solid quantitative analysis. The most upsetting case of selection bias I have seen was a program where customers "competed" for a prize from a company. Those who increased their purchases the most during a qualifying period of time were declared winners, and then their purchasing lift during that same period was used to justify the program. Basically, it's like judging a race on speed, then comparing the winners to everyone else and announcing that they... had the highest speed. Obviously, the program has always "delivered".
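Here is a toy version of that contest, with purchases that fluctuate at pure random and zero program effect: select the winners by lift, and their lift is impressive by construction.

```python
# Purchases fluctuate at random, with no program effect at all. Pick the
# "winners" (largest lift during the qualifying period), and their average
# lift is large by construction, not because the program worked.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

baseline = rng.normal(100, 10, n)             # prior-period purchases
qualifying = baseline + rng.normal(0, 20, n)  # qualifying period: pure noise
lift = qualifying - baseline

winners = lift >= np.quantile(lift, 0.95)     # top 5% of lift "win"
print(f"Average lift overall: {lift.mean():.1f}")            # ~0, no effect
print(f"Average lift of winners: {lift[winners].mean():.1f}")  # large, positive
# Judging the race on speed, then marveling that the winners were fastest.
```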

Let's get it started!

"It's not that I'm so smart, it's just that I stay with problems longer." -- Albert Einstein.

I decided to start this blog so that I can write down and organize my thoughts on analytics in general and marketing analytics in particular. To take interesting problems and observations, and not barge right past them, but stay with them longer, try to understand what they mean, what they are trying to tell us about the nature of things. I have worked in the area of marketing analytics for about eight years, and to be honest, I really like it. Maybe I will even get a few visitors to kick these thoughts around with and have some fun.

Use of quotes. It was my decision to draw on the bits of wisdom of other analytical people, but since I don't read much, I decided to pretty much stick with Einstein. Not in the hope that the glory of his great mind rubs off on my blog and people think I must be smart, but basically because he was so prolific that one quick search turned up everything I needed. Plus, it looks like I pretty much agree with him on... everything. Kind of scary, actually.

Well, wish me good luck!