In the first post, I commented on the quote
“It’s like an arms race to hire statisticians nowadays,” said Andreas Weigend, the former chief scientist at Amazon.com. “Mathematicians are suddenly sexy.”

Comments on this can be seen in Part I here.
In this post, the next portion of the article that I found fascinating can be summarized by the section that says:
Habits aren’t destiny — they can be ignored, changed or replaced. But it’s also true that once the loop is established and a habit emerges, your brain stops fully participating in decision-making. So unless you deliberately fight a habit — unless you find new cues and rewards — the old pattern will unfold automatically.

Habits are what predictive models are all about. Or, putting it as a question: is customer behavior predictable based on their past behavior? The Frawley, Piatetsky-Shapiro, and Matheus definition of knowledge discovery in databases (KDD) is as follows:
Knowledge discovery is the nontrivial extraction of implicit, previously unknown, and potentially useful information from data. (A PDF of the paper can be found here.)

This quote has often been applied to data mining and predictive analytics, and rightfully so. We believe there are patterns hidden in the data and want to characterize those patterns with predictive models. Predictive models usually work best when individuals don't even realize what they are doing, so we can capture their behavior based solely on what they want to do rather than on behavior influenced by how they want to be perceived, which is exactly how the Target models were built.
So what does this have to do with the NYTimes quote? The "habits" that "unfold automatically" as described in the article were fascinating precisely because predictive models rely on habits; we wish to make the connection between past behavior and an expected result, as captured in data that are consistent and repeatable (that is, habitual!). These expected results could be "is likely to respond to a mailing", "is likely to purchase a product online", "is likely to commit fraud", or, in the case of the article, "is likely to be pregnant". Duhigg (and presumably Pole describing it to Duhigg) characterizes this very well. The behavior Target measured was shoppers' purchasing behavior when they were to give birth some weeks or months in the future, and nothing more. These patterns had to apply broadly to thousands of "Guest IDs" for the models to work effectively.
The description of what Andy Pole did for Target is an excellent summary of what predictive modelers can and should do. The approach included domain knowledge, an understanding of what predictive models can do, and, most of all, a forensic mindset. I quote again from the article:
"Target has a baby-shower registry, and Pole started there, observing how shopping habits changed as a woman approached her due date, which women on the registry had willingly disclosed. He ran test after test, analyzing the data, and before long some useful patterns emerged. Lotions, for example. Lots of people buy lotion, but one of Pole’s colleagues noticed that women on the baby registry were buying larger quantities of unscented lotion around the beginning of their second trimester. Another analyst noted that sometime in the first 20 weeks, pregnant women loaded up on supplements like calcium, magnesium and zinc. Many shoppers purchase soap and cotton balls, but when someone suddenly starts buying lots of scent-free soap and extra-big bags of cotton balls, in addition to hand sanitizers and washcloths, it signals they could be getting close to their delivery date." (emphases mine)To me, the key descriptive terms in the quote from the article are "observed", "noticed" and "noted". This means the models were not built as black boxes; the analysts asked "does this make sense?" and leveraged insights gained from the patterns found in the data to produce better predictive models. It undoubtedly was iterative; as they "noticed" patterns, they were prompted to consider other patterns they had not explicitly considered before (and maybe had not even occurred to them before). But it was these patterns that turned out to be the difference-makers in predicting pregnancy.
So after all my preamble here, the key take-home messages from the article are:
1) understand the data,
2) understand why the models are focusing on particular input patterns (see the sketch after this list),
3) ask lots of questions (why does the model like these fields best? why not these other fields?),
4) be forensic ("now that's interesting" or "that's odd... I wonder..."),
5) be prepared to iterate (how can we predict better for those customers we don't characterize well?), and
6) be prepared to learn during the modeling process.
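Here is a brief sketch of points 2) and 3) in practice: fit a model, rank the fields it leans on, and treat that ranking as a list of questions rather than answers. The random forest and permutation importance are my choices for illustration only, and the field names in the closing comment are invented.

```python
# Rank which inputs a model leans on, as a prompt for forensic questions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

def rank_model_questions(X: pd.DataFrame, y: pd.Series) -> pd.Series:
    """Fit a forest and return permutation importances, sorted descending."""
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)
    return pd.Series(result.importances_mean,
                     index=X.columns).sort_values(ascending=False)

# The output is fuel for the questions in the list above, e.g.:
# "why does the model like unscented_lotion_qty best? why not coupon_response?"
```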
We have to "notice" patterns in the data and connect them to behavior. This is one reason I like to build multiple models: different algorithms can find different kinds of patterns. Regression is a global predictor (one continuous equation for all data), whereas decision trees and kNN are local estimators.
So we shouldn't be surprised that we will be surprised, or put another way, we should expect to be surprised. The best models I've built contain surprises, and I'm glad they did!
3 comments:
Useful information... I am very happy to read this article. Thanks for giving us this useful information. Fantastic walk-through. I appreciate this post.
One of the reasons what Andy Pole did is excellent is that, as you pointed out, they "noticed" whether the outcome of the data analysis made any business sense. Many of us end up placing too much emphasis on fitting the model to the data. This leads to issues such as over-fitting and ends up producing results far removed from the business.
That's why it's important to validate the outcome instead of simply relying on the mathematics.
You know, sometimes a bit of heuristics produces better business sense than a completely algorithmic approach.
A well-designed statistical model can answer many business questions, but one must be cautious not to extrapolate the model beyond its limits.
Akash--couldn't agree with you more. I just got through building models for a customer where the final solution was not sets of bagged trees alone (as I had intended), but those trees along with a heuristic set of rules I discovered along the way that predict well on a 10% chunk of the population.
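For what it's worth, the shape of that hybrid (not the actual project code, and with invented rule conditions) looks something like this: score everyone with the bagged trees, then let the hand-built rule override the score for the segment it handles well.

```python
# Sketch of a bagged-trees model with a heuristic rule override for one segment.
import numpy as np
import pandas as pd
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

def hybrid_scores(model: BaggingClassifier, X: pd.DataFrame) -> np.ndarray:
    """Heuristic override where the rule fires, model score otherwise."""
    scores = model.predict_proba(X)[:, 1]
    # Hypothetical rule discovered during modeling, covering roughly 10% of records:
    rule_fires = (X["tenure_months"] < 3) & (X["recent_purchases"] >= 2)
    scores[rule_fires.to_numpy()] = 0.9  # the rule's observed response rate
    return scores

# model = BaggingClassifier(estimator=DecisionTreeClassifier(max_depth=6),
#                           n_estimators=50).fit(X_train, y_train)
# final_scores = hybrid_scores(model, X_score)
```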
The point is this: the math is our aid, and not our goal (as you point out). The problem with the math is that our objective is never really based on the Gini index, entropy, average squared error, or even AUC. These are metrics we use as surrogates because the math works well with the algorithms.
So, I always do a final assessment of models based on a business objective. For example, an objective may be how many more responders the model identifies than the existing practice does, and how that relates to near-term and long-term dollars. I can't even begin to describe how many times there have been surprises here: models that look great on paper don't necessarily perform well under a business-centric evaluation.
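A hedged sketch of what that business-centric check can look like: at the planned mailing depth, count how many more responders the model finds than the existing practice would, and attach a dollar figure. The mailing depth, baseline response rate, and value per responder below are placeholders, not numbers from any real engagement.

```python
# Compare a model-targeted mailing to the existing practice at the same depth,
# and translate the difference in responders into dollars.
import numpy as np

def incremental_value(y_true: np.ndarray, scores: np.ndarray,
                      mail_fraction: float = 0.10,
                      baseline_rate: float = 0.02,
                      value_per_responder: float = 40.0) -> dict:
    """Incremental responders and dollars from mailing the top-scored customers."""
    n_mailed = int(len(scores) * mail_fraction)
    top = np.argsort(scores)[::-1][:n_mailed]          # highest-scored customers
    model_responders = int(y_true[top].sum())
    baseline_responders = int(round(n_mailed * baseline_rate))
    lift = model_responders - baseline_responders
    return {"mailed": n_mailed,
            "model_responders": model_responders,
            "baseline_responders": baseline_responders,
            "incremental_dollars": lift * value_per_responder}
```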