Sunday, December 06, 2015

Predictive Modeling Skills: Expect to be Surprised

Excerpted from Chapter 1 of my book Applied Predictive Analytics (Wiley, 2014)
Conventional wisdom says that predictive modelers need an academic background in statistics, mathematics, computer science, or engineering. A degree in one of these fields is best; failing that, one should at least have taken statistics or mathematics courses. Historically, one could not get a degree in predictive analytics, data mining, or machine learning.
This has changed, however, and dozens of universities now offer master’s degrees in predictive analytics. Additionally, there are many variants of analytics degrees, including master’s degrees in data mining, marketing analytics, business analytics, or machine learning. Some programs even include a practicum so that students can learn to apply textbook science to real-world problems.
One reason real-world experience is so critical for predictive modeling is that the science has tremendous limitations. Most real-world problems present data problems never encountered in the textbooks. The ways in which data can go wrong are seemingly endless; building the same customer acquisition models, even within the same domain, requires different approaches to data preparation, missing value imputation, feature creation, and even modeling methods.
However, the principles for solving data problems are not endless; several years of experience building models will prepare modelers at least to recognize when potential problems may arise.
Surveys of top-notch predictive modelers reveal a mixed story, however. While many have a science, statistics, or mathematics background, many do not; plenty come from the social sciences or the humanities. How can this be?
Consider a retail example. The retailer Target was building predictive models to identify likely purchase behavior and to incentivize future behavior with relevant offers. Andrew Pole, a Senior Manager of Media and Database Marketing, described how the company went about building systems of predictive models at the Predictive Analytics World Conference in 2010. Pole described the importance of a combination of domain knowledge, knowledge of predictive modeling, and, most of all, a forensic mindset in successfully modeling what he calls a “guest portrait.”
They developed a model to predict whether a female customer was pregnant. They noticed patterns of purchase behavior, what he called “nesting” behavior. For example, women were purchasing cribs on average 90 days before their due date. Pole also observed that some products were purchased at regular intervals prior to a woman’s due date. The company also observed that if it could acquire these women as purchasers of other products before their babies were born, it could increase customer value significantly; based on their purchase behavior before the birth, these women would continue to purchase from Target afterward.
The key descriptive terms are “observed” and “noticed.” This means the models were not built as black boxes. The analysts asked, “Does this make sense?” and leveraged insights gained from the patterns found in the data to produce better predictive models. The process was undoubtedly iterative; as they “noticed” patterns, they were prompted to consider other patterns they had not explicitly considered before (and that perhaps had not even occurred to them). This forensic mindset of analysts, noticing interesting patterns and making connections between those patterns and how the models could be used, is critical to successful modeling. It is rare that a predictive model can be fully defined before a project begins, or that modelers can anticipate all of the most important patterns the model will find. So we shouldn't be surprised that we will be surprised, or, put another way, we should expect to be surprised.

This kind of mindset is not learned in a university program; it is part of the personality of the individual. Good predictive modelers need to have a forensic mindset and intellectual curiosity, whether or not they understand the mathematics enough to derive the equations for linear regression.
(This post first appeared in the Predictive Analytics Times)

Friday, July 17, 2015

Data Mining's Forgotten Step-Children

Depending on whose definition one reads, the list of activities that make up data mining will vary, but the first two items are always the same...


Number 1: Prediction

The most common data mining function, by far, is prediction (or, more esoterically, supervised learning), which is sometimes listed twice depending on the type of variable being predicted: classification (when the target is categorical) vs. regression (when the target is numerical). Predictive models learned by machines from historical examples dominate almost any measure of data mining activity: time, money, technical papers published, software packages, and so on. The hyperbole of marketers and the fears of data mining critics are also most often associated with prediction.
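To make the classification vs. regression distinction concrete, here is a minimal sketch (using scikit-learn, with invented toy data; any predictive modeling library would do). The same historical examples serve both tasks; only the target variable changes.

```python
# Minimal illustration of the two flavors of prediction (invented toy data).
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

# Historical examples: two numeric inputs per customer (e.g., age, prior spend).
X = [[25, 1200], [47, 300], [33, 2500], [61, 150], [29, 900], [52, 4000]]

# Classification: the target is categorical (did the customer respond?).
y_class = ["yes", "no", "yes", "no", "no", "yes"]
clf = DecisionTreeClassifier(max_depth=2).fit(X, y_class)
print(clf.predict([[40, 1000]]))  # predicted class label

# Regression: the target is numerical (next-year spend).
y_spend = [1300.0, 250.0, 2700.0, 100.0, 850.0, 4100.0]
reg = DecisionTreeRegressor(max_depth=2).fit(X, y_spend)
print(reg.predict([[40, 1000]]))  # predicted numeric value
```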


Number 2: Clustering

The second most common data mining function in practice is clustering (sometimes known by the alias unsupervised learning). Gathering things into "natural" groupings has a long history in some fields (cladistics in biology, for instance), and clustering's "no right or wrong answer" quality will likely cement its continuing spot in second place. Despite being second banana to prediction, clustering enjoys widespread application and is well understood even in non-technical circles. What marketer doesn't like a good segmentation?
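As a minimal segmentation sketch (again with scikit-learn and invented customer data), note that there is no target variable at all; the algorithm simply groups similar customers:

```python
# Minimal clustering sketch: segment customers with no target variable.
from sklearn.cluster import KMeans

# Each row: [annual spend, visits per year] for one (invented) customer.
customers = [[120, 2], [150, 3], [2400, 40], [2600, 38], [900, 15], [1000, 17]]

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)           # segment assignment for each customer
print(kmeans.cluster_centers_)  # the "average" customer in each segment
```

Whether three segments is the "right" number is exactly the kind of question with no right or wrong answer.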


"... and all the rest!"

What else is in the data mining toolbox? Definitions vary, but the next two most commonly mentioned tasks are anomaly detection and association rule discovery. Other tasks have been included, such as data visualization, though that field dates back well over a hundred years and clearly enjoys a healthy existence outside of the data mining field.

Anomaly detection (a superset of statistical outlier detection) searches for observations which violate patterns in data. Generally, these patterns are discovered (explicitly or not) using prediction or clustering. Because a wide array of prediction or clustering techniques might be applied, the patterns found within a single data set will vary, and so will the observations flagged as anomalous. This leaves anomaly detection somewhat in the company of clustering in the sense of having "no right or wrong answers". Still, anomaly detection can be immensely useful, with two common applications being fraud detection and data cleansing. This author has used a simple anomaly detection process to help find errors in predictive model implementation code.
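As a minimal sketch of the simplest special case, statistical outlier detection, the following flags any value more than three standard deviations from the mean (synthetic data; the 3-sigma cutoff is an arbitrary choice, and real applications usually call for something more robust):

```python
# Minimal statistical-outlier sketch: flag values far from the mean (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
# Thirty ordinary daily transaction counts, plus one injected anomaly of 480.
daily_counts = np.append(rng.normal(loc=100, scale=5, size=30), 480.0)

z = (daily_counts - daily_counts.mean()) / daily_counts.std()
print(np.where(np.abs(z) > 3)[0])  # flags index 30, the injected 480
```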

Association rule discovery attempts to identify groups of data items that tend to occur together. The classic example is individual items of merchandise in a retail setting (market basket analysis): each purchase represents an association of a variety of distinct items with one another. After enough purchases, relationships among items can be inferred, such as the frequent purchase of coffee with sugar. Relationships among people, as evidenced by instances of telephone or electronic contact, have also been explored, both for marketing purposes and in law enforcement.
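A tiny market-basket sketch (with invented transactions) shows the two standard rule measures, support and confidence, for the rule "coffee implies sugar":

```python
# Minimal market-basket sketch: support and confidence for "coffee -> sugar".
transactions = [
    {"coffee", "sugar", "milk"},
    {"coffee", "sugar"},
    {"bread", "butter"},
    {"coffee", "milk"},
    {"coffee", "sugar", "bread"},
]

n = len(transactions)
both = sum(1 for t in transactions if {"coffee", "sugar"} <= t)
coffee = sum(1 for t in transactions if "coffee" in t)

support = both / n          # fraction of all baskets containing both items
confidence = both / coffee  # of the coffee baskets, the share that also had sugar
print(f"support={support:.2f}, confidence={confidence:.2f}")  # 0.60, 0.75
```

Algorithms such as Apriori do essentially this, but efficiently, across thousands of items and millions of baskets.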


Further Reading

Neither anomaly detection nor association rule discovery receives nearly the press that the first two members of the data mining club do, but it is worth learning something about them. Some problems fall more naturally into their purview. To get started with these techniques, the standard references will do, such as Witten and Frank, or Han and Kamber. Also consider material on outliers in the traditional statistical literature.