Thursday, September 13, 2012

Budgeting Time on a Modeling Project

Within the time allotted for any empirical modeling project, the analyst must decide how to allocate time for various aspects of the process.  As is the case with any finite resource, more time spent on this means less time spent on that.  I suspect that many modelers enjoy the actual modeling part of the job most.  It is easy to try "one more" algorithm: Already tried logistic regression and a neural network?  Try CART next.

Of course, more time spent on the modeling part means less time spent on everything else.  An important consideration for optimizing model performance, then, is: Which tasks deserve more time, and which less?

Experimenting with modeling algorithms at the end of a project will no doubt produce some improvements, and I am not arguing that such efforts be dropped.  However, work done earlier in the project establishes an upper limit on model performance.  I suggest emphasizing data clean-up (especially missing value imputation) and creative design of new features (ratios of raw features, etc.), which are much more likely to make the model's job easier and produce better performance.

Consider how difficult it is for a simple 2-input model to discern "healthy" versus "unhealthy" when provided the input variables height and weight alone.  Such a model must establish a dividing line between healthy and unhealthy weights separately for each height.  When the analyst instead uses the ratio of weight to height, the problem becomes much simpler.  Note that the commonly used BMI (body mass index) is slightly more complicated than this ratio, and would likely perform even better.  Crossing categorical variables is another way to simplify the problem for the model.  Though we deal with a process we call "machine learning", it is a pragmatic matter to make the job as easy as possible for the machine.
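To make this concrete, here is a minimal sketch of both ideas in pandas.  The column names and values are invented for illustration; the point is simply that the ratio (or BMI) collapses two raw inputs into one column the model can threshold directly, and that concatenating two categoricals produces a single crossed feature.

```python
import pandas as pd

# Illustrative data; column names and values are hypothetical.
df = pd.DataFrame({
    "height_cm": [160.0, 175.0, 190.0],
    "weight_kg": [55.0, 80.0, 110.0],
    "region":    ["east", "west", "east"],
    "channel":   ["web", "phone", "web"],
})

# Ratio feature: one column the model can split on directly, instead of
# learning a separate weight cutoff for every height.
df["weight_per_height"] = df["weight_kg"] / df["height_cm"]

# BMI divides by height squared (height in meters), which tracks healthy
# weight ranges a bit better than the plain ratio.
df["bmi"] = df["weight_kg"] / (df["height_cm"] / 100.0) ** 2

# Crossing two categorical variables into a single combined feature.
df["region_x_channel"] = df["region"] + "_" + df["channel"]
```

Tree-based models can sometimes approximate such ratios on their own, but handing the model the derived column spends far less of its capacity.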

The same is true for handling missing values.  Simple global substitution using the non-missing mean or median is a start, but think about the spike that creates in the variable's distribution.  Doing this over multiple variables creates a number of strange artifacts in the multivariate distribution.  Spending the time and energy to fill in those missing values in a smarter way (possibly by building a small model) cleans up the data dramatically for the downstream modeling process.
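A small sketch of the contrast, with invented numeric columns: the global median fill puts every imputed row at the same value (the spike), while a small regression model fitted on the complete rows gives each missing entry a value consistent with that row's other attributes.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical data with missing values in one column.
df = pd.DataFrame({
    "income": [40.0, 52.0, np.nan, 61.0, np.nan, 48.0],
    "age":    [25.0, 34.0, 41.0, 45.0, 29.0, 31.0],
    "tenure": [1.0, 6.0, 10.0, 12.0, 3.0, 5.0],
})

# Naive global fill: every missing income becomes the same number,
# creating a spike at the median of the observed values.
median_fill = df["income"].fillna(df["income"].median())

# Smarter fill: a small model predicts income from the complete columns,
# so each imputed value reflects that row's other attributes.
observed = df["income"].notna()
model = LinearRegression().fit(df.loc[observed, ["age", "tenure"]],
                               df.loc[observed, "income"])
model_fill = df["income"].copy()
model_fill[~observed] = model.predict(df.loc[~observed, ["age", "tenure"]])
```

This is the same idea behind scikit-learn's iterative imputation: each incomplete variable is modeled from the others rather than replaced by a single constant.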

Tuesday, September 11, 2012

What do we call what we do?

I've called myself a data miner for about 15 years, and referred to the field I was a part of as Data Mining (DM). Before then, I referred to what I did as "Pattern Recognition", "Machine Learning", "Statistical Modeling", or "Statistical Learning". In recent years, I've called what I do Predictive Analytics (PA) more often and even co-titled my blog with both Data Mining and Predictive Analytics. That stated, I don't have a good noun to go along with PA. A "predictive analytist" (as if I myself were a "predictor")? A "predictive analyzer"? I often call someone who does PA a Predictive Analytics Professional. But according to Google Trends, data mining is trending down. Pattern recognition? Down. Machine Learning? Flat or slightly up. Only Predictive Analytics and its closely related sibling, Business Analytics, are up. Even the much-touted Data Science has been relatively flat, though it has been spiking in Q4 the past few years.
[Google Trends chart: "data mining", "Data Mining", "Pattern Recognition", "Machine Learning", "Predictive Analytics", "Business Analytics"]
The big winner? Big Data, of course! It has exploded this year. Will that trend continue? It's hard to say, but the wave keeps growing, and it seems that every conference related to analytics or databases is touting "big data".

[Google Trends chart: "Big Data", "Data Science"]

I have no plans of calling what I do "big data" or "data science". The former term will pass when data gets bigger than big data. The latter may or may not stick, but seems to resonate more with theoreticians and leading-edge types than with practitioners. For now, I'll continue to call myself a data miner and what I do predictive analytics or data mining.