More time spent on *this* means less time spent on *that*. I suspect that many modelers enjoy the actual modeling part of the job most. It is easy to try "one more" algorithm: Already tried logistic regression and a neural network? Try CART next.

Of course, more time spent on the modeling part of this means less time spent on other things. An important consideration for optimizing model performance, then, is: Which tasks deserve more time, and which less?

Experimenting with modeling algorithms at the end of a project will no doubt produce *some* improvement, and I am not arguing that such efforts be dropped. However, work done earlier in the project establishes an upper limit on model performance. I suggest emphasizing data clean-up (especially missing-value imputation) and creative design of new features (ratios of raw features, etc.) as being much more likely to make the model's job easier and produce better performance.

Consider how difficult it is for a simple 2-input model to discern "healthy" versus "unhealthy" when provided the input variables *height* and *weight* alone. Such a model must establish a dividing line between healthy and unhealthy weights separately for each height. When the analyst instead uses the ratio of weight to height, the problem becomes much simpler. Note that the commonly used BMI (body mass index) is slightly more complicated than this ratio (it divides weight by height squared) and would likely perform even better. Crossing categorical variables is another way to simplify the problem for the model. Though we call the process "machine learning," it is a pragmatic matter to make the job as easy as possible for the machine.
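The height/weight idea can be sketched numerically. In this toy example (every number is invented for illustration, including the 45 kg/m cutoff that defines "unhealthy"), the label depends on the weight-to-height ratio, so no single threshold on either raw input separates the classes, but a single threshold on the engineered ratio does:

```python
# Toy demonstration: a single-threshold classifier on a raw input vs. on an
# engineered ratio feature. All numbers here are synthetic and illustrative.
import random

random.seed(0)

# Synthetic population: height in metres, weight in kg.
data = [(random.uniform(1.5, 2.0), random.uniform(50, 110)) for _ in range(500)]
# Assumed ground-truth rule for this sketch: "unhealthy" when weight/height > 45.
labels = [1 if w / h > 45 else 0 for h, w in data]

def best_threshold_accuracy(values, labels):
    """Accuracy of the best possible single-threshold rule on one feature."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    best = 0.0
    for i in range(n + 1):
        # Predict 0 below the split, 1 at or above it (and the reversed rule).
        correct = sum(1 - y for _, y in pairs[:i]) + sum(y for _, y in pairs[i:])
        best = max(best, correct / n, (n - correct) / n)
    return best

acc_weight = best_threshold_accuracy([w for h, w in data], labels)
acc_ratio  = best_threshold_accuracy([w / h for h, w in data], labels)
print(f"weight alone: {acc_weight:.2f}   weight/height ratio: {acc_ratio:.2f}")
```

On the raw *weight* input the best single cut is wrong for every case whose label flips with height, while on the ratio a single cut is perfect, which is exactly the simplification the ratio feature buys the model.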

The same is true for handling missing values. Simple global substitution of the non-missing mean or median is a start, but think about the spike that creates in the variable's distribution. Doing this across multiple variables creates a number of strange artifacts in the multivariate distribution. Spending the time and energy to fill in those missing values in a smarter way (possibly by building a small model to predict them) cleans up the data dramatically for the downstream modeling process.
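The "small model" approach to imputation can be sketched as follows. Everything here is synthetic (the heights, the assumed linear weight-height relationship, and the 30% missingness rate are all invented), but it shows why a one-variable regression imputer beats a global mean: the mean assigns every missing case the identical value, while the regression exploits a correlated, non-missing variable:

```python
# Sketch: global-mean imputation vs. a tiny least-squares "small model"
# for filling in missing weights from height. All data are synthetic.
import random

random.seed(1)

heights = [random.uniform(1.5, 2.0) for _ in range(200)]
# Assumed relationship for this sketch: weight ~ 40 * height + noise.
weights = [40 * h + random.gauss(0, 3) for h in heights]

# Knock out 30% of the weights at random; keep the truth for scoring.
missing = set(random.sample(range(200), 60))
observed = [(h, w) for i, (h, w) in enumerate(zip(heights, weights))
            if i not in missing]

# Strategy 1: global mean -- every imputed case gets the identical value,
# creating the spike in the marginal distribution described above.
mean_w = sum(w for _, w in observed) / len(observed)

# Strategy 2: least-squares fit of weight on height (the "small model").
n = len(observed)
mh = sum(h for h, _ in observed) / n
mw = sum(w for _, w in observed) / n
slope = (sum((h - mh) * (w - mw) for h, w in observed)
         / sum((h - mh) ** 2 for h, _ in observed))
intercept = mw - slope * mh

# Mean absolute imputation error on the cases whose true value we held out.
err_mean  = sum(abs(weights[i] - mean_w) for i in missing) / len(missing)
err_model = sum(abs(weights[i] - (intercept + slope * heights[i]))
                for i in missing) / len(missing)
print(f"mean abs error -- global mean: {err_mean:.1f} kg, "
      f"regression: {err_model:.1f} kg")
```

Beyond the lower error, the regression-based fill-ins are spread along the fitted line rather than piled on a single value, so the imputed data no longer distorts the joint height-weight distribution the downstream model will see.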
