Every so often, an article or survey appears stressing the importance of data preparation as an early step in the data mining process. One often-overlooked part of data preparation is clearly defining the problem and, in particular, the target variable. Too often, the target variable receives only a nominal definition: the obvious raw quantity is accepted without further scrutiny.
As an example, a common problem in banking is predicting the future balance of a loan customer. The current balance is a matter of record, and a host of explanatory variables (previous payment history, delinquency history, etc.) are available for model construction. It is easy to move forward with such a project without considering carefully whether the raw target variable is the best choice for the model to approximate. It may be, for instance, that the logarithm of the balance is easier to predict, owing to a strongly skewed distribution. Or it might be easier to predict the ratio of the future balance to the current balance. Both alternatives yield models whose outputs are easily transformed back into the original terms (by exponentiation or by multiplication by the current balance, respectively). More sophisticated targets may be designed to stabilize other aspects of the behavior being studied, and they can also tidy up loose ends, for instance when the target is constrained to known minimum or maximum values.
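To make the two alternatives concrete, here is a minimal sketch in Python using scikit-learn and synthetic data; the names balance_current and balance_future are hypothetical stand-ins for the bank's actual fields, not anything from a real system:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic stand-in data: one explanatory variable and a strongly
# right-skewed balance (hypothetical; a real model would use payment
# history, delinquency history, etc.).
n = 1000
X = rng.normal(size=(n, 1))
balance_current = np.exp(rng.normal(8.0, 1.0, size=n))
balance_future = balance_current * np.exp(0.3 * X[:, 0] + rng.normal(0.0, 0.2, size=n))

# Alternative 1: predict log(balance), then exponentiate to recover dollars.
log_model = LinearRegression().fit(X, np.log(balance_future))
pred_from_log = np.exp(log_model.predict(X))

# Alternative 2: predict the ratio of future to current balance, then
# multiply by the current balance to recover dollars.
ratio_model = LinearRegression().fit(X, balance_future / balance_current)
pred_from_ratio = ratio_model.predict(X) * balance_current
```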
When considering various possible targets, it helps to keep in mind that the idea is to stabilize behavior, so that as many observations as possible align in the solution space. If retail sales exhibit regular variation, such as by day of the week or month of the year, that variation is a good candidate for normalization: possibly we want to model retail sales divided by the overall average for that day of the week, or divided by a trailing average for that day of the week over the past 4 weeks (see the sketch below). Some problems lend themselves to decomposition, such as modeling profit by predicting revenue and cost separately. One challenge of chaining multiple models this way is that their (presumably independent) errors will compound.
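A sketch of the two day-of-week normalizations just described, assuming daily data in a pandas Series (the sales series here is synthetic):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Synthetic daily sales with a built-in day-of-week pattern (Mon..Sun).
idx = pd.date_range("2012-01-01", periods=365, freq="D")
dow_effect = np.array([0.7, 0.9, 1.0, 1.0, 1.1, 1.6, 1.4])
sales = pd.Series(
    1000 * dow_effect[idx.dayofweek] * rng.lognormal(0.0, 0.1, len(idx)),
    index=idx,
)

# Candidate target 1: sales divided by the overall average for that weekday.
dow_mean = sales.groupby(sales.index.dayofweek).transform("mean")
target_by_dow = sales / dow_mean

# Candidate target 2: sales divided by a trailing average of the same
# weekday over the past 4 weeks (lags of 7, 14, 21, and 28 days keep the
# normalizer strictly in the past, so it is available at prediction time).
trailing_4wk = (sales.shift(7) + sales.shift(14) + sales.shift(21) + sales.shift(28)) / 4
target_by_trailing = sales / trailing_4wk
```

Either way, the model's predictions are multiplied back by the same normalizer to return to raw sales.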
Experience indicates that it is difficult in practice to tell which technique will work best in a given situation without experimenting, but the potential performance gains make the effort worthwhile.
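One way to run such an experiment, continuing the hypothetical balance example from the first sketch (and reusing its X, balance_current, balance_future, and imports): fit one model per candidate target, map each prediction back to the original dollar scale, and compare on held-out data.

```python
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

X_tr, X_te, cur_tr, cur_te, fut_tr, fut_te = train_test_split(
    X, balance_current, balance_future, random_state=0)

# Each candidate: (training target, function mapping predictions back
# to dollars) so all candidates are scored on the same original scale.
candidates = {
    "raw":   (fut_tr,          lambda p: p),
    "log":   (np.log(fut_tr),  lambda p: np.exp(p)),
    "ratio": (fut_tr / cur_tr, lambda p: p * cur_te),
}
for name, (y_tr, back) in candidates.items():
    pred = back(LinearRegression().fit(X_tr, y_tr).predict(X_te))
    print(f"{name:>5}: MAE = {mean_absolute_error(fut_te, pred):,.0f}")
```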
Friday, August 31, 2012
1 comment:
Feeling dumb... I just noticed that Dean wrote on this very subject in his Apr-05-2012 posting, "Why Defining the Target Variable in Predictive Analytics is Critical" (http://abbottanalytics.blogspot.com/2012/04/why-defining-target-variable-in.html).