Every so often, an article or survey appears stressing the importance of data preparation as an early step in the data mining process. One often-overlooked part of data preparation is clearly defining the problem and, in particular, the target variable. Too often, only a nominal definition of the target variable is given.
As an example, a common problem in banking is to predict future balances of a loan customer. The current balance is a matter of record, and a host of explanatory variables (previous payment history, delinquency history, etc.) are available for model construction. It is easy to move forward with such a project without considering carefully whether the raw target variable is the best choice for the model to approximate. It may be, for instance, that the logarithm of balance is easier to predict, due to a strongly skewed distribution. Or it might be easier to predict the ratio of the future balance to the current balance. These two alternatives yield models whose output is easily transformed back into the original terms (by exponentiation or by multiplication by the current balance, respectively). More sophisticated targets may be designed to stabilize other aspects of the behavior being studied, and certain other loose ends can be tidied up at the same time, for instance when the minimum or maximum target value is constrained.
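As a rough illustration, the following sketch (in Python, using scikit-learn's TransformedTargetRegressor; the simulated data and the choice of a linear model are purely hypothetical, for illustration only) fits against the logarithm of balance and transforms predictions back into the original units automatically:

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.compose import TransformedTargetRegressor

    # Hypothetical stand-ins for the explanatory variables and the raw,
    # strongly skewed future-balance target described above.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    y = np.exp(X @ np.array([0.5, 0.2, -0.1]) + rng.normal(scale=0.3, size=500))

    # Fit against log(1 + balance); predictions are mapped back into the
    # original units by the inverse function at prediction time.
    model = TransformedTargetRegressor(
        regressor=LinearRegression(),
        func=np.log1p,         # transform the skewed target
        inverse_func=np.expm1  # undo the transform when predicting
    )
    model.fit(X, y)
    predicted_balance = model.predict(X)  # already in the original units

The ratio target works the same way: fit to future balance divided by current balance, then multiply predictions by the current balance to recover the original terms.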
When considering various possible targets, it helps to keep in mind that the idea is to stabilize behavior, so that as many observations as possible align in the solution space. If retail sales exhibit regular variation, such as by day of the week or month of the year, that variation is a good candidate for normalization: we might model retail sales divided by the average for that day of the week, or divided by a trailing average for that day of the week over the past 4 weeks (see the sketch below). Some problems lend themselves to decomposition, such as modeling profit by predicting revenue and cost separately. One challenge in combining multiple models this way is that their (presumably independent) errors will compound.
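A minimal sketch of such a normalization, written in Python with pandas (the simulated sales series and the 4-week window are assumptions for illustration), divides each day's sales by the trailing average of the same weekday over the previous four weeks:

    import numpy as np
    import pandas as pd

    # Hypothetical daily sales with a weekday effect; purely simulated data.
    dates = pd.date_range("2012-01-01", periods=120, freq="D")
    weekday_lift = np.where(dates.dayofweek < 5, 20.0, 0.0)
    sales = pd.Series(
        100.0 + weekday_lift + np.random.default_rng(1).normal(scale=5, size=120),
        index=dates, name="sales"
    )

    # Trailing average of the same day of the week over the previous 4 weeks:
    # within each weekday group, shift by one position (one week) so the
    # current observation is excluded, then take a 4-observation rolling mean.
    trailing = sales.groupby(sales.index.dayofweek).transform(
        lambda s: s.shift(1).rolling(window=4).mean()
    )

    # Normalized target: ratio of today's sales to its trailing weekday average.
    # A model fit to this ratio is converted back to raw sales by multiplying
    # its predictions by the same trailing average.
    normalized_target = sales / trailing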
Experience indicates that it is difficult in practice to tell which technique will work best in a given situation without experimenting, but the potential performance gains make this sort of effort well worthwhile.
Friday, August 31, 2012
Wednesday, August 08, 2012
The Data is Free and Computing is Cheap, but Imagination is Dear
Recently published research, What Makes Paris Look like Paris?, attempts to classify images of street scenes according to their city of origin. This is a fairly typical supervised machine learning project, but the source of the data is of interest. The authors obtained a large number of Google Street View images, along with the names of the cities they came from. Increasingly, large volumes of interesting data are being made available via the Internet, free of charge or at little cost. Indeed, I published an article about classifying individual pixels within images as "foliage" or "not foliage", using images I obtained from on-line searches for things like "grass", "leaves", "forest" and so forth.
A bewildering array of data has been put on the Internet. Much of it is what you'd expect: financial quotes, government statistics, weather measurements and other large tables of numeric information. However, there is a great deal of other information: 24/7 Web cam feeds which stay live for years, news reports, social media spew and so on. Additionally, much of the data for which people once charged serious bucks is now free or rather inexpensive. Already, many firms augment the data they've paid for with free databases on the Web. An enormous opportunity is opening up for creative data miners to consume and profit from large, often non-traditional, non-numeric data which are freely available to all, but (so far) creatively analyzed by few.