Tuesday, October 23, 2012

Data Preparation: Know Your Records!

Data preparation in data mining and predictive analytics (dare I also say Data Science?) rightfully focuses on how the fields in one's data should be represented so that modeling algorithms either will work properly or at least won't be misled by the data. These data preprocessing steps may involve filling missing values, reining in the effects of outliers, transforming fields so they better comply with algorithm assumptions, binning, and much more.

In recent weeks I've been reminded how important it is to know your records. I've heard this described in many ways, four of which are:

  • the unit of analysis
  • the level of aggregation
  • what a record represents
  • unique description of a record


For example, does each record represent a customer? If so, over the customer's entire history or over a time period of interest? In web analytics, the time period of interest may be a single session, in which case an individual customer may appear in the modeling data multiple times, with each visit or session treated as an independent event.

Where this especially matters is when disparate data sources are combined. If one is joining a table of customerID/Session data with another table in which each record represents a single customerID, there's no problem. But if the second table contains customerID/store-visit data, the join on customerID alone will obviously be many-to-many, resulting in a big mess.
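To make that concrete, here's a minimal pandas sketch (the tables and column names are invented for illustration) of how a grain mismatch fans out a join, and how merge's validate= argument can catch the mismatch up front:

```python
import pandas as pd

# Hypothetical tables, invented for illustration.
# One row per customerID/session:
sessions = pd.DataFrame({
    "customerID": [1, 1, 2],
    "sessionID":  ["a", "b", "c"],
})
# We *assume* one row per customerID, but it's really one row per
# customerID/store visit:
visits = pd.DataFrame({
    "customerID": [1, 1, 2],
    "storeID":    [10, 11, 10],
})

# Joining on customerID alone is many-to-many: customer 1 contributes
# 2 sessions x 2 visits = 4 rows instead of the expected 2.
joined = sessions.merge(visits, on="customerID")
print(len(sessions), len(joined))  # 3 vs. 5 -- the join fanned out

# Declaring the grain we assumed makes pandas fail fast instead:
try:
    sessions.merge(visits, on="customerID", validate="many_to_one")
except pd.errors.MergeError as err:
    print("Grain mismatch caught:", err)
```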

This is probably obvious to most readers of this blog. What isn't always obvious is when our assumptions about the data produce unexpected results. What if we expect the unit of analysis to be customerID/Session but there are duplicates in the data? Or what if we had assumed customerID/Session data but it was actually customerID/Day data (where one's customers typically have one session per day, but could have a dozen)?

The answer: just as we need to perform a data audit to identify potential problems with the fields in the data, we need to perform a record audit to uncover unexpected record-level anomalies. We've all had those data sources where the DBA swears up and down that there are no dups in the data, but when we group by customerID/Session, we find 1000 dups.
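A record audit can be as simple as grouping by the key that is supposed to identify a record and looking at the group sizes. A minimal sketch, again with made-up data:

```python
import pandas as pd

# Hypothetical session-level extract; in practice, your real table.
df = pd.DataFrame({
    "customerID": [1, 1, 2, 2],
    "sessionID":  ["a", "b", "c", "c"],   # note the duplicated (2, "c")
})

# Group by the key that is *supposed* to identify a record and
# inspect the group sizes; any size > 1 breaks the assumed grain.
sizes = df.groupby(["customerID", "sessionID"]).size()
print(sizes.max())        # 2 here, so the customerID/Session assumption fails
print(sizes[sizes > 1])   # the offending keys -- (2, "c") in this toy data
```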

So both before and after joins, we need to run those group-by operations to find keys with unexpected numbers of matches.
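Something along these lines (hypothetical tables again), checking match counts before the join and row counts after:

```python
import pandas as pd

# Hypothetical tables for illustration.
left  = pd.DataFrame({"customerID": [1, 2, 3], "x": [10, 20, 30]})
right = pd.DataFrame({"customerID": [1, 1, 2], "y": ["a", "b", "c"]})

# Before the join: how many right-side rows will each key match?
matches = right.groupby("customerID").size()
print(matches[matches > 1])   # customerID 1 will fan out to 2 rows

# After the join: the row count tells the same story.
joined = left.merge(right, on="customerID", how="left")
if len(joined) != len(left):
    print(f"join changed the row count: {len(left)} -> {len(joined)}")
```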

In conclusion: know what your records are supposed to represent, and verify, verify, verify. Otherwise, your models (which have no common sense) will exploit these issues in undesirable ways!
