In recent weeks I've been reminded how important it is to know your records. I've heard this described in many ways, but every version comes down to the same question: what is the unit of analysis, i.e., what does each record represent?
For example, does each record represent a customer? If so, over their entire history or over a time period of interest? In web analytics, the time period of interest may be a single session, in which case an individual customer may appear in the modeling data multiple times, with each visit or session treated as an independent event.
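If you're not sure which it is, counting rows per key settles it quickly. Here's a minimal sketch in pandas (the data and column names like customerID and sessionID are hypothetical):

```python
import pandas as pd

# Hypothetical web analytics extract
visits = pd.DataFrame({
    "customerID": [101, 101, 102, 103, 103, 103],
    "sessionID":  ["a1", "a2", "b1", "c1", "c2", "c3"],
})

# One row per customer, or one per customer/session? Count and see.
rows_per_customer = visits["customerID"].value_counts()
print(rows_per_customer.max())  # 3 -- customers repeat, so this is session-level data
```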
Where this especially matters is when disparate data sources are combined. If you join a table of customerID/Session data to a table where each record represents one customerID, there's no problem: the join is many-to-one. But if the second table holds customerID/store visit data, you get a many-to-many join, and the result is a big mess: every session for a customer matches every one of that customer's store visits.
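To make the blow-up concrete, here's a small sketch (all table and column names invented) showing how a many-to-many join multiplies rows, and how pandas can be told to fail fast instead:

```python
import pandas as pd

# One record per customerID/web session
web = pd.DataFrame({
    "customerID": [101, 101, 102],
    "sessionID": ["a1", "a2", "b1"],
})

# One record per customerID/store visit -- NOT one per customer
store = pd.DataFrame({
    "customerID": [101, 101, 101, 102],
    "visitID": ["v1", "v2", "v3", "v4"],
})

# Many-to-many: customer 101's 2 sessions each match all 3 of their visits
joined = web.merge(store, on="customerID")
print(len(web), len(store), len(joined))  # 3 4 7  (2*3 + 1*1)

# If we *expected* store to be one-row-per-customer, declare it and let
# pandas raise a MergeError instead of silently exploding the row count:
# web.merge(store, on="customerID", validate="many_to_one")
```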
This is probably obvious to most readers of this blog. What isn't always obvious is when our assumptions about the data lead to unexpected results. What if we expect the unit of analysis to be customerID/Session but there are duplicates in the data? Or what if we assumed customerID/Session data but it was actually customerID/Day data (where one's customers typically have one session per day, but could have a dozen)?
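One way to catch that second kind of surprise is to profile how many records each candidate key really has. A sketch under the same hypothetical schema: if the grain were truly customerID/Day, the distribution below would be all 1s.

```python
import pandas as pd

df = pd.DataFrame({
    "customerID": [101, 101, 101, 102],
    "date": ["2024-01-05", "2024-01-05", "2024-01-06", "2024-01-05"],
    "sessionID": ["a1", "a2", "a3", "b1"],
})

# Records per customerID/date: a tail of 2s, 3s, ... means the data
# is really session-level, not day-level
per_day = df.groupby(["customerID", "date"]).size()
print(per_day.value_counts().sort_index())
```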
The answer: just as we need to perform a data audit to identify potential problems with fields in the data, we need to perform record audits to uncover unexpected record-level anomalies. We've all had those data sources where the DBA swears up and down that there are no dups in the data, but when we group by customerID/Session, we find 1000 dups.
So, both before and after joins, we need to run those group-by operations to find keys with unexpected numbers of matches.
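Here's a sketch of what that audit might look like, again with invented data: compare row counts across the join, then group by the key to name the offenders.

```python
import pandas as pd

left = pd.DataFrame({"customerID": [101, 102, 103], "segment": ["A", "B", "A"]})
right = pd.DataFrame({"customerID": [101, 101, 102], "spend": [50, 75, 20]})

joined = left.merge(right, on="customerID", how="left")

# A left join should never shrink the left table; any growth means
# some key matched more than once
print(len(left), "->", len(joined))  # 3 -> 4

# Group by the join key to find the keys with unexpected match counts
matches = joined.groupby("customerID").size()
print(matches[matches > 1])  # customerID 101 matched twice
```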
In conclusion: know what your records are supposed to represent, and verify, verify, verify. Otherwise, your models (which have no common sense) will exploit these issues in undesirable ways!