The output of analytical tools can be voluminous and complicated. Making sense of it sometimes requires, well, analysis. Following are two examples of applying our tools to their own output.
Model Deployment Verification
From time to time, I have deployed predictive models on a vertical application in the finance industry which is not exactly "user friendly". I have virtually no access to the actual deployment and execution processes, and am largely limited to examining the production-mode output, as implemented on the system in question.
As sometimes happens, the model output does not match my original specification. While the actual deployment is not my individual responsibility, it helps considerably if I can indicate where the likely problem is. As these models are straightforward linear or generalized linear models (with perhaps a few input data transformations), I have found it useful to calculate the correlation between each of the input variables and the difference between the deployed model output and my own calculated model output. The logic is that input variables with a higher correlation with the deployment error are more likely to be calculated incorrectly. While this trick is not a cure-all, it quickly identifies the culprit data elements in 80% or more of cases.
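For readers who want to try this themselves, here is a minimal sketch in Python using pandas. The original work did not involve this code; the column names and data layout are my own assumptions. Each input variable is correlated with the deployment error, and the inputs are ranked by the magnitude of that correlation.

import pandas as pd

def rank_suspect_inputs(df, input_cols, deployed_col="deployed_score", reference_col="reference_score"):
    # Deployment error: deployed score minus my own reference calculation.
    error = df[deployed_col] - df[reference_col]
    # Correlate each input variable with the error; larger magnitude = more suspect.
    corrs = df[input_cols].corrwith(error)
    return corrs.abs().sort_values(ascending=False)

# Hypothetical usage, with made-up column names:
# suspects = rank_suspect_inputs(scores, ["income", "age", "balance"])
# print(suspects.head())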
Model Stability Over Time
A bedrock premise of all analytical work is that the future will resemble the past. After all, if the rules of the game keep changing, then there's little point in learning them. Specifically in predictive modeling, this premise requires that the relationship between input and output variables must remain sufficiently stable for discovered models to continue to be useful in the future.
In a recent analysis, I discovered that models universally exhibited a substantial drop in test performance when evaluated out-of-time, as compared with (in-time) out-of-sample testing. The relationships between at least some of my candidate input variables and the target variable are presumably changing over time. In an effort to minimize this issue, I attempted to determine which variables were most susceptible: I calculated the correlation between each candidate predictor and the target, both for an early time frame and for a later one.
My thinking was that variables whose correlation changed the most across time were the least stable and should be avoided. Note that I was looking for changes in correlation, and not whether correlations were strong or weak. Also, I regarded strengthening correlations just as suspect as weakening ones: The idea is for the model to perform consistently over time.
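A rough sketch of this screen, again in pandas, appears below. The column names, split date, and cutoff threshold are hypothetical; the point is simply to measure how much each predictor's correlation with the target moves between the two time frames.

import pandas as pd

def correlation_slide(df, predictors, target, date_col, split_date):
    # Split the history into an early and a late time frame.
    cutoff = pd.Timestamp(split_date)
    early = df[df[date_col] < cutoff]
    late = df[df[date_col] >= cutoff]
    # Correlation of each candidate predictor with the target in each window.
    early_corr = early[predictors].corrwith(early[target])
    late_corr = late[predictors].corrwith(late[target])
    out = pd.DataFrame({"early": early_corr, "late": late_corr})
    # The "slide": absolute change in correlation, regardless of direction.
    out["slide"] = (out["late"] - out["early"]).abs()
    return out.sort_values("slide", ascending=False)

# Hypothetical usage: avoid the variables whose correlation moved the most.
# slide = correlation_slide(history, candidates, "target", "obs_date", "2010-07-01")
# unstable = slide[slide["slide"] > 0.10].index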
In the end, avoiding the variables which exhibited this "correlation slide" did weaken model performance, but it ensured that performance did not deteriorate so drastically out-of-time.
Final Thought
It is interesting to see how useful analytical tools can be when applied to the analytical process itself. I note that solutions like the ones described here need not use fancy tools: Often, simple calculations of means, standard deviations, and correlations are sufficient.
Sunday, March 06, 2011
Statistics: The Need for Integration
I'd like to revisit an issue we covered here back in 2007, in Statistics: Why Do So Many Hate It?. Recent comments made to me, both in private conversation ("Statistics? I hated that class in college!") and in print, have prompted me to reconsider this issue.
One thing which occurs to me is that many people have a tendency to think of statistics in an isolated way. This world view keeps statistics at bay, as something which is done separately from other business activities, and, importantly, which is done and understood only by the statisticians. This is very far from the ideal which I suggest, in which statistics (including data mining) are much more integrated with the business processes of which they are a part.
In my opinion, this is a strange way to frame statistics. As an analogy, imagine if, when asked to produce a report, a business team turned to their "English guy", with the expectation that he did all the writing. I am not suggesting that everyone needs to do the heavy lifting that data miners do, but I am suggesting that people accept some responsibility for data mining's contribution to the business process. Managers, for example, who throw up their hands with the excuse that "they are not numbers people" forfeit control over an important part of their business function. It is healthier for everyone involved, I submit, if statistics moves away from being a black art, and statisticians become less of an arcane priesthood.
Labels:
business,
data mining,
integration,
organizational,
organizations,
statistics