Tuesday, March 29, 2011

Analyzing the Results of Analysis

Sometimes, the output of analytical tools can be voluminous and complicated. Making sense of it sometimes requires, well, analysis. Following are two examples of applying our tools to their own output.


Model Deployment Verification

From time to time, I have deployed predictive models on a vertical application in the finance industry which is not exactly "user friendly". I have virtually no access to the actual deployment and execution processes, and am largely limited to examining the production-mode output, as implemented on the system in question.

As sometimes happens, the model output does not match my original specification. While the actual deployment is not my individual responsibility, it very much helps if I can indicate where the likely problem lies. As these models are straightforward linear or generalized linear models (with perhaps a few input data transformations), I have found it useful to calculate the correlation between each of the input variables and the difference between the deployed model output and my own calculated model output. The logic is that input variables with a higher correlation with the deployment error are more likely to be calculated incorrectly. While this trick is not a cure-all, it quickly identifies the culprit data elements in 80% or more of cases.
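A minimal sketch of this check in Python with pandas (the file name and the column names deployed_score and expected_score are placeholders; the original work was not necessarily done with these tools):

```python
import pandas as pd

# Hypothetical data: raw model inputs plus the score produced by the
# production system and the score I calculate from the same inputs
# using the original model specification.
records = pd.read_csv("scored_records.csv")        # assumed layout
deployed = records.pop("deployed_score")
expected = records.pop("expected_score")

# Deployment error: difference between the production output and my own
# calculated output.
error = deployed - expected

# Correlate each input variable with the deployment error and rank by
# absolute correlation; variables near the top are the most likely to be
# transformed or mapped incorrectly in production.
suspects = records.corrwith(error).abs().sort_values(ascending=False)
print(suspects.head(10))
```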


Model Stability Over Time

A bedrock premise of all analytical work is that the future will resemble the past. After all, if the rules of the game keep changing, then there's little point in learning them. Specifically in predictive modeling, this premise requires that the relationship between input and output variables must remain sufficiently stable for discovered models to continue to be useful in the future.

In a recent analysis, I discovered that models universally exhibited a substantial drop in test performance when comparing out-of-time results to (in-time) out-of-sample results. The relationships between at least some of my candidate input variables and the target variable are presumably changing over time. In an effort to minimize this issue, I attempted to determine which variables were most susceptible: I calculated the correlation between each candidate predictor and the target, both for an early time frame and for a later one.

My thinking was that variables whose correlation changed the most across time were the least stable and should be avoided. Note that I was looking for changes in correlation, and not whether correlations were strong or weak. Also, I regarded strengthening correlations just as suspect as weakening ones: The idea is for the model to perform consistently over time.
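A sketch of this "correlation slide" calculation, again using pandas; the file name, date column, target column, and cut-off date are illustrative assumptions, not details from the original analysis:

```python
import pandas as pd

# Hypothetical data: each row is an observation with a date, a set of
# candidate predictors, and the target variable.
df = pd.read_csv("modeling_data.csv", parse_dates=["obs_date"])  # assumed layout
predictors = [c for c in df.columns if c not in ("obs_date", "target")]

# Split the history into an early and a later time frame.
early = df[df["obs_date"] < "2009-01-01"]
late = df[df["obs_date"] >= "2009-01-01"]

# Correlation of each candidate predictor with the target in each window.
corr_early = early[predictors].corrwith(early["target"])
corr_late = late[predictors].corrwith(late["target"])

# "Correlation slide": absolute change in correlation across time.
# Large values flag unstable predictors, whether the correlation
# strengthened or weakened.
slide = (corr_late - corr_early).abs().sort_values(ascending=False)
print(slide.head(10))
```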

In the end, avoiding variables which exhibited "correlation slide" did weaken model performance, but it ensured that performance did not deteriorate so drastically out-of-time.


Final Thought

It is interesting to see how useful analytical tools can be when applied to the analytical process itself. I note that solutions like the ones described here need not use fancy tools: often, simple calculations of means, standard deviations, and correlations are sufficient.

2 comments:

Tim Manns said...

RE
" I calculated the correlation between each candidate predictor and the target, both for an early time-frame and for a later one.
My thinking was that variables whose correlation changed the most across time were the least stable and should be avoided."

>> I think that is a brilliant idea. I'm going to steal it :)
Additionally I'm thinking it might be very useful to identify the main variables that affect seasonality when you don't have the full history that covers the seasonality cycle (where your historical data is not large enough to properly identify the seasonality cycle).

Jožo Kováč said...

Why are those unstable predictors usually the most important ones too? :o)