In Idempotent-capable Predictors, a Jul-06-2007 posting to the Machine Learning (Theory) Web log, the author suggests that it may be important for empirical models to be idempotent (in this context, meaning that the model is capable of passing one of its input variables straight through as its output).
This is of interest for two reasons: 1. One would like to believe that the modeling process could generate the right answer once it had actually been given the right answer, and 2. It is not uncommon for analysts to design model inputs which give "hints" (partial solutions to the problem). The article mentioned above notes that some typical modeling algorithms, such as logistic regression, are not idempotent-capable. The author wonders how important this property is, and I do, too. Thoughts?
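To make the idea concrete, here is a small sketch (my own illustration, not from the original post, using NumPy rather than any particular modeling tool): if the target itself is slipped in as an extra input column, ordinary least squares can reproduce it exactly by putting weight 1 on that column and 0 elsewhere, whereas a logistic model's sigmoid output can only approach 0/1 labels asymptotically, never equal them.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))                    # ordinary inputs
y = X @ [1.5, -2.0, 0.5] + rng.normal(scale=0.3, size=n)

# Linear regression IS idempotent-capable: add the answer as an input
# column, and least squares recovers it perfectly (zero residual).
X_hint = np.column_stack([X, y])
w, *_ = np.linalg.lstsq(X_hint, y, rcond=None)
pred = X_hint @ w
print(np.allclose(pred, y))                    # True: the hint passes through

# Logistic regression is NOT: the sigmoid is strictly between 0 and 1,
# so even a huge weight on a 0/1 hint column cannot reproduce the labels
# exactly, only approach them.
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
print(sigmoid(10.0) < 1.0)                     # True: never reaches 1 exactly
```

The contrast is the whole point of the "idempotent-capable" distinction: the linear model has the identity map available within its hypothesis space, while the logistic model does not.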
Saturday, July 21, 2007