Comments on Applied Data Science and Machine Learning: "Family Recipe For Neural Networks"

Anonymous (2006-11-07):
Another good reference for making ANNs work is the book by Orr and Müller, Neural Networks: Tricks of the Trade (1999): http://www.amazon.com/Neural-Networks-Richard-K-Miller/dp/3540653112/sr=8-1/qid=1162971495/ref=pd_bbs_sr_1/002-3343124-2088061?ie=UTF8&s=books

Dean Abbott (2006-11-08):
So many good "rules of thumb" here (I'll contact you directly about using some of them in my data mining course!)

I just got back from the SPSS Directions conference (I'll post on that in a day or two) and heard several times the usual mantra about neural networks being too difficult to use, or too much of a "black box". I disagree, and will post on why I believe that is the case as soon as …

Dean Abbott (2006-11-17):
The scaling tip is interesting. About 15 years ago I used to do the same with neural networks: take classification problems and, rather than having the net predict "0" and "1", change the outputs to "0.05" and "0.95" or so, and the network converged faster. Why? I also suspected it was two things. First, the "0" and "1" values were at the fringe of the tails of the sigmoid, where the derivative is nearly zero …

Alkanen (2006-11-26):
Regarding stopping early: how do you ensure satisfactory results when you stop early? Do you pick a new training/testing set pair and repeat the fine-tuning ad infinitum until the error is small enough?

I've only recently begun using ANNs, just for fun, and I've found that if I stop as the test-set values begin to deteriorate, the output from the nets will be way too …

Will Dwinnell (2007-04-29):
I divide the data before training into "train" and "test" sets.
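
The two ideas recurring in this thread, stopping training when error on a held-out test set deteriorates and rescaling 0/1 targets to 0.05/0.95, can be made concrete with a short sketch. The code below is a minimal illustration, not from the original post or any commenter: it trains a single sigmoid unit by gradient descent on squared error as a stand-in for a full network (the stopping logic is identical), and all data, names, and hyperparameters are made up for the example.

    # Minimal sketch (illustrative only): early stopping on a held-out set,
    # plus Dean Abbott's trick of scaling binary targets to 0.05/0.95 so
    # training does not chase the saturated tails of the sigmoid.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic two-class data (values chosen arbitrarily for the demo).
    n, d = 400, 5
    X = rng.normal(size=(n, d))
    true_w = rng.normal(size=d)
    y = (X @ true_w + 0.5 * rng.normal(size=n) > 0).astype(float)

    # Split into "train" and "test" sets, as Will Dwinnell describes.
    split = n // 2
    X_tr, y_tr = X[:split], y[:split]
    X_te, y_te = X[split:], y[split:]

    # Scaling trick: replace 0/1 targets with 0.05/0.95 for training.
    y_tr_scaled = np.where(y_tr > 0.5, 0.95, 0.05)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    w = np.zeros(d)
    lr = 0.1
    best_err, best_w = np.inf, w.copy()
    patience, bad_epochs = 20, 0

    for epoch in range(5000):
        # Gradient step on squared error; p*(1-p) is the sigmoid derivative.
        p = sigmoid(X_tr @ w)
        grad = X_tr.T @ ((p - y_tr_scaled) * p * (1 - p)) / len(y_tr)
        w -= lr * grad

        # Early stopping: track test-set error and stop only after it has
        # failed to improve for `patience` consecutive epochs.
        test_err = np.mean((sigmoid(X_te @ w) - y_te) ** 2)
        if test_err < best_err:
            best_err, best_w, bad_epochs = test_err, w.copy(), 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break

    w = best_w  # keep the weights from the best test-set epoch
    print(f"stopped at epoch {epoch}, test MSE {best_err:.4f}")

One design note, speaking to Alkanen's question: with this patience-style variant, training is not halted at the first sign of deterioration; the loop keeps going for a while and then restores the weights from the best test-set epoch, which guards against quitting on a transient bump in the test error.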