Abstract: You have to see it to believe it! Imagine a technique where you randomly delete as many as 80% of the observations in your training set without decreasing predictive power (in many cases, actually improving it), while reducing computing time by an order of magnitude. In its simplest version, that is what stochastic thinning does. Here, the performance improvement is measured outside the training set, on the validation set, also called the test set. I illustrate the method on a real-life dataset, in the context of regression and neural networks; in the latter case, it speeds up the training stage by a noticeable factor. The thinning process applies to the training set and may involve multiple tiny random subsets called fractional training sets, which together represent less than 20% of the training data. The technique can also be used for data compression, or to measure the strength of a machine learning algorithm.
I also show the potential limitations of the new technique, and introduce the concepts of leading or influential observations (those kept for learning purposes) and followers (observations dropped from the training set). The term "influential observation" should not be confused with its usage in statistics, although in both cases it leads to explainable AI. The neural network used in this article produces replicable results by controlling all sources of randomness, a property rarely satisfied in other implementations.
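The core idea described above can be illustrated with a minimal sketch. The code below is my own toy example, not the article's implementation: it fits an ordinary least-squares regression on synthetic data, once on the full training set and once on a random 20% fractional training set, then compares validation RMSE. The dataset, the 20% keep rate, and the fixed seed (for replicability, echoing the article's emphasis on controlling randomness) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)  # fixed seed: all randomness is controlled

# Synthetic regression data (illustrative stand-in for a real-life dataset)
n, p = 5000, 5
X = rng.normal(size=(n, p))
beta = np.arange(1, p + 1, dtype=float)        # true coefficients
y = X @ beta + rng.normal(scale=0.5, size=n)   # noisy linear response

# Train / validation split
X_train, y_train = X[:4000], y[:4000]
X_val, y_val = X[4000:], y[4000:]

def fit_ls(Xt, yt):
    """Ordinary least-squares fit."""
    return np.linalg.lstsq(Xt, yt, rcond=None)[0]

def val_rmse(coef):
    """Root mean squared error on the validation set."""
    return float(np.sqrt(np.mean((X_val @ coef - y_val) ** 2)))

# Baseline: train on the full training set
rmse_full = val_rmse(fit_ls(X_train, y_train))

# Stochastic thinning: keep a random 20% fractional training set
keep = rng.choice(len(X_train), size=len(X_train) // 5, replace=False)
rmse_thin = val_rmse(fit_ls(X_train[keep], y_train[keep]))

print(f"full training set RMSE:   {rmse_full:.4f}")
print(f"20% fractional set RMSE:  {rmse_thin:.4f}")
```

In this well-specified setting the fractional training set gives nearly the same validation error as the full one, at a fraction of the fitting cost; the article's stronger claims (improvement, and an order-of-magnitude speedup for neural networks) depend on the model and dataset.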