<div dir="ltr">Howdy<div><br></div><div>This documentation seems to be split between the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html#sklearn.ensemble.RandomForestClassifier">RandomForestClassifier</a> documentation that discusses sampling with replacement and the <a href="http://scikit-learn.org/stable/modules/ensemble.html">Ensemble</a> documentation that discusses the splits. I agree that it could be made more explicit. If you want to build off of those to make sure that both discuss both features, I would be happy to review it.</div><div><br></div><div>Jacob</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Jun 5, 2017 at 10:02 PM, Brown J.B. <span dir="ltr"><<a href="mailto:jbbrown@kuhp.kyoto-u.ac.jp" target="_blank">jbbrown@kuhp.kyoto-u.ac.jp</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div><div><div><div><div><div><div><div><div>Dear Jacob,<br><br></div>Thank you for this clarification. It is a great help in interpreting the (good) results that we are obtaining for computational chemogenomics, and also help in deciding directions of future studies.<br><br></div>Perhaps then, the random forest documentation (description web page) could be updated to reflect our discussion, in that it might help others who have the same questions of interpretation.<br><br></div>Perhaps, we can add the following (with notation symbols corrected to match sklearn standards):<br>----------<br></div>In general, for a modeling problem with N training instances each having F features, a random forest of T trees operates by building T decision trees such that each tree is provided a subsampling of N instances and the F features for those subsampled instances. <br><br>When bootstrapping is applied, the instance subsampling can potentially choose the same instance multiple times, in which such an instance will have elevated weighting.<br></div>When bootstrapping is not applied, the entire training set is provided to the tree-building algorithm.<br><br></div>Each tree is built by considering a maximum specified number of informative features at each decision node, such that features with no variance are excluded from the features to consider for a split and do not count toward the number of informative features.<br></div>Splitting on the informative features can occur as many times as necessary, unless a maximum depth is specified in the constructor.<br></div><div>Note that an informative feature can be re-applied to form a decision criteria at more than node in the decision tree.<br>----------<br></div><br></div>Adjustments welcome.<br><br></div>Many thanks again!<span class="HOEnZb"><font color="#888888"><br></font></span></div><span class="HOEnZb"><font color="#888888">J.B.</font></span><div><div class="h5"><br><div><div><div><div><div><div><div><div><div><br><div><div><div><div><div><div><div><div><div><div class="gmail_extra"><br><div class="gmail_quote">2017-06-06 2:54 GMT+09:00 Jacob Schreiber <span dir="ltr"><<a href="mailto:jmschreiber91@gmail.com" target="_blank">jmschreiber91@gmail.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Howdy<div><br></div><div>When doing bootstrapping, n samples are selected from the dataset WITH replacement, where n is the number of samples in the dataset. This leads to situations where some samples have a weight > 1 and others have a weight of 0. This is done separately for each tree.</div><div><br></div><div>When selecting the number of features, this should be considered more like `max_informative_features.` Essentially, if a tree considers splitting a feature that is constant, that won't count against the `max_features` threshold that is set. This helps guard against situations where many uninformative trees are built because the dataset is full of uninformative features. You can see this in the code here: (<a href="https://github.com/scikit-learn/scikit-learn/blob/14031f65d144e3966113d3daec836e443c6d7a5b/sklearn/tree/_splitter.pyx#L361" target="_blank">https://github.com/scikit-lea<wbr>rn/scikit-learn/blob/14031f65d<wbr>144e3966113d3daec836e443c6d7a5<wbr>b/sklearn/tree/_splitter.pyx#<wbr>L361</a>). This is done on a -per split basis-, meaning that a tree can have more than `max_features` number of features considered. </div><div><br></div><div>In your example, it is not that there would be at most 20 splits in a tree, it is that at each split only 20 informative features would be considered. You can split on a feature multiple times (consider the example where you have one features and x < 0 is class 0, 0 <= x <= 10 is class 1, and x > 10 is class 0 again). </div><div><br></div><div>Let me know if you have any other questions!</div></div><div class="gmail_extra"><br><div class="gmail_quote"><div><div class="m_3890393324960812331gmail-h5">On Mon, Jun 5, 2017 at 7:46 AM, Brown J.B. <span dir="ltr"><<a href="mailto:jbbrown@kuhp.kyoto-u.ac.jp" target="_blank">jbbrown@kuhp.kyoto-u.ac.jp</a>></span> wrote:<br></div></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div class="m_3890393324960812331gmail-h5"><div dir="ltr"><div><div>Dear community,<br><br></div>This is a question regarding how to interpret the documentation and semantics of the random forest constructors.<br><br></div><div>In forest.py (of version 0.17 which I am still using), the documentation regarding the number of features to consider states on lines 742-745 of the source code that the search may effectively inspect more than `max_features` when determining the features to pick from in order to split a node.<br></div><div>It also states that it is tree specific.<br><br></div><div>Am I correct in:<br><br></div><div>Interpretation #1 - For bootstrap=True, sampling with replacement occurs for the number of training instances available, meaning that the subsample presented to a particular tree will have some probability of containing overlaps and therefore not the full input training set, but for bootstrap=False, the entire dataset will be presented to each tree?<br><br></div><div>Interpretation #2 - Particularly, with the way I interpret the documentation stating that "The sub-sample size is always the same as the original
input sample size...", it seems to me that bootstrap=False then provides the entire training dataset to each decision tree, and it is a matter of which feature was randomly selected first from the features given that determines what the tree will become.<br></div><div>That would suggest that, if bootstrap=False, and if the number of trees is high but the feature dimensionality is very low, then there is a high possibility that multiple copies of the same tree will emerge from the forest.<br><br>Interpretation #3 - the feature subset is not subsampled per tree, but
rather all features are presented for the subsampled training data
provided to a tree ? For example, if the dimensionality is 400 on a
6000-input training dataset that has randomly been subsampled (with bootstrap=True) to yield 4700 unique training samples,
then the tree builder will consider all 400 dimensions/features with
respect to the 4700 samples, picking at most `max_features` number of
features (out of 400) for building splits in the tree? So by default
(sqrt/auto), there would be at most 20 splits in the tree?<br><br></div><div>Confirmations, denials, and corrections to my interpretations are _highly_ welcome.<br></div><div><br></div><div>As always, my great thanks to the community.<br></div><div><br></div><div>With kind regards,<br></div><div>J.B. Brown<br></div><div>Kyoto University Graduate School of Medicine<br></div></div>
<br></div></div>______________________________<wbr>_________________<br>
scikit-learn mailing list<br>
<a href="mailto:scikit-learn@python.org" target="_blank">scikit-learn@python.org</a><br>
<a href="https://mail.python.org/mailman/listinfo/scikit-learn" rel="noreferrer" target="_blank">https://mail.python.org/mailma<wbr>n/listinfo/scikit-learn</a><br>
<br></blockquote></div><br></div>
<br>______________________________<wbr>_________________<br>
scikit-learn mailing list<br>
<a href="mailto:scikit-learn@python.org" target="_blank">scikit-learn@python.org</a><br>
<a href="https://mail.python.org/mailman/listinfo/scikit-learn" rel="noreferrer" target="_blank">https://mail.python.org/mailma<wbr>n/listinfo/scikit-learn</a><br>
<br></blockquote></div><br></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div>
<br>______________________________<wbr>_________________<br>
scikit-learn mailing list<br>
<a href="mailto:scikit-learn@python.org">scikit-learn@python.org</a><br>
<a href="https://mail.python.org/mailman/listinfo/scikit-learn" rel="noreferrer" target="_blank">https://mail.python.org/<wbr>mailman/listinfo/scikit-learn</a><br>
<br></blockquote></div><br></div>