jbbrown at kuhp.kyoto-u.ac.jp
Wed Nov 20 00:24:40 EST 2019
Your request to do performance checking at each step of SVM-RFE is a pretty
simple one to address.
Since the contributors to scikit-learn have done great work to make the
RFE interface easy to use, the only real work required from you would be
to build a small wrapper function that:
(a) computes the step sizes you want to output prediction performances for,
and
(b) loops over those step sizes, passing each one as the n_features_to_select
parameter of RFE (built from the remaining features), making predictions
from an SVM retrained (and possibly optimized) on the reduced feature set,
and then outputting the metric(s) appropriate to your problem.
Tracing the feature weights is then done by accessing the "coef_" attribute
of the trained linear SVM; this can be output in loop step (b) as well.
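A minimal sketch of steps (a) and (b) might look like the following. The
function name, the synthetic data, and the choice of LinearSVC are my own
illustrative assumptions, not part of the original suggestion; RFE,
n_features_to_select, and coef_ are real scikit-learn API.

```python
# Hypothetical wrapper: evaluate SVM-RFE at several feature-set sizes
# and trace the linear-SVM weights of the retained features.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC


def rfe_performance_trace(X_train, y_train, X_test, y_test, n_features_list):
    """For each requested size n, run RFE down to n features,
    retrain a linear SVM on them, and record (n, accuracy, weights)."""
    results = []
    for n in n_features_list:
        svm = LinearSVC(dual=False, max_iter=5000)
        rfe = RFE(estimator=svm, n_features_to_select=n).fit(X_train, y_train)
        score = rfe.score(X_test, y_test)   # accuracy on the reduced feature set
        weights = rfe.estimator_.coef_      # weights of the n retained features
        results.append((n, score, weights))
    return results


# Illustrative use on synthetic data:
X, y = make_classification(n_samples=200, n_features=50, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for n, score, w in rfe_performance_trace(X_tr, y_tr, X_te, y_te, [50, 25, 10, 2]):
    print(n, round(score, 3), w.shape)
```

Because the wrapper takes the list of sizes as an argument, the same code
serves both a systematic schedule and a hand-picked one.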
> where each time 10% of the features are removed.
> How can one get the accuracy over all the levels of the elimination stages?
> For example, I want to get performance over 1000 features, 900 features,
> 800 features,....,2 features, 1 feature.
Just a technicality, but with a 10% reduction at each step you would have
1000, 900, 810, 729, 656, ... .
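That schedule can be generated with a few lines; the helper below is my own
illustrative sketch (each size is 90% of the previous, rounded down, with a
guard so small counts still shrink):

```python
# Illustrative helper: geometric feature-elimination schedule,
# removing ~10% of the remaining features at each step.
def geometric_sizes(start, frac=0.10, stop=1):
    sizes = [start]
    while sizes[-1] > stop:
        nxt = int(sizes[-1] * (1 - frac))
        if nxt == sizes[-1]:      # rounding stalled; force a decrement
            nxt -= 1
        sizes.append(max(nxt, stop))
    return sizes

print(geometric_sizes(1000)[:5])  # [1000, 900, 810, 729, 656]
```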
Either way, if you allow your wrapper function to take a pre-computed list
of feature sizes, you can flexibly change between a systematic way or a
context-informed way of specifying the feature sizes (and resulting weights)
to evaluate.
Hope this helps.
Kyoto University Graduate School of Medicine