[scikit-learn] caching transformers during hyper parameter optimization
Georg Heiler
georg.kf.heiler at gmail.com
Wed Aug 16 07:28:21 EDT 2017
There is a new option in the pipeline:
http://scikit-learn.org/stable/modules/pipeline.html#pipeline-cache
How can I use this to also cache the transformed data? During hyper-parameter
tuning I only want to fit the last step, i.e. the estimator, and not re-run
the transform methods of the cleaning steps.
Is it possible to apply this to cross-validation as well? I would like all
the folds to be precomputed and stored to disk in a folder.
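For context, a minimal sketch of the `memory` option mentioned above, assuming
scikit-learn >= 0.19; the step names, data, and parameter grid are illustrative,
not from the original question. Only the estimator's parameters vary in the
grid, so the cached transformer fit can be reused across grid points instead of
being recomputed each time (note that caching keys on the input data, so each
CV fold still gets its own cache entry):

```python
# Sketch of Pipeline caching with `memory` (illustrative names and data).
from tempfile import mkdtemp
from shutil import rmtree

from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=200, n_features=20, random_state=0)

cachedir = mkdtemp()  # fitted transformers are memoized here via joblib
pipe = Pipeline(
    steps=[("reduce", PCA(n_components=5)), ("clf", LogisticRegression())],
    memory=cachedir,
)

# Only `clf__C` varies, so the PCA fit on each training fold is computed
# once and then read back from the cache for the remaining grid points.
grid = GridSearchCV(pipe, param_grid={"clf__C": [0.1, 1.0, 10.0]}, cv=3)
grid.fit(X, y)
print(grid.best_params_)

rmtree(cachedir)  # remove the cache directory when done
```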
Regards,
Georg