[scikit-learn] How to keep a model running in memory?
Roland Hochmuth
rhochmuth at alteryx.com
Thu Dec 20 11:12:18 EST 2018
Hi Liam, Not sure I have the complete context for what you are trying to do, but have you considered using Python multiprocessing to start a separate process? The lifecycle of that process could begin when the Flask server starts up or on the first request. The separate process would load and run the model. Depending on what you would like to do, some form of IPC mechanism, such as gRPC, could be used to control or get updates from the model process.
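A rough sketch of that setup, assuming a scikit-learn model saved with joblib (the "model.joblib" path, the function names, and the Pipe used for IPC here are all placeholders; gRPC or another mechanism could take the Pipe's place):

import multiprocessing as mp

import joblib


def model_worker(conn, model_path):
    # Load the model once; this is the only place the heavy load happens.
    model = joblib.load(model_path)
    while True:
        features = conn.recv()          # block until the parent sends a request
        if features is None:            # sentinel value: shut the worker down
            break
        conn.send(model.predict(features).tolist())


def start_model_worker(model_path="model.joblib"):
    # Start the worker process, e.g. when the Flask server starts up,
    # and hand back the parent end of the pipe for later requests.
    parent_conn, child_conn = mp.Pipe()
    mp.Process(target=model_worker, args=(child_conn, model_path), daemon=True).start()
    return parent_conn


# Illustrative use from the web server:
#   conn = start_model_worker()          # do this once at startup
#   conn.send([[5.1, 3.5, 1.4, 0.2]])    # send features for one request
#   prediction = conn.recv()             # wait for the worker's answer
#   conn.send(None)                      # on shutdown, stop the worker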
Regards --Roland
From: scikit-learn <scikit-learn-bounces+rhochmuth=alteryx.com at python.org> on behalf of Aneto <aneto at chatdesk.com>
Reply-To: Scikit-learn mailing list <scikit-learn at python.org>
Date: Thursday, December 20, 2018 at 8:21 AM
To: "scikit-learn at python.org" <scikit-learn at python.org>
Cc: Liam Geron <liam at chatdesk.com>
Subject: [scikit-learn] How to keep a model running in memory?
Hi scikit learn community,
We currently use scikit-learn for a model that generates predictions on a server endpoint. We would like to keep the model running in memory instead of having to reload it for every new request that comes in to the server.
Can you please point us in the right direction for this? Any tutorials or examples would be appreciated.
In case it's helpful, we use Flask for our web server.
Thank you!
Aneto