[scikit-learn] Optimization algorithms in scikit-learn
t3kcit at gmail.com
Tue Sep 4 13:44:31 EDT 2018
We don't usually implement general-purpose optimizers in
scikit-learn, largely because different optimizers apply to
different kinds of problems.
For linear models we have SAG and SAGA; for neural nets we have Adam.
I don't think the authors claim to be faster than SAG, so I'm not sure
what the motivation would be for using their method.
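Those optimizers are exposed as solver options on the existing
estimators rather than as standalone objects. For example (the
hyperparameter values here are arbitrary):

    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier

    # SAG/SAGA are solver choices on the linear models
    linear_clf = LogisticRegression(solver="saga", max_iter=1000)

    # Adam is the default solver for the MLP estimators
    mlp_clf = MLPClassifier(solver="adam")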
On 09/04/2018 12:55 PM, Touqir Sajed wrote:
> I have been looking for stochastic optimization algorithms in
> scikit-learn that are faster than SGD, and so far I have come across
> Adam and momentum. Are there other methods implemented in
> scikit-learn? Particularly, the variance reduction methods such as
> SVRG? These variance reduction methods are the current state of the
> art in terms of convergence speed, while keeping the per-iteration
> cost linear in the number of features. If they are not implemented
> yet, I think it would be really great to implement them (I am happy
> to do so), since nowadays working with large datasets (where L-BFGS
> may not be practical) is the norm, and the improvements are
> definitely worth it.
> Computing Science Master's student at the University of Alberta,
> Canada, specializing in Machine Learning.
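For anyone following the thread, the SVRG-style variance-reduced
update being asked about looks roughly like the sketch below. This is
just an illustrative NumPy implementation for a least-squares
objective, not anything in scikit-learn, and the step size and epoch
count are arbitrary:

    import numpy as np

    def svrg_least_squares(X, y, eta=0.01, n_epochs=20, seed=0):
        # SVRG on f(w) = 1/(2n) * ||Xw - y||^2 (Johnson & Zhang, 2013)
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(n_epochs):
            w_snap = w.copy()
            # one full-gradient pass per epoch, taken at the snapshot
            mu = X.T @ (X @ w_snap - y) / n
            for _ in range(n):
                i = rng.integers(n)
                g = X[i] * (X[i] @ w - y[i])            # gradient of sample i at w
                g_snap = X[i] * (X[i] @ w_snap - y[i])  # same sample at the snapshot
                w -= eta * (g - g_snap + mu)            # variance-reduced step
        return w

Each inner update is O(d) in the number of features, and the
correction term (g - g_snap + mu) remains an unbiased gradient
estimate whose variance shrinks as w approaches the snapshot, which is
where the faster convergence comes from.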