[scikit-learn] Optimization algorithms in scikit-learn

Touqir Sajed touqir at ualberta.ca
Tue Sep 4 12:55:54 EDT 2018


Hi,

I have been looking for stochastic optimization algorithms in scikit-learn
that are faster than SGD, and so far I have come across Adam and momentum.
Are there other methods implemented in scikit-learn? In particular, the
variance reduction methods such as SVRG (
https://papers.nips.cc/paper/4937-accelerating-stochastic-gradient-descent-using-predictive-variance-reduction.pdf
)? These variance reduction methods are the current state of the art in
terms of convergence speed while keeping the per-iteration cost linear in
the number of features. If they are not implemented yet, I think it would
be really great to implement them (I am happy to do so), since working on
large datasets (where L-BFGS may not be practical) is now the norm, and
there the improvements are definitely worth it.
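For reference, a minimal sketch of the SVRG update from the linked paper, on a
least-squares objective (this is purely illustrative NumPy, not scikit-learn
code; the function name and all parameters are my own):

```python
# Hypothetical SVRG sketch: minimize (1/2n) * ||Xw - y||^2.
# Per inner step it touches a single row, so the cost is linear in
# the number of features, while the snapshot gradient reduces variance.
import numpy as np

def svrg(X, y, eta=0.02, n_epochs=30, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_epochs):
        w_snap = w.copy()
        # Full gradient at the snapshot, computed once per epoch.
        mu = X.T @ (X @ w_snap - y) / n
        for _ in range(n):
            i = rng.integers(n)
            # Stochastic gradients at the current iterate and the snapshot.
            g_w = X[i] * (X[i] @ w - y[i])
            g_snap = X[i] * (X[i] @ w_snap - y[i])
            # Variance-reduced update: g_w - g_snap + mu is an unbiased
            # gradient estimate whose variance shrinks near the optimum.
            w -= eta * (g_w - g_snap + mu)
    return w

# Tiny demo on noiseless synthetic data.
rng = np.random.default_rng(42)
X = rng.standard_normal((200, 5))
w_true = np.arange(1.0, 6.0)
y = X @ w_true
w_hat = svrg(X, y)
print(w_hat)
```

Compared with plain SGD, the only extra work is one full-gradient pass per
epoch plus storing the snapshot, which is what buys the linear convergence
rate on strongly convex problems.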

Cheers,
Touqir

-- 
Computing Science Master's student at University of Alberta, Canada,
specializing in Machine Learning. Website :
https://ca.linkedin.com/in/touqir-sajed-6a95b1126