[scikit-learn] Normalizer, l1 and l2 norms
Guillaume Lemaître
g.lemaitre58 at gmail.com
Tue Sep 24 07:59:25 EDT 2019
Since you are normalizing sample by sample, you don't need information from
the training set to normalize a new sample.
You just need to compute the norm of this new sample.
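To make this concrete, here is a minimal pure-Python sketch of the per-sample scaling that Normalizer performs (the helper name `normalize_sample` is mine, for illustration -- in scikit-learn you would just call `Normalizer().transform(X)`, and its `fit` learns nothing):

```python
import math

def normalize_sample(x, norm="l2"):
    """Scale one sample (a list of floats) to unit l1 or l2 norm.

    Only the sample itself is needed -- no statistics from any
    training set -- which is why Normalizer's fit() is a no-op.
    """
    if norm == "l2":
        n = math.sqrt(sum(v * v for v in x))
    elif norm == "l1":
        n = sum(abs(v) for v in x)
    else:
        raise ValueError("norm must be 'l1' or 'l2'")
    return [v / n for v in x] if n else list(x)

print(normalize_sample([3.0, 4.0]))        # l2 norm is 5.0 -> [0.6, 0.8]
print(normalize_sample([3.0, 4.0], "l1"))  # l1 norm is 7.0 -> [3/7, 4/7]
```

Contrast this with StandardScaler or MinMaxScaler, which do learn column-wise statistics at fit time and reuse them on new data.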
On Tue, 24 Sep 2019 at 13:41, Sole Galli <solegalli1 at gmail.com> wrote:
> Hello team,
>
> Quick question with respect to the Normalizer().
>
> My understanding is that this transformer divides each sample (row) of a
> matrix by that sample's euclidean (l2) or manhattan (l1) norm.
>
> From the sklearn docs, I understand that the Normalizer() does not learn
> the norms from the train set and store them. It rather normalises the
> data according to the norms the data set presents, which may or may not
> be the same in test and train.
>
> Am I understanding this correctly?
>
> If so, what is the reason not to store these parameters in the Normalizer
> and use them to scale future data?
>
> If not getting it right, what am I missing?
>
> Many thanks, and I would appreciate it if you have an article on this to share.
>
> Cheers
>
> Sole
>
>
> _______________________________________________
> scikit-learn mailing list
> scikit-learn at python.org
> https://mail.python.org/mailman/listinfo/scikit-learn
>
--
Guillaume Lemaitre
Scikit-learn @ Inria Foundation
https://glemaitre.github.io/