[scikit-learn] Does NMF optimise over observed values
Arthur Mensch
arthur.mensch at inria.fr
Sun Aug 28 11:44:43 EDT 2016
Zeros are treated as actual zeros in the objective function, not as missing
values, i.e. there is no mask in the loss function.
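A quick way to see this: with the default Frobenius loss and no regularization, `NMF.reconstruction_err_` equals the Frobenius norm of `X - WH` taken over *every* entry, zeros included. The matrix below is an illustrative ratings-style example, not from the thread:

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.decomposition import NMF

# A small ratings-like matrix where 0 might mean "no rating".
X = csr_matrix(np.array([[5.0, 3.0, 0.0, 1.0],
                         [4.0, 0.0, 0.0, 1.0],
                         [1.0, 1.0, 0.0, 5.0],
                         [0.0, 1.0, 5.0, 4.0]]))

model = NMF(n_components=2, init='random', random_state=0, max_iter=500)
W = model.fit_transform(X)
H = model.components_

# The reported loss is the Frobenius norm over *all* entries,
# zeros included -- sparse input changes storage, not the objective.
err_all = np.linalg.norm(X.toarray() - W @ H, 'fro')
print(err_all, model.reconstruction_err_)  # the two agree
```

So passing a sparse `X` only affects how the data is stored and multiplied; the zeros still contribute to the loss.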
On 28 August 2016 at 16:58, "Raphael C" <drraph at gmail.com> wrote:
What I meant was, how is the objective function defined when X is sparse?
Raphael
On Sunday, August 28, 2016, Raphael C <drraph at gmail.com> wrote:
> Reading the docs for http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.NMF.html it says
>
> The objective function is:
>
> 0.5 * ||X - WH||_Fro^2
> + alpha * l1_ratio * ||vec(W)||_1
> + alpha * l1_ratio * ||vec(H)||_1
> + 0.5 * alpha * (1 - l1_ratio) * ||W||_Fro^2
> + 0.5 * alpha * (1 - l1_ratio) * ||H||_Fro^2
>
> Where:
>
> ||A||_Fro^2 = \sum_{i,j} A_{ij}^2 (Frobenius norm)
> ||vec(A)||_1 = \sum_{i,j} abs(A_{ij}) (Elementwise L1 norm)
>
> This seems to suggest that it is optimising over all values in X, even if X is sparse. When using NMF for collaborative filtering, we need the objective function to be defined only over the observed elements of X; the remaining elements should effectively be regarded as missing.
>
>
> What is the true objective function NMF is using?
>
>
> Raphael
>
>
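What collaborative filtering needs is a *masked* (weighted) NMF, which scikit-learn does not provide. Below is a minimal sketch of the standard approach, assuming Lee-Seung multiplicative updates with a binary weight matrix over the observed entries; the function name, data, and iteration count are illustrative, not from the thread:

```python
import numpy as np

def masked_nmf(X, mask, k, n_iter=1000, eps=1e-9, seed=0):
    """Multiplicative-update NMF fitted only where mask == 1.

    This is *not* sklearn.decomposition.NMF; it is a plain weighted-NMF
    sketch: entries outside the mask never enter the loss or the updates.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    MX = mask * X  # observed entries only
    for _ in range(n_iter):
        WH = mask * (W @ H)
        W *= (MX @ H.T) / (WH @ H.T + eps)   # update W on observed entries
        WH = mask * (W @ H)
        H *= (W.T @ MX) / (W.T @ WH + eps)   # update H on observed entries
    return W, H

# Usage: fit only the observed ratings; zeros outside the mask are ignored.
X = np.array([[5., 3., 0., 1.],
              [4., 0., 0., 1.],
              [1., 1., 0., 5.],
              [0., 1., 5., 4.]])
mask = (X > 0).astype(float)
W, H = masked_nmf(X, mask, k=2)
err_observed = np.sqrt(((mask * (X - W @ H)) ** 2).sum())
print(err_observed)  # loss over observed entries only
```

The only change from unmasked multiplicative updates is the elementwise `mask *` in the numerator and denominator, which zeroes the gradient contribution of unobserved entries.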
_______________________________________________
scikit-learn mailing list
scikit-learn at python.org
https://mail.python.org/mailman/listinfo/scikit-learn