[scikit-learn] Does NMF optimise over observed values

Raphael C drraph at gmail.com
Sun Aug 28 10:57:44 EDT 2016


What I meant was, how is the objective function defined when X is sparse?
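
For concreteness, here is a minimal sketch of the distinction I mean (the tiny ratings-style matrix and n_components=2 are just made up for illustration): the full Frobenius objective counts every cell of X, including the zeros of a sparse matrix, whereas collaborative filtering would restrict the sum to the observed (non-zero) cells only.

    import numpy as np
    import scipy.sparse as sp
    from sklearn.decomposition import NMF

    # Toy ratings-style matrix: the zero cells are meant to be "unrated",
    # not "rated zero".
    X = sp.csr_matrix([[5.0, 0.0, 3.0],
                       [0.0, 4.0, 0.0],
                       [1.0, 0.0, 5.0]])

    model = NMF(n_components=2, init='random', random_state=0)
    W = model.fit_transform(X)
    H = model.components_
    X_hat = W.dot(H)

    X_dense = X.toarray()

    # Loss over *all* cells (zeros included), as in the docs formula
    # with alpha = 0:
    full_loss = 0.5 * ((X_dense - X_hat) ** 2).sum()

    # Loss over the observed (non-zero) cells only, which is what
    # collaborative filtering needs:
    mask = X_dense != 0
    observed_loss = 0.5 * ((X_dense[mask] - X_hat[mask]) ** 2).sum()

    print(full_loss, observed_loss)

If the objective is the first of these, the zeros are fitted as real zeros rather than treated as missing, which is what I want to confirm.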

Raphael

On Sunday, August 28, 2016, Raphael C <drraph at gmail.com> wrote:

> Reading the docs for http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.NMF.html it says
>
> The objective function is:
>
> 0.5 * ||X - WH||_Fro^2
> + alpha * l1_ratio * ||vec(W)||_1
> + alpha * l1_ratio * ||vec(H)||_1
> + 0.5 * alpha * (1 - l1_ratio) * ||W||_Fro^2
> + 0.5 * alpha * (1 - l1_ratio) * ||H||_Fro^2
>
> Where:
>
> ||A||_Fro^2 = \sum_{i,j} A_{ij}^2 (Frobenius norm)
> ||vec(A)||_1 = \sum_{i,j} abs(A_{ij}) (Elementwise L1 norm)
>
> This seems to suggest that it is optimising over all values in X even if X is sparse. When using NMF for collaborative filtering we need the objective function to be defined only over the observed elements of X; the remaining elements should effectively be regarded as missing.
>
>
> What is the true objective function NMF is using?
>
>
> Raphael
>
>