<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<p><br>
</p>
<div class="moz-cite-prefix">On 10/24/18 4:11 AM, Manuel Castejón
Limas wrote:<br>
</div>
<blockquote type="cite"
cite="mid:CAK+G6sHDg9+7=0TXaWkycjVcwtfVtsjE-CPu7P2Z36x-6Wejzw@mail.gmail.com">
<div dir="ltr">
<div dir="ltr">Dear all,
<div>as a way of improving the documentation of PipeGraph we
intend to provide more examples of its usage. It was a
popular demand to show application cases to motivate its
usage, so here it is a very simple case with two steps: a
KMeans followed by a LDA. </div>
<div><br>
</div>
<div><a
href="https://mcasl.github.io/PipeGraph/auto_examples/plot_Finding_Number_of_clusters.html#sphx-glr-auto-examples-plot-finding-number-of-clusters-py">https://mcasl.github.io/PipeGraph/auto_examples/plot_Finding_Number_of_clusters.html#sphx-glr-auto-examples-plot-finding-number-of-clusters-py</a><br>
</div>
<div><br>
</div>
<div>This short example points out the following challenges:</div>
<div>- KMeans is not a transformer but an estimator</div>
</div>
</div>
</blockquote>
<p>KMeans is a transformer in sklearn:
<a class="moz-txt-link-freetext" href="http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html#sklearn.cluster.KMeans.transform">http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html#sklearn.cluster.KMeans.transform</a></p>
    <p>(you can't get the cluster labels as the output of transform,
      which is what you're doing here, but it is a transformer)<br>
    </p>
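    <p>For illustration, a minimal sketch with the standard sklearn API
      (the data here is random, just for the example): transform returns
      distances to the cluster centers, while predict returns the
      labels:</p>

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.random.RandomState(0).rand(20, 3)

km = KMeans(n_clusters=4, random_state=0, n_init=10).fit(X)

# transform returns distances to each cluster center: shape (20, 4)
distances = km.transform(X)
# predict returns the cluster labels: shape (20,)
labels = km.predict(X)

print(distances.shape, labels.shape)
```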
<blockquote type="cite"
cite="mid:CAK+G6sHDg9+7=0TXaWkycjVcwtfVtsjE-CPu7P2Z36x-6Wejzw@mail.gmail.com">
<div dir="ltr">
<div dir="ltr">
<div>- LDA score function requires the y parameter, while its
input does not come from a known set of labels, but from the
previous KMeans<br>
</div>
<div>- Moreover, the GridSearchCV.fit call would also
require a 'y' parameter</div>
</div>
</div>
</blockquote>
<p>Not true if you provide a scoring that doesn't require y or if
you don't specify scoring and the scoring method of the estimator
doesn't require y.</p>
<p>GridSearchCV.fit doesn't require y.<br>
</p>
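    <p>For instance, this runs as-is (random data just for
      illustration): no scoring is given and KMeans.score doesn't need
      y, so GridSearchCV.fit works on X alone:</p>

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import GridSearchCV

X = np.random.RandomState(0).rand(60, 3)

# no scoring specified, so GridSearchCV falls back to KMeans.score,
# which doesn't require y -- fit(X) alone is enough
grid = GridSearchCV(KMeans(random_state=0, n_init=10),
                    param_grid={'n_clusters': [2, 3, 4]}, cv=3)
grid.fit(X)

print(grid.best_params_)
```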
<blockquote type="cite"
cite="mid:CAK+G6sHDg9+7=0TXaWkycjVcwtfVtsjE-CPu7P2Z36x-6Wejzw@mail.gmail.com">
<div dir="ltr">
<div dir="ltr">
<div>- It would be nice to have access to the output of the
KMeans step as well.</div>
<div><br>
</div>
<div>PipeGraph is capable of addressing these challenges.</div>
<div><br>
</div>
          <div>The rationale for this example lies in the
            identification-reconstruction realm. In a scenario where the
            class labels are unknown, we might want to associate the
            quality of the clustering structure with the capability of a
            later model to reconstruct that structure. So the
            basic idea here is that if LDA is capable of getting good
            results, it is because the information from the KMeans was
            good enough for that purpose, hinting at the discovery of a
            good structure.</div>
<div><br>
</div>
</div>
</div>
</blockquote>
    <p>Can you provide a citation for that? That seems to depend heavily
      on the clustering algorithm and the classifier.<br>
      To me, stability scoring seems more natural:
      <a class="moz-txt-link-freetext" href="https://arxiv.org/abs/1007.1075">https://arxiv.org/abs/1007.1075</a></p>
    <p>This does seem interesting as well, though; I hadn't thought
      about it.</p>
<p>It's cool that this is possible, but I feel this is still not
really a "killer application" in that this is not a very common
pattern.</p>
    <p>Also, you could replicate something similar in sklearn with</p>
    <pre>import numpy as np
from sklearn.model_selection import cross_val_score

def estimator_scorer(testing_estimator):
    def my_scorer(estimator, X, y=None):
        y = estimator.predict(X)
        return np.mean(cross_val_score(testing_estimator, X, y))
    return my_scorer</pre>
    <p>Though using that we'd be doing nested cross-validation on the
      test set...<br>
      That's a bit of an issue in the current GridSearchCV
      implementation :-/ There's an issue by Joel somewhere<br>
      to implement something that allows training without splitting,
      which is what you'd want here.<br>
      You could run the outer grid-search with a custom cross-validation
      iterator that returns all indices as both training and test set
      and only does a single split, though...</p>
    <pre>import numpy as np
from sklearn.utils.validation import _num_samples

class NoSplitCV(object):
    def split(self, X, y=None, groups=None):
        indices = np.arange(_num_samples(X))
        yield indices, indices

    def get_n_splits(self, X=None, y=None, groups=None):
        return 1</pre>
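    <p>Putting the two pieces together, a self-contained sketch (the
      data and parameter grid are just illustrative, and _num_samples is
      a private sklearn utility):</p>

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.utils.validation import _num_samples


class NoSplitCV(object):
    # a single "split" that uses all indices as both train and test
    def split(self, X, y=None, groups=None):
        indices = np.arange(_num_samples(X))
        yield indices, indices

    def get_n_splits(self, X=None, y=None, groups=None):
        return 1


def estimator_scorer(testing_estimator):
    # score a clusterer by how well testing_estimator can
    # reconstruct its predicted labels under cross-validation
    def my_scorer(estimator, X, y=None):
        y = estimator.predict(X)
        return np.mean(cross_val_score(testing_estimator, X, y))
    return my_scorer


X = np.random.RandomState(0).rand(200, 3)
grid = GridSearchCV(KMeans(random_state=0, n_init=10),
                    param_grid={'n_clusters': [2, 3, 4]},
                    scoring=estimator_scorer(LinearDiscriminantAnalysis()),
                    cv=NoSplitCV())
grid.fit(X)
print(grid.best_params_)
```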
    <p>Though I acknowledge that your code only takes 4 lines, while
      mine takes 8 (though if we added NoSplitCV to sklearn mine would
      also only take 4 lines :P)<br>
    </p>
<p>I think pipegraph is cool, not meaning to give you a hard time ;)<br>
</p>
</body>
</html>