<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<br>
<br>
<div class="moz-cite-prefix">On 12/13/18 4:16 AM, Joris Van den
Bossche wrote:<br>
</div>
<blockquote type="cite"
cite="mid:CALQtMBYGyLKYZLGF=m4hXUWb4-B+6gpuA1CWpwjH8WV9xjFYCw@mail.gmail.com">
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<div dir="ltr">
<div dir="ltr">Hi all,<br>
<br>
I finally had some time to start looking at it over the last
few days.
Some preliminary work can be found here: <a
href="https://github.com/jorisvandenbossche/target-encoder-benchmarks"
moz-do-not-send="true">https://github.com/jorisvandenbossche/target-encoder-benchmarks</a>.<br>
</div>
</div>
</blockquote>
You continue to be my hero. I probably can't look at it in detail
before the holidays though :-/<br>
<blockquote type="cite"
cite="mid:CALQtMBYGyLKYZLGF=m4hXUWb4-B+6gpuA1CWpwjH8WV9xjFYCw@mail.gmail.com">
<div dir="ltr">
<div dir="ltr"><br>
Up to now, I only did some preliminary work to set up the
benchmarks (based on Patricio Cerda's code, <a
href="https://arxiv.org/pdf/1806.00979.pdf"
moz-do-not-send="true">https://arxiv.org/pdf/1806.00979.pdf</a>),
and with some initial datasets (medical charges and employee
salaries) compared the different implementations with their
default settings. <br>
So there is still a lot to do (add datasets, investigate the
actual differences between the implementations and their
results, compare the options in a more structured way, etc.;
some TODOs are listed in the README). However, I am mostly
on holiday for the rest of December. If somebody wants
to look at it further, that is certainly welcome; otherwise,
it will be a priority for me at the beginning of January.<br>
<br>
For datasets: additional ideas are welcome. For now, the idea
is to add a subset of the Criteo Terabyte Click dataset, and
to generate some data.<br>
</div>
<div dir="ltr"><br>
</div>
<div dir="ltr"><span class="gmail-im">>>> Does that
mean you'd be opposed to adding the leave-one-out
TargetEncoder<br>
>>> I would really like to add it before February<br>
>> A few months to get it right is not that bad, is it?<br>
</span></div>
<div dir="ltr"><span class="gmail-im">> </span>The PR is
over a year old already, and you hadn't voiced any opposition
<br>
> there.</div>
<div dir="ltr"><br>
</div>
<div dir="ltr">As far as I understand, the open PR is not a
leave-one-out TargetEncoder?<br>
</div>
</div>
</blockquote>
I would want it to be :-/<br>
<blockquote type="cite"
cite="mid:CALQtMBYGyLKYZLGF=m4hXUWb4-B+6gpuA1CWpwjH8WV9xjFYCw@mail.gmail.com">
<div dir="ltr">
<div dir="ltr">I also did not yet add the CountFeaturizer from
that scikit-learn PR, because it is actually quite different
(e.g it doesn't work for regression tasks, as it counts
conditional on y). But for classification it could be easily
added to the benchmarks.<br>
</div>
</div>
</blockquote>
I'm confused now. That's what TargetEncoder and leave-one-out
TargetEncoder do as well, right?<br>
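To make sure we're talking about the same thing, here is a toy pandas sketch of the distinction as I understand it (my own illustration, not code from the PR or any of the benchmarked implementations):<br>

```python
import pandas as pd

df = pd.DataFrame({
    "cat": ["a", "a", "a", "b", "b"],
    "y":   [1.0, 0.0, 1.0, 0.0, 1.0],
})

# Plain target encoding: each row gets the mean of y for its
# category, computed over all rows including the row itself.
plain = df.groupby("cat")["y"].transform("mean")

# Leave-one-out target encoding: the current row's y is excluded
# from the mean, which reduces target leakage when the encoding
# is fit on the same data used for training.
grp = df.groupby("cat")["y"]
loo = (grp.transform("sum") - df["y"]) / (grp.transform("count") - 1)

print(plain.tolist())
print(loo.tolist())
```

So both condition on y, but the leave-one-out variant changes what each individual row sees.<br>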
<br>
</body>
</html>