<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">2018-06-05 1:06 GMT+09:00 Andreas Mueller <span dir="ltr"><<a href="mailto:t3kcit@gmail.com" target="_blank">t3kcit@gmail.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
  
    
  
  <div text="#000000" bgcolor="#FFFFFF">
    <p>Is that Jet?!</p>
    <p><a class="m_3291701484883149285moz-txt-link-freetext" href="https://www.youtube.com/watch?v=xAoljeRJ3lU" target="_blank">https://www.youtube.com/watch?<wbr>v=xAoljeRJ3lU</a></p>
    <p>;)<br></p></div></blockquote><div><br></div><div>Quite an entertaining presentation and informative to the non-expert about color theory, though I'm not sure I'd go so far as to call jet "evil" and that everyone hates it.<br></div><div>Actually, I didn't know that the colormap known as Jet actually had a name...I had reversed engineered it to reproduce what I saw elsewhere.<br></div><div>I suppose I'm glad I have already built my infrastructure's version of the metric surface plotter to allow complete color customization at runtime from the CLI, and can then tailor results to my audiences. :)<br><br></div><div>I'll keep this video's explanation in mind - thanks for the reference.<br><br></div><div>Cheers,<br></div><div>J.B.<br></div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div text="#000000" bgcolor="#FFFFFF"><p>

> On 6/4/18 11:56 AM, Brown J.B. via scikit-learn wrote:
>
>> Hello community,
>>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><span class="m_3291701484883149285gmail-">
                <blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
                  I wonder if there's something similar for the binary
                  class case where,<br>
                  the prediction is a real value (activation) and from
                  this we can also<br>
                  derive<br>
                    - CMs for all prediction cutoff (or set of cutoffs?)<br>
                    - scores over all cutoffs (AUC, AP, ...)<br>
                </blockquote>
              </span>
              AUC and AP are by definition over all cut-offs. And CMs
              for all<br>
              cutoffs doesn't seem a good idea, because that'll be
              n_samples many<br>
              in the general case. If you want to specify a set of
              cutoffs, that would be pretty easy to do.<br>
              How do you find these cut-offs, though?<span class="m_3291701484883149285gmail-"><br>
                <blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
                  <br>
                  For me, in analyzing (binary class) performance,
                  reporting scores for<br>
                  a single cutoff is less useful than seeing how the
                  many scores (tpr,<br>
                  ppv, mcc, relative risk, chi^2, ...) vary at various
                  false positive<br>
                  rates, or prediction quantiles.<br>
                </blockquote>
              </span></blockquote>
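
On the "specify a set of cutoffs" point above: a minimal, untested sketch of
what that could look like with the existing scikit-learn API (assuming
y_score is a 1-D array of real-valued activations and y_true is binary):

    import numpy as np
    from sklearn.metrics import confusion_matrix

    def confusion_matrices_at_cutoffs(y_true, y_score, cutoffs):
        """One 2x2 confusion matrix per user-chosen cutoff."""
        return {c: confusion_matrix(y_true, (y_score >= c).astype(int))
                for c in cutoffs}

    # e.g. use score quartiles as the cutoff set:
    # cms = confusion_matrices_at_cutoffs(
    #     y_test, y_score, np.percentile(y_score, [25, 50, 75]))

(AUC and AP themselves stay single calls: roc_auc_score(y_true, y_score)
and average_precision_score(y_true, y_score).)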

>> In terms of finding cut-offs, one could use the idea of metric
>> surfaces that I recently proposed
>> https://onlinelibrary.wiley.com/doi/abs/10.1002/minf.201700127
>> and then plot your per-threshold TPR/TNR pairs on the PPV/MCC/etc.
>> surfaces to determine what conditions you are willing to accept
>> against the background of your prediction problem.
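
To make the surface idea concrete, here is a stripped-down toy sketch (not
the plotter from my infrastructure, and untested as written): with the
class prevalence fixed, every confusion-matrix metric becomes a function of
(TPR, TNR), so it can be drawn as a contour plot and the per-threshold
(TPR, TNR) pairs from roc_curve overlaid on top.

    import numpy as np
    import matplotlib.pyplot as plt

    def surface(metric, prevalence, n=201):
        """Evaluate `metric` on a TPR x TNR grid at a fixed prevalence."""
        tpr, tnr = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
        pos, neg = prevalence, 1.0 - prevalence    # class fractions
        tp, fn = pos * tpr, pos * (1 - tpr)
        tn, fp = neg * tnr, neg * (1 - tnr)
        with np.errstate(divide="ignore", invalid="ignore"):
            return tpr, tnr, metric(tp, fp, tn, fn)

    def ppv(tp, fp, tn, fn):                       # precision
        return tp / (tp + fp)

    def mcc(tp, fp, tn, fn):                       # Matthews corr. coef.
        return (tp * tn - fp * fn) / np.sqrt(
            (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))

    tpr, tnr, z = surface(mcc, prevalence=0.2)
    plt.contourf(tpr, tnr, z, levels=20)           # any colormap you like ;)
    plt.xlabel("TPR"); plt.ylabel("TNR"); plt.colorbar(label="MCC")
    # overlay the per-threshold operating points of an actual model:
    # fpr, tpr_pts, _ = sklearn.metrics.roc_curve(y_test, y_score)
    # plt.plot(tpr_pts, 1 - fpr, "k.")             # TNR = 1 - FPR
    plt.show()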

>> I use these surfaces (a) to think about the prediction problem before
>> any attempt at modeling is made, and (b) to deconstruct results such
>> as "Accuracy = 85%" into interpretations in the context of my field
>> and the data being predicted.
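
As a quick illustration of the deconstruction in (b): accuracy is the plane
prevalence * TPR + (1 - prevalence) * TNR over the (TPR, TNR) grid, so a
single "85%" corresponds to an entire iso-line of quite different
trade-offs (toy numbers below, balanced classes assumed):

    # Accuracy = prev * TPR + (1 - prev) * TNR is a plane, so
    # "Accuracy = 0.85" is a whole line of (TPR, TNR) operating points.
    prev = 0.5
    for tpr in (1.00, 0.90, 0.70):
        tnr = (0.85 - prev * tpr) / (1 - prev)   # solve the plane for TNR
        print(f"TPR={tpr:.2f}  TNR={tnr:.2f}  -> accuracy 0.85")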

>> Hope this contributes a bit of food for thought.
>>
>> J.B.

_______________________________________________
scikit-learn mailing list
scikit-learn@python.org
https://mail.python.org/mailman/listinfo/scikit-learn