[scikit-learn] What if I don't want performance measures per each outcome class?

Joel Nothman joel.nothman at gmail.com
Mon Apr 24 06:55:25 EDT 2017


"Traditional" sensitivity is defined for binary classification only.
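In the binary case, the "traditional" sensitivity TP / (TP + FN) is exactly what recall_score computes for the positive class. A minimal sketch (the labels are made up purely for illustration):

```python
from sklearn.metrics import confusion_matrix, recall_score

y_true = [0, 1, 1, 0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]

# confusion_matrix.ravel() gives tn, fp, fn, tp for a binary problem
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / float(tp + fn)  # traditional sensitivity

# this equals recall_score for the positive class
print(sensitivity, recall_score(y_true, y_pred))
```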

Maybe micro-averaging is what you're looking for, but in the plain
multiclass case, with nothing more specified, it merely reproduces accuracy.
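You can check that equivalence directly; a short sketch with made-up multiclass labels:

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# hypothetical 3-class predictions, just to illustrate
y_true = [0, 1, 2, 2, 1, 0, 2, 1]
y_pred = [0, 2, 2, 2, 0, 0, 1, 1]

# micro-averaging pools TP/FP/FN over all classes before dividing
p, r, f, _ = precision_recall_fscore_support(y_true, y_pred, average='micro')

# for single-label multiclass data, micro P == micro R == micro F == accuracy
print(p, r, f, accuracy_score(y_true, y_pred))
```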

Perhaps quantiles of the scores returned by permutation_test_score will
give you the CIs you seek.
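For concreteness, a minimal sketch of taking quantiles of the permutation scores (iris and logistic regression are stand-ins here; note these quantiles describe the null distribution of the score under permuted labels, which may or may not be the interval you're after):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import permutation_test_score

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000)

# score on the true labels, scores under label permutation, and a p-value
score, perm_scores, pvalue = permutation_test_score(
    clf, X, y, cv=3, n_permutations=30, random_state=0)

# 2.5% / 97.5% quantiles of the permutation score distribution
lo, hi = np.percentile(perm_scores, [2.5, 97.5])
print(score, (lo, hi), pvalue)
```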

On 24 April 2017 at 01:50, Suranga Kasthurirathne <surangakas at gmail.com>
wrote:

>
> Hello all,
>
> I'm looking at the confusion matrix and performance measures (precision,
> recall, f-measure etc.) produced by scikit.
>
> It seems that scikit calculates these measures for each outcome class, and
> then combines them into some sort of average.
>
> I would really like to see these measures presented in the traditional(?)
> context, where sensitivity is TP / (TP + FN). (and is combined, and NOT per
> class!)
>
> If I were to take scikit predictions and calculate sensitivity as
> above, my results won't match what scikit reports :(
>
> How can I switch to seeing overall performance measures, rather than per
> class? And also, how may I obtain 95% confidence intervals for each of
> these measures?
>
> --
> Best Regards,
> Suranga
>
> _______________________________________________
> scikit-learn mailing list
> scikit-learn at python.org
> https://mail.python.org/mailman/listinfo/scikit-learn
>
>
