[scikit-learn] Max f1 score for soft classifier?

Stuart Reynolds stuart at stuartreynolds.net
Mon Jul 17 20:06:30 EDT 2017


That was also my thinking. Similarly, it's useful to be able to choose a
threshold that achieves some target TPR or FPR, so that methods can be
approximately compared against published results.

It's not obvious what to do, though, when a single increment of the threshold
changes the classification of several samples at once.
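For the original question (maximizing F1 over thresholds), here is a minimal
sketch of the kind of wrapper Joel describes below, again untested and
assuming binary labels and scores from predict_proba or decision_function:

    import numpy as np
    from sklearn.metrics import precision_recall_curve

    def best_f1_threshold(y_true, y_score):
        precision, recall, thresholds = precision_recall_curve(y_true, y_score)
        # precision/recall have one more entry than thresholds; drop the
        # final (precision=1, recall=0) point, which has no threshold.
        f1 = 2 * precision[:-1] * recall[:-1] / (precision[:-1] + recall[:-1] + 1e-12)
        idx = np.argmax(f1)
        return thresholds[idx], f1[idx]

As Joel notes, this only finds the best of the observed thresholds, not a
guaranteed global maximum.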

On Mon, Jul 17, 2017 at 5:00 PM Joel Nothman <joel.nothman at gmail.com> wrote:

> I suppose it would not be hard to build a wrapper that does this, if all
> we are doing is choosing a threshold, although a global maximum is not
> guaranteed without some kind of interpolation over the precision-recall
> curve.
>
> On 18 July 2017 at 02:41, Stuart Reynolds <stuart at stuartreynolds.net>
> wrote:
>
>> Does scikit have a function to find the maximum f1 score (and decision
>> threshold) for a (soft) classifier?
>>
>> - Stuart
>>