[scikit-learn] Fairness Metrics

Andreas Mueller t3kcit at gmail.com
Tue Oct 30 11:57:40 EDT 2018


Hi Josh.
I think this would be cool to add at some point, but I'm not sure that point is now.
I'm a bit surprised by their "fairness report". They include four different 
fairness metrics that conflict with each other (with unequal base rates between 
groups you generally can't satisfy all of them at once). If they are all 
included in the fairness report, then you always fail the report, right?
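
To make concrete what I mean by conflicting, here is a rough, purely 
illustrative sketch (not the Aequitas API) of two of these group metrics 
in plain numpy; with unequal base rates, a classifier can satisfy one 
criterion while violating the other:

import numpy as np

def demographic_parity_difference(y_pred, group):
    # Difference in positive prediction rates between groups.
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_difference(y_true, y_pred, group):
    # Difference in true positive rates (recall) between groups.
    tprs = [y_pred[(group == g) & (y_true == 1)].mean()
            for g in np.unique(group)]
    return max(tprs) - min(tprs)

# Toy data: group 0 has a higher base rate than group 1.
y_true = np.array([1, 1, 1, 0,  1, 0, 0, 0])
y_pred = np.array([1, 1, 0, 0,  1, 1, 0, 0])
group  = np.array([0, 0, 0, 0,  1, 1, 1, 1])

print(demographic_parity_difference(y_pred, group))         # 0.0  (parity holds)
print(equal_opportunity_difference(y_true, y_pred, group))  # 0.33 (TPRs differ)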

I think it would also be great to provide a tool to change predictions so that 
they are fair according to one of these criteria.
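
As a very rough, hypothetical sketch of what such a tool could look like (this 
is not an existing scikit-learn estimator), one could pick per-group score 
thresholds so that every group ends up with the same positive prediction rate, 
i.e. demographic parity:

import numpy as np

class DemographicParityThreshold:
    # Hypothetical post-processor: per-group thresholds chosen so that each
    # group gets (roughly) the same share of positive predictions.
    def __init__(self, positive_rate=0.5):
        self.positive_rate = positive_rate  # target share of positives

    def fit(self, scores, group):
        # Threshold = the (1 - rate) quantile of each group's scores.
        self.thresholds_ = {
            g: np.quantile(scores[group == g], 1.0 - self.positive_rate)
            for g in np.unique(group)
        }
        return self

    def predict(self, scores, group):
        thresholds = np.array([self.thresholds_[g] for g in group])
        return (scores >= thresholds).astype(int)

# Usage with any probabilistic classifier's scores, e.g.:
# scores = clf.predict_proba(X)[:, 1]
# y_fair = DemographicParityThreshold(0.3).fit(scores, group).predict(scores, group)

Other criteria (equalized odds, calibration, ...) would need a different 
adjustment, and ties in the scores mean the rates only match approximately, 
but that is the general post-processing idea.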

I don't think there is consensus yet that these metrics are "good", in 
particular since they conflict with each other, so I believe people are trying 
to go beyond them.

Cheers,
Andy

On 10/29/18 1:36 AM, Feldman, Joshua wrote:
> Hi,
>
> I was wondering if there's any interest in adding fairness metrics to 
> sklearn. Specifically, I was thinking of implementing the metrics 
> described here:
>
> https://dsapp.uchicago.edu/projects/aequitas/
>
> I recognize that these metrics are extremely simple to calculate, but 
> given that sklearn is the standard machine learning package in python, 
> I think it would be very powerful to explicitly include algorithmic 
> fairness - it would make these methods more accessible and, as a 
> matter of principle, demonstrate that ethics is part of ML and not an 
> afterthought. I would love to hear the group's thoughts and whether there's 
> interest in such a feature.
>
> Thanks!
>
> Josh
>
