
Model interpretation, fairness and trust

Which ethical criteria does my model satisfy?

There are certain impossibility theorems about which fairness criteria you can satisfy simultaneously. However, that doesn’t mean you can’t fall well short of the impossibility frontier on the side of unfairness (or straight-up idiocy) if you don’t work at it. Consider Automated Inference on Criminality using Face Images (WuZh16):

[…]we find some discriminating structural features for predicting criminality, such as lip curvature, eye inner corner distance, and the so-called nose-mouth angle. Above all, the most important discovery of this research is that criminal and non-criminal face images populate two quite distinctive manifolds. The variation among criminal faces is significantly greater than that of the non-criminal faces. The two manifolds consisting of criminal and non-criminal faces appear to be concentric, with the non-criminal manifold lying in the kernel with a smaller span, exhibiting a law of normality for faces of non-criminals. In other words, the faces of general law-biding public have a greater degree of resemblance compared with the faces of criminals, or criminals have a higher degree of dissimilarity in facial appearance than normal people.

Oh, and what would you be happy for your local law enforcement authority to take home from this?

Maybe the in-progress textbook by Solon Barocas, Moritz Hardt and Arvind Narayanan, Fairness and Machine Learning, will have something to say.

Think pieces on fairness in models in practice

Chris Tucchio, at Crunch Conf, makes some points about allocative/procedural fairness and net utility versus group rights.

If we choose to service Hyderabad with no disparities, we’ll run out of money and stop serving Hyderabad. The other NBFCs won’t.

Net result: Hyderabad is redlined by competitors and still gets no service.

Our choice: Keep the fraudsters out, utilitarianism over group rights.

He does a good job of explaining some impossibility theorems via examples, especially KlMR16.
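The flavour of the KlMR16 trade-off can be shown with a toy calculation (the numbers below are illustrative, not from the paper): a score that is calibrated within each group cannot also equalize false positive and false negative rates across groups when the groups have different base rates.

```python
# Sketch of the Kleinberg/Mullainathan/Raghavan trade-off (KlMR16).
# A score is "calibrated" if, within each score bin, the fraction of
# true positives equals the score. The group compositions below are
# invented for illustration.

def error_rates(bins, threshold=0.5):
    """bins: list of (score, count) pairs. Calibration means that
    score * count individuals in each bin are truly positive."""
    tp = fp = fn = tn = 0.0
    for score, count in bins:
        pos = score * count          # true positives in this bin (by calibration)
        neg = count - pos
        if score >= threshold:       # predicted positive
            tp += pos
            fp += neg
        else:                        # predicted negative
            fn += pos
            tn += neg
    return fp / (fp + tn), fn / (fn + tp)   # (FPR, FNR)

# Same calibrated score bins, different mixes, hence different base rates.
group_a = [(0.2, 600), (0.8, 400)]   # base rate 0.44
group_b = [(0.2, 200), (0.8, 800)]   # base rate 0.68

fpr_a, fnr_a = error_rates(group_a)
fpr_b, fnr_b = error_rates(group_b)
print(f"Group A: FPR={fpr_a:.3f}, FNR={fnr_a:.3f}")
print(f"Group B: FPR={fpr_b:.3f}, FNR={fnr_b:.3f}")
```

Both groups are scored by the same calibrated rule, yet group B’s false positive rate is several times group A’s, simply because of the differing base rates. That is the impossibility in miniature.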

Refs

AgYu08
Aggarwal, C. C., & Yu, P. S. (2008) A General Survey of Privacy-Preserving Data Mining Models and Algorithms. In C. C. Aggarwal & P. S. Yu (Eds.), Privacy-Preserving Data Mining (pp. 11–52). Springer US DOI.
BaSe16
Barocas, S., & Selbst, A. D. (2016) Big Data’s Disparate Impact (SSRN Scholarly Paper No. ID 2477899). Rochester, NY: Social Science Research Network
Burr16
Burrell, J. (2016) How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 2053951715622512. DOI.
DHPR12
Dwork, C., Hardt, M., Pitassi, T., Reingold, O., & Zemel, R. (2012) Fairness Through Awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference (pp. 214–226). New York, NY, USA: ACM DOI.
FFMS15
Feldman, M., Friedler, S. A., Moeller, J., Scheidegger, C., & Venkatasubramanian, S. (2015) Certifying and Removing Disparate Impact. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 259–268). New York, NY, USA: ACM DOI.
HaPS16
Hardt, M., Price, E., & Srebro, N. (2016) Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems (pp. 3315–3323).
KRPH17
Kilbertus, N., Rojas-Carulla, M., Parascandolo, G., Hardt, M., Janzing, D., & Schölkopf, B. (2017) Avoiding Discrimination through Causal Reasoning. ArXiv:1706.02744 [Cs, Stat].
KlMR16
Kleinberg, J., Mullainathan, S., & Raghavan, M. (2016) Inherent Trade-Offs in the Fair Determination of Risk Scores.
Mico17
Miconi, T. (2017) The impossibility of “fairness”: a generalized impossibility result for decisions.
Swee13
Sweeney, L. (2013) Discrimination in Online Ad Delivery. Queue, 11(3), 10:10–10:29. DOI.
WPPA16
Wisdom, S., Powers, T., Pitton, J., & Atlas, L. (2016) Interpretable Recurrent Neural Networks Using Sequential Sparse Recovery. In Advances in Neural Information Processing Systems 29.
WuZh16
Wu, X., & Zhang, X. (2016) Automated Inference on Criminality using Face Images. ArXiv:1611.04135 [Cs].
ZWSP13
Zemel, R., Wu, Y., Swersky, K., Pitassi, T., & Dwork, C. (2013) Learning Fair Representations. In Proceedings of the 30th International Conference on Machine Learning (ICML-13) (pp. 325–333).