Closing a gap: a multidisciplinary reflection on algorithmic discrimination

Authors

P. Dellunde, O. Pujol, J. Vitrià

DOI: https://doi.org/10.6018/daimon.562811
Keywords: discrimination, algorithms, artificial intelligence, metrics

Funding agencies

  • This publication was partially funded by projects 2021 SGR 01104 and 2021 SGR 00754 of the Generalitat de Catalunya and by the H2020-MSCA-RISE-2020 project MOSAIC (Grant Agreement 101007627).

Abstract

This article addresses the concept of algorithmic discrimination from a joint perspective of philosophy and computer science, with the aim of establishing a common framework for discussion that can advance the deployment of artificial intelligence in democratic societies. It presents a non-normative definition of discrimination, then analyses and contextualises the concept of an algorithm through an intentional approach, framing it within the decision-making process, identifying the sources of discrimination and the concepts behind its quantification, and concluding with some limits and challenges.
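The abstract's reference to the concepts behind quantifying discrimination can be made concrete with a small illustrative sketch (not drawn from the article itself): two group-fairness quantities that are standard in the fairness-metrics literature, demographic parity difference and the disparate impact ratio. All function names and data below are hypothetical examples.

```python
# Illustrative sketch of two common group-fairness metrics.
# `preds` are binary decisions (1 = favourable outcome);
# `group` marks membership in one of two demographic groups.

def selection_rate(preds, group, value):
    """Fraction of favourable decisions among members of one group."""
    members = [p for p, g in zip(preds, group) if g == value]
    return sum(members) / len(members)

def demographic_parity_difference(preds, group):
    """Absolute gap between the two groups' selection rates (0.0 = parity)."""
    return abs(selection_rate(preds, group, 0) - selection_rate(preds, group, 1))

def disparate_impact_ratio(preds, group):
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    r0 = selection_rate(preds, group, 0)
    r1 = selection_rate(preds, group, 1)
    return min(r0, r1) / max(r0, r1)

# Toy data: group 0 is selected at rate 0.8, group 1 at rate 0.4.
preds = [1, 1, 0, 1, 1, 0, 1, 0, 1, 0]
group = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(demographic_parity_difference(preds, group))  # 0.4
print(disparate_impact_ratio(preds, group))         # 0.5
```

Such metrics capture only one notion of fairness; as the article discusses, different quantifications can conflict with one another and with individual-level notions of non-discrimination.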

References

AlgorithmWatch. (2020), “Automating Society”. https://automatingsociety.algorithmwatch.org/

Aristotle (1984), Nicomachean Ethics, Princeton University Press, Vol. 3, 1131a10–b15.

Baeza-Yates, R. (2018), “Bias on the web”. Commun. ACM. 61, pp. 54–61.

Binns, R. (2020), “On the apparent conflict between individual and group fairness”, Proceedings of the ACM Conference on Fairness, Accountability, and Transparency.

Brooks, D. (2013), "Opinion | The Philosophy of Data". New York Times, https://www.nytimes.com/2013/02/05/opinion/brooks-the-philosophy-of-data.html

Buckner, C. (2019), Deep learning: A philosophical introduction. Philosophy compass, 14(10), e12625.

Carey, A.; Wu, X. (2022), “The Causal Fairness Field Guide: Perspectives from social and formal sciences”, Frontiers in Big Data, Vol 5.

Chouldechova, A. (2017), “Fair prediction with disparate impact: A study of bias in recidivism prediction instruments”. Big data, 5(2), pp. 153-163.

Creel, K.; Hellman D. (2022), "The algorithmic Leviathan: arbitrariness, fairness, and opportunity in algorithmic decision-making systems." Canadian Journal of Philosophy 52.1. pp. 26-43.

Dawes, R. M.; Faust, D.; Meehl, P. E. (1989). “Clinical versus actuarial judgment”. Science, 243(4899), pp. 1668-1674.

Dennett, D. C. (1987), The intentional stance. MIT Press.

Donovan, K.P.; Park, E. (2019), “Perpetual debt in the Silicon Savannah”, Boston Review.

Eidelson, B. (2015), Discrimination and Disrespect. Oxford University Press.

European Parliamentary Research Service (2019), “A Governance Framework for Algorithmic Accountability and Transparency”. Retrieved from https://www.europarl.europa.eu/stoa/en/document/EPRS_STU(2019)624262

Fazelpour, S.; Danks, D. (2021), "Algorithmic bias: Senses, sources, solutions", Philosophy Compass, 16(8), e12760.

Fernández-Loría, C.; Provost, F. (2022), "Causal decision making and causal effect estimation are not the same… and why it matters", INFORMS Journal on Data Science, 1(1), pp. 4-16.

Guersenzvaig, A.; Casacuberta, D. (2022), “La quimera de la objetividad algorítmica: dificultades del aprendizaje automático en el desarrollo de una noción no normativa de salud”, IUS ET SCIENTIA, Vol. 8, N. 1, pp. 35-56.

Harari, Y. N. (2015), Homo Deus: A Brief History of Tomorrow. Random House. Spanish translation published by Debate.

Hardt, M.; Recht, B. (2022), Patterns, predictions, and actions: Foundations of machine learning. Princeton University Press.

Jacobs, A. Z.; Wallach, H. (2021), “Measurement and fairness”. In Proceedings of the 2021 ACM Conference on fairness, accountability, and transparency, pp. 375-385.

Johnson, R. A.; Zhang, S. (2022) “What is the Bureaucratic Counterfactual? Categorical versus Algorithmic Prioritization in US Social Policy”. In 2022 ACM Conference on Fairness, Accountability, and Transparency, pp. 1671-1682.

Kearns, M.; Roth, A. (2019), The ethical algorithm: The science of socially aware algorithm design. Oxford University Press.

Langley, P.; Simon H. A. (1995), "Applications of machine learning and rule induction." Communications of the ACM 38.11, pp.54-64.

Lazar, S. (2022), “Legitimacy, Authority, and the Political Value of Explanations”. arXiv preprint arXiv:2208.08628.

Lee, N. T. (2018), “Detecting racial bias in algorithms and machine learning”. Journal of Information, Communication and Ethics in Society, 16(3), pp. 252-260.

Lippert-Rasmussen, K. (2014), Born Free and Equal?. Oxford University Press.

Martin, K.; Waldman, A. (2022), “Are algorithmic decisions legitimate? The effect of process and outcomes on perceptions of legitimacy of AI decisions”. Journal of Business Ethics, pp. 1-18.

Mehrabi, N.; Morstatter, F.; Saxena, N.; Lerman, K.; Galstyan, A. (2021), "A Survey on Bias and Fairness in Machine Learning". ACM Comput. Surv. 54, 6, Article 115, pp. 1-35.

Mitchell, T. M. (1980), “The need for biases in learning generalizations”. New Jersey: Department of Computer Science, Laboratory for Computer Science Research, Rutgers University, pp. 184-191.

Mitchell, S.; Potash, E.; Barocas, S.; D'Amour, A.; Lum, K. (2021), “Algorithmic fairness: Choices, assumptions, and definitions.” Annual Review of Statistics and Its Application, 8, pp. 141-163.

Mittelstadt, B. D.; Allo, P.; Taddeo, M.; Wachter, S.; Floridi, L. (2016), "The ethics of algorithms: mapping the debate". Big Data & Society, pp. 1-26.

Mittelstadt, B.; Wachter, S.; Russell, C. (2023), “The Unfairness of Fair Machine Learning: Levelling down and strict egalitarianism by default”. Available at SSRN: https://ssrn.com/abstract=4331652

Narayanan, A. (2022), "The limits of the quantitative approach to discrimination." James Baldwin lecture [transcript], Princeton University.

Pearl, J.; Mackenzie, D. (2018), The book of why: the new science of cause and effect. Basic books.

Pratt, L. Y. (1993), "Discriminability-based transfer between neural networks", NIPS Conference: Advances in Neural Information Processing Systems 5, Morgan Kaufmann Publishers, pp. 204-211.

Rajkomar, A.; Dean, J.; Kohane, I. (2019), "Machine learning in medicine", New England Journal of Medicine, 380(14), pp. 1347-1358.

Rudin C. (2019), “Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead”. Nat. Mach. Intell. 1(5), pp. 206–15.

Ruf, B.; Detyniecki, M. (2021), “Towards the Right Kind of Fairness in AI”, arXiv:2102.08453v7.

Savage, L. J. (1954), The Foundations of Statistics, New York: John Wiley and Sons.

Lee, M. S. A.; Floridi, L.; Singh, J. (2021), "Formalising trade-offs beyond algorithmic fairness: lessons from ethical philosophy and welfare economics", AI and Ethics, 1, pp. 529-544.

Simon, J.; Wong, P.; Rieder, G. (2020), "Algorithmic bias and the Value Sensitive Design approach". Internet Policy Review, 9(4):1-16.

Spector, A.; Norvig, P.; Wiggins, C.; Wing, J. M. (2022), Data Science in Context: Foundations, Challenges, Opportunities. Cambridge University Press.

Sternberger, D. (1968), "Legitimacy" in International Encyclopedia of the Social Sciences (ed. D.L. Sills) New York: Macmillan, Vol. 9, p. 244.

Thurman, N.; Lewis, S. C.; Kunert, J. (2019), “Algorithms, automation, and news", Digital Journalism, 7(8), pp. 980-992.

Tsamados, A.; Aggarwal, N.; Cowls, J.; Morley, J.; Roberts, H.; Taddeo, M.; Floridi, L. (2022), "The ethics of algorithms: key problems and solutions". AI & Society, 37, pp. 215–230.

Umbrello, S.; van de Poel, I. (2021), "Mapping value sensitive design onto AI for social good principles". AI and Ethics, 1, pp. 283-296.

Unceta, I. (2020), “Environmental Adaptation and Differential Replication in Machine Learning”, Entropy, 22(10), 1122.

Veale, M.; Binns, R. (2017), “Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data”. Big Data & Society, 4(2), 2053951717743530.

Verma, S.; Rubin, J. (2018), "Fairness definitions explained", IEEE/ACM International Workshop on Software Fairness.

von Neumann, J.; Morgenstern, O. (1944), Theory of Games and Economic Behavior, Princeton: Princeton University Press.

Wachter, S. (2022), "The theory of artificial immutability: Protecting algorithmic groups under anti-discrimination law." arXiv preprint arXiv:2205.01166.

Zerilli, J. (2022), “Explaining Machine Learning Decisions”. Philosophy of Science, 89(1), pp. 1-19. doi:10.1017/psa.2021.13

Published
01-09-2023
How to cite
Dellunde, P., Pujol, O., & Vitrià, J. (2023). Cerrando una brecha: una reflexión multidisciplinar sobre la discriminación algorítmica. Daimon Revista Internacional de Filosofia, (90), 63–80. https://doi.org/10.6018/daimon.562811
Issue
Section
MONOGRAPH on Machine learning as a new dataist positivism?