Influenced speech: machine learning and hate speech

Author

Jaimes, F. J.

DOI: https://doi.org/10.6018/daimon.562091
Keywords: machine learning, hate speech, oppression, algorithmic bias

Abstract

This paper addresses the issue of discriminatory computer programs from the perspective of the philosophy of language. In this discipline, the literature on hate speech has focused its analysis on the effects of such speech on oppressed groups. The central aim of this paper is to develop a new notion, influenced speech, which explains what the oppressor group is led to assert on the basis of systematic oppression. Influenced speech thus makes it possible both to explain the social reproduction of hate speech and to theoretically frame the discriminatory statements produced by the aforementioned computer programs.


References

Alcoff, L. (2010). Epistemic identities. Episteme, 7 (2), 128-137. https://doi.org/10.3366/epi.2010.0003

Alonso Alemani, L., Benotti, L., González, L., Sánchez, J., Busaniche, B., Halvorsen, A. and Bordone, M. (2022). Una herramienta para superar las barreras técnicas para la evaluación de sesgos en las tecnologías del lenguaje humano. Retrieved from the Fundación Vía Libre website. https://www.vialibre.org.ar/wp-content/uploads/2022/08/vialibre_Una-herramienta-para-superar-las-barreras-tecnicas.pdf

Anderson, E. (2012). Epistemic justice as a virtue of social institutions. Social Epistemology, 26 (2), 163-173. https://doi.org/10.1080/02691728.2011.652211

Bianchi, C. (2020). Discursive injustice: the role of uptake. Topoi, 40 (1), 181-190. https://doi.org/10.1007/s11245-020-09699-x

Crawford, K. (2021). Atlas of AI. Yale University Press.

Danks, D. and London, A. (2017). Algorithmic bias in autonomous systems. Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), Australia, 17, 4691-4697.

Fazelpour, S. and Danks, D. (2021). Algorithmic bias: senses, sources, solutions. Philosophy Compass, 16 (8), e12760. https://doi.org/10.1111/phc3.12760

Fricker, M. (2007). Epistemic injustice. Oxford University Press.

Fricker, M. (2013). Epistemic justice as a condition of political freedom? Synthese, 190 (7), 1317-1332. https://doi.org/10.1007/s11229-012-0227-3

García, M. (2016). Racist in the machine: the disturbing implications of algorithmic bias. World Policy Journal, 23 (4), 111-117. https://doi.org/10.1215/07402775-3813015

Gelber, K. (2019). Differentiating hate speech: a systemic discrimination approach. Critical Review of International Social and Political Philosophy, 22 (3), 607-622. https://doi.org/10.1080/13698230.2019.1576006

Géron, A. (2019). Hands-on machine learning with Scikit-Learn, Keras, and TensorFlow. O’Reilly.

Hajian, S. and Domingo-Ferrer, J. (2013). A methodology for direct and indirect discrimination prevention in data mining. IEEE Transactions on Knowledge and Data Engineering, 25 (7), 1445-1459. https://doi.org/10.1109/TKDE.2012.72

Hern, A. (2016, March 24). Microsoft scrambles to limit PR damage over abusive AI bot Tay. The Guardian. https://www.theguardian.com/technology/2016/mar/24/microsoft-scrambles-limit-pr-damage-over-abusive-ai-bot-tay Accessed August 20, 2022.

Hesni, S. (2018). Illocutionary frustration. Mind, 127 (508), 947-976. https://doi.org/10.1093/mind/fzy033

Hornsby, J. and Langton, R. (1998). Free speech and illocution. Legal Theory, 4, 21-37. https://doi.org/10.1017/S1352325200000902

Johnson, G. (2020). Algorithmic bias: on the implicit biases of social technology. Synthese, 198 (10), 9941-9961. https://doi.org/10.1007/s11229-020-02696-y

Langton, R. (1993). Speech acts and unspeakable acts. Philosophy and Public Affairs, 22 (4), 293-330.

Langton, R. (2012). Beyond belief: pragmatics in hate speech and pornography, in I. Maitra and M. K. McGowan (eds.), Speech and harm: controversies over free speech (pp. 72-93). Oxford University Press.

Langton, R. and West, C. (1999). Scorekeeping in a pornographic language game. Australasian Journal of Philosophy, 77 (3), 303-319. https://doi.org/10.1080/00048409912349061

Lee, H. (1960). To kill a mockingbird. HarperCollins.

Lewis, D. (1983). Scorekeeping in a language game, in D. Lewis, Philosophical papers: volume 1 (pp. 233-249). Oxford University Press.

Maitra, I. and McGowan, M. (2010). On silencing, rape, and responsibility. Australasian Journal of Philosophy, 88 (1), 167-172. https://doi.org/10.1080/00048400902941331

Marques, T. (2022). The expression of hate speech. Journal of Applied Philosophy, 10, 1-29. https://doi.org/10.1111/japp.12608

Matsuda, M. (1993). Public response to racist speech, in Matsuda, M., Lawrence, C., Delgado, R. and Williams Crenshaw, K. (eds.), Words that wound: critical race theory, assaultive speech and the first amendment (pp. 17-52). Westview Press.

McGowan, M. (2003). Conversational exercitives and the force of pornography. Philosophy & Public Affairs, 31 (2), 155-189. https://doi.org/10.1111/j.1088-4963.2003.00155.x

McGowan, M. (2004). Conversational exercitives: something else we do with our words. Linguistics and Philosophy, 27, 93-111. https://doi.org/10.1023/B:LING.0000010803.47264.f0

McGowan, M. (2019). Just words. Oxford University Press.

McKinney, R. (2016). Extracted speech. Social Theory and Practice, 42 (2), 258-284. https://doi.org/10.5840/soctheorpract201642215

Medina, J. (2013). The epistemology of resistance. Oxford University Press.

Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K. and Galstyan, A. (2019). A survey on bias and fairness in machine learning. ACM Computing Surveys, 54 (6), 1-35. https://doi.org/10.1145/3457607

Minghella, A. (Director) (1999). The talented Mr. Ripley [Film]. Paramount Pictures.

Noble, S. (2018). Algorithms of oppression. New York University Press.

Peet, A. (2017). Epistemic injustice in utterance interpretation. Synthese, 194 (9), 3421-3443. https://doi.org/10.1007/s11229-015-0942-7

Pérez, E. (2021). Cuando traducimos un idioma con pronombres sin género como el euskera o el húngaro, Google asume el masculino o femenino. Retrieved from Xataka. https://www.xataka.com/robotica-e-ia/cuando-traducimos-idioma-genero-neutro-como-euskera-hungaro-google-asume-masculino-femenino

Ramírez-Bustamante, N. and Páez, A. (forthcoming). Análisis jurídico de la discriminación algorítmica en los procesos de selección laboral, in Angel, N. and Urueña, R. (eds.), Derecho, poder y datos: aproximaciones críticas al derecho y las nuevas tecnologías. Ediciones Uniandes.

Russell, S. and Norvig, P. (2020). Artificial intelligence (4th ed.). Pearson.

Sandvig, C., Hamilton, K., Karahalios, K. and Langbort, C. (2014). An algorithm audit, in Peña, S., Eubanks, V. and Barocas, S. (eds.), Data and discrimination: collected essays (pp. 6-10). Open Technology Institute.

Stalnaker, R. (2002). Common ground. Linguistics and Philosophy, 25 (5/6), 701-721.

Stalnaker, R. (2014). Context. Oxford University Press.

Stair, R. and Reynolds, G. (2010). Principios de sistemas de información (9th ed.). Cengage Learning.

Suresh, H. and Guttag, J. (2021). A framework for understanding sources of harm throughout the machine learning life cycle. Proceedings of EAAMO ’21: Equity and Access in Algorithms, Mechanisms, and Optimization, United States, 1-9.

Veale, M. and Binns, R. (2017). Fairer machine learning in the real world: mitigating discrimination without collecting sensitive data. Big Data & Society, 4 (2), 1-17. https://doi.org/10.1177/2053951717743530

Published: 01-09-2023

How to cite: Jaimes, F. J. (2023). Influenced speech: machine learning and hate speech. Daimon Revista Internacional de Filosofia, (90), 45–61. https://doi.org/10.6018/daimon.562091

Section: Automatic Learning as New Dataist Positivism?