GenAI errors are different from human errors.

This article was published in the collection: “Towards AI Literacy: 101+ Creative and Critical Practices, Perspectives and Purposes” (Abegglen, S., Nerantzi, C., Martínez-Arboleda, A., Karatsiori, M., Atenas, J., & Rowell, C. (2024). Toward AI Literacy: 101+ Creative and Critical Practices, Perspectives, and Purposes #creativeHE). The collection presents creative and critical practices, perspectives and purposes of educators, researchers and students on AI in education. The original version can be read at the following link.

Below we share the article written by Luciana Benotti and Nair Carolina Mazzeo.

GenAI errors are different from human errors

GenAI is increasingly used in professional and personal contexts and continues to transform a wide range of sectors. GenAI literacy therefore needs to include awareness of the errors these technologies make: everybody needs to understand what kinds of errors GenAI is prone to. In particular, it is essential to know which social biases are embedded in these models, because those biases replicate stereotypes that can further stigmatize the most vulnerable groups. A person can lie intentionally; AI has no intention of lying when it generates false information. These models learn, among other things, by recognizing which words or pixels repeatedly appear in similar contexts, and may treat them as synonyms. A language model, which is a type of AI, can generate false information when it uses as synonyms words that are not synonyms. For example, it may treat ‘John’ and ‘Peter’ as interchangeable, because it does not have the tools to interpret that they refer to different people. Raising awareness of the reasons behind these mistakes and biases, and of the ethical implications associated with generated content, is the beginning of true literacy.
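To make the point about synonyms concrete, here is a minimal sketch in Python with toy co-occurrence vectors we invented for illustration (they do not come from any real model): two names that appear in nearly identical contexts end up with nearly identical vectors, so a purely distributional model has little signal to tell them apart.

```python
# Toy illustration (invented numbers, not a real trained model): names that
# occur in near-identical contexts get near-identical vectors, so a purely
# distributional model has little signal to tell them apart.
import numpy as np

# Hypothetical co-occurrence counts with four context words.
context_words = ["said", "teacher", "hospital", "goal"]
vectors = {
    "John":  np.array([12.0, 5.0, 3.0, 7.0]),
    "Peter": np.array([11.0, 6.0, 2.0, 8.0]),
    "ball":  np.array([0.0, 0.0, 0.0, 15.0]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# 'John' and 'Peter' look almost interchangeable to the model,
# even though they refer to different people.
print("John vs Peter:", round(cosine(vectors["John"], vectors["Peter"]), 3))
print("John vs ball: ", round(cosine(vectors["John"], vectors["ball"]), 3))
```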

At the ethics team of the Vía Libre Foundation we have been immersed in investigating these failures, as well as in education and communication about these problems in generative AI models. We are currently planning professional development courses for high school teachers, who can then bring the content into their classrooms. Our course is aimed especially at teachers trained in Comprehensive Sexual Education, a program implemented and mandated by law in the Argentine educational system. Their training gives them knowledge about various social biases and about phenomena such as bullying, trolling, cyberbullying and cancel culture. These teachers arrive with an awareness of these problems and of their impact both in the classroom and on social networks. To complement these processes, our team developed a tool called EDIA (Stereotypes and Discrimination in Artificial Intelligence). EDIA makes it possible, for example, to interactively evaluate the biases in different language models preloaded on the platform; a sketch of the general idea follows.
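As a rough illustration of the general idea behind this kind of probing (this is a sketch written for this article, not EDIA's implementation; the model name, sentence and candidate words are our own example choices), one can compare how likely a Spanish masked language model finds different completions of the same sentence:

```python
# Sketch of bias probing with a masked language model (not the EDIA tool).
# Assumes the Hugging Face 'transformers' library and the public Spanish
# BERT model 'dccuchile/bert-base-spanish-wwm-cased'; the sentence and
# candidate words below are invented for illustration.
from transformers import pipeline

fill = pipeline("fill-mask", model="dccuchile/bert-base-spanish-wwm-cased")

# Template with a masked slot; we ask the model to score specific words.
sentence = "Las personas pobres son [MASK]."
candidates = ["peligrosas", "trabajadoras"]

# A large gap in favor of the stereotyped word suggests a learned bias.
for result in fill(sentence, targets=candidates):
    print(f"{result['token_str']:>12}  score={result['score']:.4f}")
```

EDIA wraps comparisons of this kind in an interactive interface, so that exploring them requires no programming.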

Most AI models are aligned with the perspectives of native English speakers from the Northern Hemisphere. EDIA was created from a Latin American perspective and designed and implemented in Spanish. Although similar tools exist in English, Spanish differs structurally from English, for example in grammatical gender. This specific approach therefore seeks to avoid perpetuating the linguistic and cultural biases rooted in those tools' place of origin and language.

Related links

– An introductory video of the EDIA tool: https://www.youtube.com/watch?v=CJowS8Y5NWY&ab_channel=Fundaci%C3%B3nV%C3%ADaLibre

– EDIA Tool: https://ia.vialibre.org.ar

– Repercussions of the use of EDIA by adolescents in a workshop held at Khipu: https://www.vialibre.org.ar/menstruacion-salud-y-sexualidad-representaciones-desde-la-inteligencia-artificial/

* This article was published prior to the start of the teacher training, which began on June 1, 2024. You can see more at the following link.
