Workshop at RightsCon

More than 100 people took part in the workshop “EDIA, assessing Stereotypes and Discrimination in Artificial Intelligence”, given by Fundación Vía Libre's ethics team in Costa Rica as part of the RightsCon global summit.

Beatriz Busaniche and Guido Ivetta traveled as part of the AI ethics team to the twelfth edition of RightsCon, one of the most important global summits on technology, human rights, artificial intelligence and copyright. There, on Tuesday, June 6, in the Tortuga room, they led the workshop “EDIA, assessing Stereotypes and Discrimination in Artificial Intelligence”.

The event opened with an introduction to the topic by Beatriz Busaniche on identifying biases and stereotypes in Artificial Intelligence; members of the FAIR Network then shared how they became involved in the Fundación Vía Libre project behind the tool, called EDIA.

Guido Ivetta then presented the tool together with a user's guide, encouraging workshop participants to make it their own by exploring biases in language models and word embeddings. To frame the hands-on session, he first defined these two concepts:

Language models are Artificial Intelligence models trained to autocomplete the text a person is writing. For example: what happens when we type the phrase “the poor are” into the Google search engine?

He also clarified that autocomplete is not the only form these models take; there are others, such as the well-known ChatGPT, which was additionally trained to follow the user's instructions.
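To make the idea concrete, here is a minimal sketch (not part of EDIA itself) of how one can ask a masked language model which words it considers most likely to complete a stereotyped prompt; the model name is an assumption chosen only for illustration.

```python
# Illustrative sketch, not EDIA's code: query a masked language model for
# its most likely completions of a stereotyped prompt.
from transformers import pipeline

# Model choice is an assumption for this example; any masked LM would work.
fill = pipeline("fill-mask", model="bert-base-uncased")

for prediction in fill("the poor are [MASK]."):
    # Each candidate word comes with the probability the model assigns to it,
    # which is what makes stereotyped associations observable.
    print(f'{prediction["token_str"]:>12}  {prediction["score"]:.3f}')
```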

He went on to define:

Word embeddings: a method of representing the meaning of words in which it is possible to observe, for example, how the AI places the concept it holds of each word closer to or farther from others, yielding a visual map as a result.
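As a rough illustration of that idea (again, not EDIA's own code), the proximity between two words in an embedding space can be measured with cosine similarity; the pretrained vectors named below are an assumption made only for the example.

```python
# Illustrative sketch, not EDIA's code: measure how near or far two words
# sit in an embedding space using cosine similarity.
import numpy as np
import gensim.downloader as api

# Vector set chosen only for illustration; EDIA works with its own spaces.
vectors = api.load("glove-wiki-gigaword-50")

def similarity(word_a: str, word_b: str) -> float:
    a, b = vectors[word_a], vectors[word_b]
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Values closer to 1.0 mean the model treats the concepts as more related,
# which is how word proximity can reveal stereotyped associations.
print(similarity("nurse", "woman"), similarity("nurse", "man"))
```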

The AI ethics team made the tool available so attendees could detect, inspect and share biases in natural language processing systems built on the two kinds of models described above.

Through this experience, participants interacted with the tool created by Vía Libre and observed how likely a system is to generate discriminatory content, with a stated objective: to keep replicating its use in their own work and countries of origin, promoting a less racist and more empathetic present in these technologies as well.

From Fundación Vía Libre, we thank everyone who came to share this day and learn about EDIA.
