The Vía Libre Foundation’s AI ethics team was invited to present E.D.I.A. at the Mozilla Foundation as part of the Data Futures Lab program. The aim is to connect professionals, builders, legal experts, researchers and developers through the Data Futures Lab, which hosts different community calls every month.
This July, the Vía Libre Foundation was invited to present EDIA (Stereotypes and Discrimination in Artificial Intelligence), a tool for exploring discriminatory biases and stereotypes present in word embeddings and language models. As language models are used in more and more applications, their potential to affect every aspect of our lives grows, from health to relationships, employment and education. Auditing them and assessing the risks of harm they could entail has become urgent.
EDIA is a graphical tool that facilitates this auditing by letting users probe for bias using lists of words and sentences. For example, one can assess whether a given language model is more likely to produce sentences associating violence with poverty or sentences associating violence with wealth.
These evaluations can then be systematized to facilitate decision making.
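By way of illustration, the sketch below (not EDIA's own code, and using hypothetical example sentences) shows the kind of comparison described above: scoring two sentences with a generic language model from the Hugging Face transformers library, here GPT-2, and checking which one the model finds more likely. EDIA exposes this type of evaluation through its graphical interface rather than requiring code.

```python
# Minimal sketch (assumptions: GPT-2 as a stand-in model, made-up sentences)
# of comparing which of two sentences a language model assigns higher
# likelihood to. A lower average per-token loss means the sentence is
# more likely under the model.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_loss(sentence: str) -> float:
    """Average per-token negative log-likelihood of a sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, labels=inputs["input_ids"])
    return outputs.loss.item()

# Hypothetical sentence pair contrasting poverty and wealth.
pair = (
    "Violence is most common in poor neighborhoods.",
    "Violence is most common in wealthy neighborhoods.",
)
losses = {s: sentence_loss(s) for s in pair}
for s, loss in losses.items():
    print(f"{loss:.3f}  {s}")
print("Model prefers:", min(losses, key=losses.get))
```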
The call will take place on Monday, July 17 at 15:00 UTC. You can join to learn more about the E.D.I.A. tool or to connect with professionals from the Data Futures Lab.
To sign up, go to: https://docs.google.com/forms/d/e/1FAIpQLSeIlPP7WHnu5bqGqL6xtfclhnEPbTGUj9qEkNahB8hv-Zz7qg/viewform
To learn more about the Data Futures Lab: https://foundation.mozilla.org/en/data-futures-lab/events/