Master's Program in Philosophy, History and Danish Studies

Date

4 March 2024

Time

10:00

Place

Denmark

Luciana Benotti will give the talk "Assessing bias in AI: Who decides how linguistic models discriminate?" within the framework of the Master's Program in Philosophy, History and Danish Studies.

AI systems are increasingly being used to support decision-making processes and everyday activities. Such processes may affect fundamental rights, such as the right to non-discrimination, and deploying these systems raises concerns about potential biases that may compromise those rights. This talk addresses the pressing question of who can assess and audit AI models to ensure that they align with ethical principles and safeguard against discrimination.

Existing studies reveal that linguistic representations derived from corpora can perpetuate harmful effects, including invisibilization, self-censorship, and deterrence. This bias, construed as systematic errors in downstream AI applications, disproportionately affects certain populations, particularly historically marginalized groups. While numerous techniques have been proposed to measure and mitigate bias in language models, current audits often require technical expertise beyond the reach of those with expertise in discrimination itself.
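To make concrete what such a measurement technique can look like, the sketch below implements a simple association score over word embeddings: it compares how strongly a target word leans toward one attribute set or another. The vectors and word lists are toy placeholders chosen for illustration; they do not reflect EDIA's internals or any particular audited model.

```python
# Minimal sketch of one common family of embedding bias measures:
# compare a target word's mean similarity to two attribute word sets.
# The vectors below are random placeholders; in practice they would come
# from a pretrained embedding model (e.g., fastText or word2vec).
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(target, attrs_a, attrs_b):
    """Mean similarity to set A minus mean similarity to set B.
    Values far from zero suggest the target word leans toward one set."""
    sim_a = np.mean([cosine(target, a) for a in attrs_a])
    sim_b = np.mean([cosine(target, b) for b in attrs_b])
    return sim_a - sim_b

rng = np.random.default_rng(0)
career_words = [rng.normal(size=4) for _ in range(3)]  # e.g., "salary", "office", ...
family_words = [rng.normal(size=4) for _ in range(3)]  # e.g., "home", "children", ...
target_word = rng.normal(size=4)                       # e.g., "engineer"

print(f"association score: {association(target_word, career_words, family_words):+.3f}")
```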

This presentation questions whether the reductionist approach of addressing social biases in language models solely through algorithmic calculations should be trusted. Instead, it advocates a focus on linguistic descriptions of discriminatory behavior as a tool for assessing bias in language models and Natural Language Processing (NLP) systems. The aim is to broaden the involvement of experts familiar with social complexities and of communities directly affected by AI systems. This inclusive participation is intended to make the assessment process transparent, accountable, and responsive to the needs of those most affected.

The talk will introduce the principles guiding the integration of discrimination experts into the bias assessment process. It critiques existing approaches, highlighting the need for a tool that facilitates the involvement of non-technical participants. EDIA is presented as a tool designed to bridge this gap. The session includes a demonstration of EDIA, giving attendees an opportunity to assess biases using the tool. The interactive demo, accessible at https://huggingface.co/spaces/vialibre/edia, showcases the default language models and word embeddings in English and Spanish. Furthermore, the open-source tool, available at https://github.com/fvialibre/edia, can be instantiated to explore various word embeddings and language models across 105 languages, including Danish. The discussion will also delve into the experiences gained through hands-on sessions with discrimination experts, offering insights into the practical application of EDIA.
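As a rough illustration of the kind of comparison such a demonstration enables, the following sketch queries a masked multilingual language model with a stereotype-sensitive sentence and compares the scores it assigns to different group terms. The model name, sentence, and candidate words are assumptions chosen for illustration, not EDIA's own interface; EDIA lets non-technical users drive similar comparisons without writing code.

```python
# Illustrative probe of a masked multilingual language model (not EDIA's API):
# ask the model to fill a slot in a stereotype-sensitive sentence and compare
# the probabilities it assigns to different group terms.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-multilingual-cased")

sentence = "The [MASK] takes care of the children at home."
candidates = ["woman", "man"]  # targets must be single tokens in the model's vocabulary

for result in fill(sentence, targets=candidates):
    print(f"{result['token_str']:>8}  score={result['score']:.4f}")
```

A large gap between the scores for the candidate terms in such a sentence is one kind of signal a discrimination expert might flag and describe linguistically, which is the style of assessment the talk advocates.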

The talk aims to encourage reflection and engagement in the ongoing discourse on bias assessment in NLP, emphasizing the role of interdisciplinary collaboration for the trustworthy development of AI systems.