Reports

  • A methodology to characterize bias and harmful stereotypes in natural language processing in Latin America

    In this paper we present a methodology that spells out how social scientists, domain experts, and machine learning experts can collaboratively explore biases and harmful stereotypes in word embeddings and large language models. Our methodology uses the software we implemented, available at https://huggingface.co/spaces/vialibre/edia.

  • A Guide to Challenges and Recommendations for the Implementation of a Fairer AI

    In this document we present a list of technical recommendations for the development of Artificial Intelligence (AI) projects, specifically Machine Learning-based systems. These recommendations are the result of structured interviews with people who work on practical applications of data-based systems in various roles and organizations within the Argentine technological ecosystem, and they were developed through the lens of our AI ethics team, composed of activists, social scientists and computer science researchers.

  • A tool to overcome technical barriers for bias assessment in human language technologies

    Automatic processing of language is becoming pervasive in our lives, often taking a central role in our decision making, such as choosing the wording of our messages and emails, translating what we read, or even holding full conversations with us. Word embeddings are a key component of modern natural language processing systems. They provide a representation of words that has boosted the performance of many applications, working as a semblance of meaning.

    Word embeddings seem to capture a semblance of the meaning of words from raw text, but at the same time they also distill stereotypes and societal biases, which are subsequently relayed to the final applications. Such biases can be discriminatory. It is very important to detect and mitigate these biases in order to prevent discriminatory behavior in automated processes, which, because of their scale, can be much more harmful than in the case of humans. There are currently many tools and techniques to detect and mitigate biases in word embeddings, but they present many barriers to the engagement of people without technical skills. However, most experts in bias, whether social scientists or people with deep knowledge of the contexts where bias is harmful, do not have such skills, and the technical barriers prevent them from engaging in bias detection.

    We have studied the barriers in existing tools and explored their possibilities and limitations with different kinds of users. Based on this exploration, we propose to develop a tool specifically aimed at lowering the technical barriers and providing the exploratory power needed to address the requirements of experts, scientists and people in general who are willing to audit these technologies.
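
    The kind of probe such a tool can expose is easy to illustrate. The following minimal sketch (in Python) is not the tool described above: it assumes a hypothetical embeddings dictionary mapping words to numpy vectors, and the anchor word lists are illustrative placeholders. It projects a target word onto the direction between two groups of anchor words, in the spirit of common embedding-bias probes.

    import numpy as np

    def normalize(v):
        # Scale a vector to unit length.
        return v / np.linalg.norm(v)

    def bias_score(word, embeddings, group_a, group_b):
        # Project a word onto the direction between two groups of anchor
        # words; positive scores lean toward group_a, negative toward group_b.
        vec_a = normalize(np.mean([embeddings[w] for w in group_a], axis=0))
        vec_b = normalize(np.mean([embeddings[w] for w in group_b], axis=0))
        direction = normalize(vec_a - vec_b)
        return float(np.dot(normalize(embeddings[word]), direction))

    # Illustrative usage with hypothetical anchor lists (embeddings must be
    # loaded beforehand from a pre-trained model):
    # female_terms = ["she", "woman", "her"]
    # male_terms = ["he", "man", "his"]
    # print(bias_score("nurse", embeddings, female_terms, male_terms))
    # print(bias_score("engineer", embeddings, female_terms, male_terms))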


  • Yearbook 2021

  • 2020: The Year in Review

  • Yearbook 2020

    In the atypical year of 2020, Vía Libre Foundation turned 20 years old.

    The year of the pandemic found us working, as always, on some of the areas we have addressed from the beginning: the promotion and use of Free Software; the debate on access to culture, knowledge and intellectual property; civil and political rights, especially the incorporation of technologies into electoral processes; and the defense and promotion of the right to privacy and informational self-determination. In recent years, we have added numerous debates about the impact of artificial intelligence on social, economic and cultural rights, and an agenda focused on promoting information security policies and protecting the community of infosec practitioners. This Yearbook shows what we did.