Reports

Yearbook 2024
Another year has come to an end with a long list of actions and projects at Fundación Vía Libre. As every year, summarising everything that has happened is overwhelming, but it also allows us to appreciate the immense capacity for action that the foundation has developed in recent years. None of this would be possible without a team of people who, day after day, put their heart and soul into devising more and better strategies to build a fairer digital ecosystem.

Yearbook 2023
We share a summary of the activities of Fundación Vía Libre during 2023.

Legal protection of inferred personal data
In this report we address the legal protection of inferred personal data, from their production to their use and impact, aiming to recognise their existence and account for their effects on everyday life. Through examples, we show how inferred data underpin decisions that can affect everything from privacy and autonomy to, potentially, the physical integrity of individuals. We then briefly review the regulatory situation in other countries before considering a possible regulation in Argentina that would guarantee rights.
With this document we seek to reflect on the importance of understanding that inferred data are personal data and, as such, should be regulated from a human rights perspective.

A methodology to characterize bias and harmful stereotypes in natural language processing in Latin America
In this paper we present a methodology that spells out how social scientists, domain experts, and machine learning experts can collaboratively explore biases and harmful stereotypes in word embeddings and large language models. Our methodology uses the software we implemented, available at https://huggingface.co/spaces/vialibre/edia.
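As a rough illustration of the kind of exploration such a methodology supports, the sketch below compares how close target words sit to different sets of anchor words in an embedding space. The word lists, vectors, and scoring function are hypothetical toy examples for demonstration only; they are not part of the EDIA tool.

```python
import numpy as np

# Toy illustration (not the EDIA implementation): probing a word embedding
# for stereotypical associations by comparing cosine similarities between
# target words and two sets of gendered anchor words. The vectors below are
# made up; a real analysis would load trained embeddings.
toy_vectors = {
    "she":      np.array([0.9, 0.1, 0.0]),
    "he":       np.array([0.1, 0.9, 0.0]),
    "nurse":    np.array([0.8, 0.2, 0.1]),
    "engineer": np.array([0.2, 0.8, 0.1]),
}

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def bias_score(word, group_a=("she",), group_b=("he",)):
    # Positive score: the word sits closer to group_a anchors than to group_b.
    a = np.mean([cosine(toy_vectors[word], toy_vectors[w]) for w in group_a])
    b = np.mean([cosine(toy_vectors[word], toy_vectors[w]) for w in group_b])
    return a - b

for word in ("nurse", "engineer"):
    print(f"{word}: bias score = {bias_score(word):+.3f}")
```

In this toy setup, a positive score for "nurse" and a negative score for "engineer" would reflect the kind of stereotypical association the methodology helps teams surface and discuss.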

A Guide to Challenges and Recommendations for the Implementation of a Fairer AI
In this document, we present a list of technical recommendations for the development of Artificial Intelligence (AI) projects, specifically machine-learning-based systems. These recommendations are the result of structured interviews with people who work on practical applications of data-based systems in various roles and organizations within the Argentine technological ecosystem, and they have been refined through the lens of our AI ethics team, composed of activists, social scientists and computer science researchers.
