“Artificial intelligence is out of context”

We start March reflecting on the impacts and consequences of AI from Denmark. Luciana Benotti, part of the AI ethics team, traveled to Copenhagen and participated in meetings of the Danish Philosophical Society. The discussion focused on the implications of rapid innovation pursued without weighing its consequences.


The first week of March saw the meeting of the Danish Philosophical Society, aimed at strengthening the role of philosophy in the country and its interaction with other disciplines. The society has existed since 1999, though it has published an annual book since 1964. For this meeting, Luciana Benotti, from the Via Libre AI ethics team and the Universidad Nacional de Córdoba, was invited as an advisor to the Dis-Trust project, funded by the Velux Foundation.


The Dis-Trust Project. Can we trust digital systems?
It is a question that transcends geographical boundaries. Regulation and ethics are only beginning to be considered as digital systems keep growing; how, and in what way? The project aims to generate new knowledge and strengthen digital literacy. Dis-Trust stands for Digitally Enhanced Social Reality and Environments of Trust, but it also alludes to the English word “distrust”, and indeed its objective is to assess levels of trust in digital systems. It works by evaluating various cases: Denmark has a highly digitized government, but that is not the only case study; the project also examines commercial systems such as Airbnb. It further investigates how trust relationships are formed with a digital technology, particularly when that technology has artificial intelligence as one of its components.


The meeting itself and a reflection
The visit unfolded in three parts: an initial meeting with those involved in Dis-Trust and its external advisors; the Society meeting, where Luciana gave a talk titled “Risks of generative AI: perspectives of Latin American research in AI” to over 100 people, mainly from Denmark but also from other countries; and finally a class for around 40 students in the Master’s program in Philosophy, History and Danish Studies at the University of Roskilde.

A presentation isn’t complete without audience feedback, and one comment that caught Luciana’s attention came from Micol Mieli, an Italian postdoctoral researcher. Mieli’s remarks delved into the notion of context in language use, highlighting how individuals adapt their speech to factors such as the people involved, the intended message, and the socio-political environment. These considerations, whether conscious or unconscious, play a significant role in speech.

In contrast, language models do not consider, during training, the context in which language is used or who produces it. This raises the question: what does “context” mean for a language model? Luciana defines it as “a sequence of words within a text,” emphasizing its technical nature. It acts as a window around the word being predicted, spanning the preceding and subsequent words. While this window is expanding, especially in tools like ChatGPT, it remains limited to words alone, with no room for other contextual variables such as historical or cultural perspectives.
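To make this technical sense of “context” concrete, here is a minimal sketch in plain Python (the window size and example sentence are illustrative; real models operate on subword tokens and use windows of thousands of tokens). It shows that what a model sees during training is nothing more than a fixed window of surrounding words, with no speaker, place, or purpose attached:

```python
# Minimal sketch: for a language model, "context" is just a fixed-size
# window of preceding words, stripped of everything else about the situation.
CONTEXT_SIZE = 4  # illustrative window length

def training_pairs(text: str, context_size: int = CONTEXT_SIZE):
    """Yield (context, next_word) pairs as a language model sees them."""
    words = text.split()
    for i in range(context_size, len(words)):
        context = words[i - context_size:i]  # only the surrounding words survive
        yield context, words[i]

sentence = "the meeting in Copenhagen discussed trust in digital systems"
for context, target in training_pairs(sentence):
    print(f"context={context!r} -> predict {target!r}")
```

Who said the sentence, to whom, and why never enter the pairs at all, which is exactly the decontextualization Mieli’s comment pointed to.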

What implications does this decontextualization bring? That is a new research question for us.


Once again, AI and precarious work

We’ve been talking about this topic for quite some time at Via Libre. In our podcast, in the episode “The Invisible Workers of Artificial Intelligence,” we discuss the workers who train AI systems. This issue also arose in Denmark.

“I was particularly struck by the people who annotate data through crowdworking platforms. Although they earn slightly more than the minimum hourly wage, they are paid only for the hours they actually spend in front of the computer. There are no labor benefits or job security. They are also not involved in defining the annotation guidelines: the information provided to them is completely decontextualized, and they cannot ask questions or give feedback on those guidelines,” said Luciana Benotti.

Precarious conditions for the people who do the work, and artificial intelligence completely out of context: what language models are we training? These are some reflections after such a valuable meeting. Valuable, too, because it is rare to have the resources for this kind of reflection or for research on these topics; rare to be able to set aside the pressure to innovate rapidly and instead consider the consequences and impacts that technology will have beyond the accumulation of capital.


