Stereotypes, Language Models, and Work: A Chinese Experience

On January 12, we held a seminar at Tsinghua University (China), led by Luciana Benotti. The seminar, titled “Stereotypes in ‘Reasoning’ Language Models,” brought together a small group of master’s and PhD students along with faculty members and researchers from nearby universities. The session included collecting stereotypes live and testing the EDIA tool in Chinese.

One of the themes that emerged most strongly during the seminar was the analysis of the “996” stereotype, which refers to the idea of working from 9 a.m. to 9 p.m., six days a week. This stereotype was explored in relation to a narrative widely circulated in the United States that seeks to explain why China is winning, or at least competing in, the technological race, and how it moved from being known as “the factory of low-quality goods” to becoming a leader in artificial intelligence and other technologies such as 5G. This narrative tends to suggest that China’s progress is driven mainly by extreme working hours rather than by intellectual capacity, creativity, or strategic decision-making. While real practices do feed into this perception, the discussion questioned how such explanations downplay the central role of sustained investment in education, science, and technology, as well as state planning and the talent of the people developing these technologies.

These stereotypes do more than oversimplify complex processes: they also dehumanize. They portray Chinese society as a “working machine,” obscuring the fact that technological development is driven by people with creativity, desires, and life projects.

At Fundación Vía Libre, we have been working on how language models reflect external stereotypes (those one society produces about another) and internal stereotypes (those a community holds about itself), and how these distinctions are often lost when systems are trained on closed and non-auditable data.

As Luciana Benotti noted:

“One of the most interesting aspects is seeing stereotypes reflected in language models as a way of studying society and finding evidence that those stereotypes exist in the data. The main obstacle is that with most large language models today—those that include more information—we do not know what data they were trained on, so we do not really know whose stereotypes they are reflecting.”

In this sense, the exchange opened up in-depth discussions about the current limitations of studying bias in commercial LLMs and the importance of promoting models trained on open and contextualized data.

Luciana also highlighted the more optimistic view of artificial intelligence in China, where AI is often seen as a tool to reduce workload or make work easier, rather than primarily as a way to lower workers’ pay.

We would like to thank Fenrong Liu for the invitation and for this valuable opportunity to continue thinking about artificial intelligence from a critical, situated, and truly global perspective—one that does not reproduce single narratives or inherited stereotypes, and that places people at the center.
