From April 23 to 27, we participated in the 2026 International Conference on Learning Representations (ICLR), held this year in Rio de Janeiro, Brazil. ICLR is one of the world’s leading conferences in deep learning, bringing together thousands of researchers, students, and professionals working in artificial intelligence. This year’s edition gathered more than 7,000 participants.
As part of the workshop “Logical Reasoning of Large Language Models,” Sofía Martinelli presented research developed jointly with Luciana Benotti and Guido Ivetta on a taxonomy of resources for evaluating bias in reasoning models. Her participation included both an oral presentation and a poster session.
Evaluating Bias in Reasoning Models
The workshop focused on one of the central challenges facing large language models: their limitations in performing complex logical reasoning consistently and reliably. Discussions addressed issues such as contradictory responses, difficulties handling multiple premises, and approaches for evaluating and improving the reasoning capabilities of these systems.
This work explores the intersection between logical reasoning and social bias, highlighting the need for joint evaluation approaches and providing a taxonomy of benchmarks and metrics as a key resource.
Outstanding Presentation at the Workshop
The paper was selected for an oral presentation at the workshop, a distinction awarded to only 8 of the 206 accepted submissions. In addition, it received the Outstanding Paper award, granted to the most notable works presented in the oral session.
Participating in international venues like this one helps strengthen networks for exchange and discussion on artificial intelligence from critical and situated perspectives, bringing questions about bias, evaluation, and local contexts into debates that are often dominated by perspectives and developments from the Global North.