Queer in AI @ EurIPS 2025!
Workshop Date:
Friday, December 5, 2025
Conference Location:
Bella Center, Room 16, 1st floor
🌈 Mission
Queer in AI’s workshop and socials at EurIPS 2025 aim to provide a gathering space for queer folks to build community and solidarity, while enabling participants to learn about key issues and topics at the intersection of AI and queerness.
Schedule
10:00–10:20 — Welcome & Introduction
10:20–11:00 — Keynote by Beckett LeClair: AI, Queer Youth, and a Call to Action
11:15–12:15 — Presentations
12:30–13:30 — Lunch Break
13:30–15:00 — Interactive Auditing Workshop
15:00–16:00 — Presentations
Socials
Friday, Dec 5, 6:00 pm CET — Hans Christian Andersen Christmas Market
Meeting point from ~8:00 pm: Log Lady (a 6-minute walk)
Keynote
AI, Queer Youth, and a Call to Action: We are seeing a surge of anti-queer rhetoric masked as child protection, and to an extent vice versa. This false dichotomy serves no one but those who would disregard both groups, and in doing so it overlooks those facing intersecting barriers from both their youth and their queerness. AI/ML poses countless opportunities and risks to young people, some of which are felt disproportionately by queer kids in particular. The good news is that the research community is well equipped to meet the rising need for evidence, data and solutions – we just need to know where to look. In this talk, I will discuss some of AI’s sociotechnical impacts on queer youth through a fundamental child rights lens, spotlighting the areas where increased research and collaboration will be vital going forward.
In-Person Presentations
11:15–12:15 | 15 minutes each
WinoQueer-NL: Assessing Bias in Dutch Language Models toward LGBTQ+ Identities
Jiska Beuk & Gerasimos Spanakis | Maastricht University
While English language models have been widely examined for anti-queer bias, Dutch models remain understudied. We developed a culturally and linguistically adapted Dutch dataset based on the English WinoQueer benchmark, validated through an online survey with 43 Dutch queer participants. The final dataset comprises 42,906 sentences evaluated using Dutch-specific and multilingual models. Our findings reveal significant disparities, with transgender and non-binary identities consistently receiving the highest bias scores despite overall neutral means.
GLITTER: A Multi-Sentence, Multi-Reference Benchmark for Gender-Fair German Machine Translation
Pranav A | University of Hamburg
GLITTER is a comprehensive benchmark for evaluating gender-fair German machine translation, addressing limitations in how current MT systems handle inclusive gender representation beyond binary forms. The work comprises professionally annotated multi-sentence passages with three gender-fair alternatives (neutral rephrasing, typographical solutions such as the gender star, and neologistic forms), advancing research in non-exclusionary translation technologies.
Virtual Presentations
Bodies under Algorithmic Siege: Codes of Deception and Detection
Christoffer Koch Andersen | University of Cambridge
This talk argues that the contemporary algorithmic rendering of trans bodies as deceptive is part of a longer legacy of constructing gender nonconformity as innately deceptive. By thinking of transness as a constructed deception and algorithms as tools of detection, we examine how colonial discourses have become encoded into binary code, forming a "coded deception" that operates at the level of the trans body through algorithmic logic.
Category-Theoretic Wanderings into Interpretability
Ian Rios-Sialer | Independent Scholar
Category-Theoretic Wanderings into Interpretability is a piece of technical autotheory that asks how we can use category theory to frame interpretability. It queers the ecologies of knowledge, employing abstract mathematical language to discuss both intimate and technical frameworks. The piece touches on love addiction, faithfulness in Anthropic's Circuit Tracing experiments, and philosophical questions of meaning, and it invites the field of AI Safety to feel its way through opacity.
Xenoreproduction: Exploration and Recovery of Collapsible Modes as Core AI Safety Objective
Ian Rios-Sialer | Independent Scholar
Generative AI models reproduce biases in data and amplify them through mode collapse, yet AI scholarship often overlooks the perspectives from Queer Theory that could help make sense of such phenomena. This paper introduces Xenoreproduction as a core AI Safety objective aimed at avoiding homogenization failure modes. Our formalism ties queerness and subalternity to collapsible modes.
Que(e)rying Artificial Intelligence Use for Infectious Disease Surveillance
Elise Racine | University of Oxford & MATS
This presentation examines how AI for infectious disease surveillance could stigmatize and discriminate against vulnerable populations, particularly sexual and gender minorities (SGMs). Adopting an intersectional, reparative approach, the work proposes concrete steps towards a reparative algorithmic praxis: exploring how systems reproduce inequalities, centering sexual and gender diversity, and combating opacity through participatory governance mechanisms.
Queer Circuitry
Elise Racine | University of Oxford & MATS
Queer Circuitry examines how computational systems discipline bodies, ecologies, and ways of knowing. The series visualizes how algorithmic infrastructures enforce normative binaries that fail to capture lived multiplicities, while imposing new hierarchies of power and exclusion. Drawing from queer perspectives on classification and control, it reimagines these environments as sites of both domination and defiance.
Interactive Workshop
13:30–15:00
Auditing Workshop: Queering the Algorithms
A hands-on session exploring practical approaches to auditing AI systems for identity-related biases and harms.
Code of Conduct
Please read the Queer in AI code of conduct, which must be followed strictly at all times. Recording (screen recording or screenshots) is prohibited. All participants are expected to maintain the confidentiality of other participants. Queer in AI adheres to the Queer in AI anti-harassment policy. Any participant who experiences harassment or hostile behavior may contact the EurIPS exec team or the Queer in AI Safety Team. Please be assured that if you approach us, your concerns will be kept in strict confidence, and we will consult with you on any actions taken.
Queer in AI @ EurIPS 2025 Organizers
Pranav A (he/they): Pranav is a PhD student at the University of Hamburg and a core organizer of Queer in AI. His research focuses on inclusive policy development and queer advocacy.
Alex Markham (they/them): Alex is a postdoc in the Department of Mathematical Sciences at the University of Copenhagen. Their research focuses on causal machine learning, including discovery, inference, and representation learning, and ranges from foundational work intersecting combinatorics and algebraic statistics, to developing new deep generative models, to applications in neuroimaging and single-cell transcriptomics.
Alissa Valentine (she/they): Alissa is a postdoc at the University of Copenhagen. Their work asks whether machine learning and NLP can be used equitably for psychiatric risk prediction and classification with Danish registry data. Their previous work quantified diagnosis bias in psychiatry and used NLP methods to detect clinician bias in the clinical notes of emergency department psychiatric patients in NYC.
Michelle Lin (she/her): Michelle is a research assistant and master’s student at the University of Montreal and Mila – Quebec AI Institute. Her research uses deep learning, remote sensing, and computer vision for climate change mitigation applications. At Queer in AI, she helps organize workshops and events.
Beckett L (he/him): Beckett works on compliance investigations for an international NGO, ensuring tomorrow’s tech meets the needs of some of society’s most vulnerable groups. He previously worked in ML research and safety.
Eshaan T (he/him): Eshaan is a master’s student at the University of Copenhagen. His research focuses on developing multilingual, culturally inclusive, and transparent NLP systems.
Contact
Email us at queer-in-ai-eurips@googlegroups.com

