Our non-binary socio-technical community garden

ChatGPT generates stories with protagonists who use neopronouns, but Vagrant Gautam was there first. In the summer of 2022, Vagrant presented xyr work at the Queer in AI panel on non-binary representation in language technologies. “This is the first time that I’ve been able to see a generated story that has a character that uses my pronouns,” xe said about the system that xe built as part of a class project. “A lot of us focus on the bad stuff, the bad ways in which people are operationalizing gender, and I find that that kind of work is just personally very exhausting. It is my entire life, why do I also want to do it for work? [...] I would prefer to do more hopeful building.”

I meet Vagrant on a Zoom call on a Friday evening to talk about what brought xem into the field of natural language processing and AI ethics, and what’s next for xem. Xe is a first-year Ph.D. student at Saarland University in the southwest of Germany, and I am a mere five-hour train ride to the east of xem, which is a pleasant change from coordinating time zones. I ask xem about hopeful building and whether xyr outlook on the topic has changed since the 2022 panel. “It becomes more complicated to define [hopeful building] because I feel so many of the foundations are rocky and unethical,” Vagrant says. “In theory, models like DALL-E are cool and fun and interesting and maybe even hopeful. But you can’t see that in isolation. You can’t look at the output and not think of how a lot of the art was stolen and not credited and artists are understandably upset about that. It is tricky to define hopeful tech because for me hopeful tech needs to be integrated with the community at every level. It needs to be anti-power. It needs to discard this traditional way of looking at things where the machine learning engineer overlords control the system. They know what it does and they have all the power over it, and then there are the hapless end-users who have to deal with whatever the machine learning engineer decides to do. I think hopeful systems are about involving the community and owning things that we build together, almost like a community garden. […] A lot of NLP work doesn’t hit that mark.”

Vagrant’s path into the field of NLP was a straightforward one, starting as early as high school. “I had never done any computer science in high school so I had no clue,” xe laughs. “But the motivation was that I was studying French in school as a second language and our teacher would tell us to not use Google Translate for our homework and we would obviously all use Google Translate for our homework. But I found out pretty quickly after a few months of taking classes (and I had no background in French at all) that I could spot errors in the translations. And I felt like this was pretty bad. [...] If I, a measly human being, can learn the rules of French enough to not make really basic mistakes in a few months, then why does this big company build a tool that is this bad? And so I thought: This seems like a solvable problem. And so I should grow up and solve it. I had this really bright-eyed, bushy-tailed kind of approach: I will solve Google Translate, knowing nothing about linguistics or computer science.” But Vagrant was not deterred by finding out later that machine translation is actually a hard problem. “By that time also neural models were coming along so there were less obvious things that I could catch. So the beginning felt different from what I thought it was going to be, but it is still interesting.”

There was no one defining moment that sparked Vagrant’s involvement in AI ethics, but rather a steady trajectory toward the topic. “I started working on things and reading literature and slowly becoming more radicalized by the things that I saw AI being used for,” Vagrant says. “I was reading from other fields like human-computer interaction and sociology and I was realizing that there are other important things: the social aspect of socio-technical systems. Just feeling like there is this whole other world, and I would like to have a more holistic view of whatever I do, even if it remains in the field of AI and natural language processing. I don’t think I am going to do a Ph.D. thesis on neopronouns, but it still matters to me that I am thinking of the impact of everything that I do and that I also make other people think about the ethical implications of what they do and how to build socio-technical systems in context and do things like participatory research.”

After working in industry for some time, Vagrant now enjoys xyr return to academia, especially because xyr university has a strong computer science focus that lets xem get into the weeds of complicated topics as part of xyr required coursework. More practical considerations also played into the decision: “I really like coding and I really like teaching and I would like to teach at the university level, which means I need a Ph.D. to be able to do that. The research stuff I am not 100 percent sure if I can do. And that’s the whole point of the Ph.D., it teaches you to do that,” Vagrant says. One thing xe struggles with in academia is the strict separation of scientific fields. “I fundamentally believe that interdisciplinarity is the way to go. Especially because I so firmly believe in building socio-technical systems. I don’t think that you can exist in isolation and just do things a certain way or that empirical science is the only science and empirical ways of knowing are the only ways of knowing. But I find that my opinion is very much a minority opinion where I am. I think a lot of people are very snooty about social sciences. I hate all of that. I think there is a lot that we can give to each other across disciplines. I know this sounds cheesy, but we would build a better world if we all worked together.” This approach also shows up in the research topics that Vagrant goes after. “Leaderboard chasing gets boring after a while. It is much more interesting to read creative papers that apply methods from different places. It’s not even about the results but about a new way of looking at something, it’s a new question to ask.”

To find a way to pursue both technically focused and AI ethics work, Vagrant builds upon xyr connections to AI ethics researchers at other universities, many of whom xe met via Queer in AI. “I have a side project that I am working on with some folks at UCLA,” Vagrant says. “We’re looking at intersectionality in ML fairness papers, doing a critical survey of them […]. I’ve been doing a lot of reading on intersectionality for it and I don’t think I can justify that for my actual Ph.D. My Ph.D. is expected to be much more technical. But for now, I am happy with this balance.” Vagrant is still in the process of finding a topic that will carry xem through the coming years of the Ph.D. while also powering through the course load for xyr master’s degree. Maybe it will be question answering? Or language modeling, building upon xyr experience in industry? “I am working on finding problems that excite me, both technically as well as in the impact downstream: more reliable systems, more fair systems, systems that are more modest in the claims that they make.” Wherever xyr interests may take xem, thinking about systems as living artefacts that stand in the context of society as a whole is the foundation Vagrant builds on. “Watch this space!” xe says, laughing. Hopefully, more hopeful building lies ahead.


You can find more of Vagrant’s research and writing here: https://dippedrusk.com/


Image description: a white person wearing a blue and white patterned shirt.

This post was written by Sabine Weber. Sabine is a queer person who just finished their Ph.D. at the University of Edinburgh. They are interested in multilingual NLP, AI ethics, science communication, and art. They organized Queer in AI socials and were one of the Social Chairs at NAACL 2021. You can find them on Twitter as @multilingual_s.
