Bias Bounties have a Power Problem

Two pirate figures stand in front of a screen that shows a pixelated picture of the Earth

More often than not, machine learning models are released onto an unsuspecting public, and harms are only addressed once there is a critical mass of public outrage (remember, for example, Microsoft's Tay chatbot fiasco). Enter bias bounties, a name evocative of pirates sailing the seven seas of cyberspace, hunting down unruly systems and harvesting treasures as they go. Similar to bug bounties, where volunteers hunt for flaws in software in exchange for reward money, companies have recently started to encourage people to help them discover harms caused by machine learning models. Another term for these events is “red-teaming challenge”, like the one hosted at DEF CON 2023.

On the surface this seems like a great idea for AI companies: instead of living in fear of a potential shitstorm, why not engage the communities that are most likely to be negatively affected and improve the product before it can cause too much damage? In the best case this can lead to the co-creation of better systems. But does reality live up to these aspirations?

At their CRAFT session at the ACM FAccT Conference 2022, Queer in AI researchers conducted a hands-on workshop to hash out how queer participants of bias bounties felt about the endeavour and what they wished for the future of the format. In small discussion groups, participants tackled topics like the harms experienced by the queer community, control and power during bias bounties, accountability, and the limitations of bias bounties as tools for positive change.

Points of criticism were plentiful. Just as everywhere else, queer people are a minority in the world of bias bounties, and harms that specifically affect the queer community take a backseat in the challenge objectives formulated by companies. In the worst case, reporting queerphobic behaviour of a system means effectively outing oneself. If personal information about bounty participants is published, this means exposure to possible harassment or worse. In addition, the non-confidential treatment of gender information by bounty organisers might put trans and non-binary people in danger.

But there is a larger theme at play when the queer community and large AI companies interact: the motivations and values of these two groups are often not aligned. Sure, companies want systems that work well and that don’t expose them to the legal repercussions that come as push-back against biased systems. But at the end of the day, a company's prime objective is to make a profit, even if that goes against the interests of a specific group of users.

Certain systems, for example those that predict a person’s gender or sexual orientation from pictures or text, are inherently harmful to queer people. The answer is that such systems should not exist in the first place - an answer that a company producing such systems would never accept. No amount of participation can change that. So what is a way forward towards real co-creation of systems, rather than post-hoc participation-washing?

The workshop organisers compiled a list of recommendations from the participants' findings. A wider community should be involved at every step of the machine learning pipeline - from problem formulation through data collection and algorithm design to testing. Bias bounties should not lie in the hands of single companies or even government-led initiatives, but should rather arise as community efforts. The time and effort of bias bounty volunteers should transform from a resource that is harvested by companies into a resource that is directed towards the community, as a form of mutual aid: a system harms us, and we take it into our own hands to document the harm and use this evidence to fight back. Rather than waiting for collaboration from companies, affected communities should acknowledge the disparities in power and motivation and take matters into their own hands.

Grassroots organisations like Queer in AI and others can lead the way in establishing neutral third-party oversight of AI systems that are deployed into our world at ever larger scales. What is a system that you would love to audit for its impact on your community? And what do you need to set the wheels in motion?


You can find the full paper with the findings of the workshop here.


A picture of a white person wearing a blue and white patterned shirt

This post was written by Sabine Weber. Sabine is a queer person who finished their PhD at the University of Edinburgh. They are interested in multilingual NLP, AI ethics, science communication and art. They organised Queer in AI socials and are D&I Chair of EACL 2024. You can find them on Twitter as @multilingual_s or on their website.
