What can the U.S. government learn about participation in AI from Queer in AI?

In this blog post, we reflect on what the award-winning Queer in AI FAccT paper has to say about participatory governance in AI, and what lessons U.S. government leaders, agencies, and stakeholders might take away from the paper. While our blog post may be relevant to other governing bodies, we are writing from a U.S. social and historical context.

These questions and a condensed version of these responses appeared in Circle Back, a private newsletter by the Public Technology Leadership Collaborative. Credit for these questions goes to Melinda Sebastian and Charley Johnson. You can subscribe to the Circle Back newsletter by emailing Melinda and Charley at circleback@datasociety.net. The header image is from PCWorld.

What might it look like for a government agency to put the core principles of a community-led approach into practice?

Government agencies should engage in intersectional inquiry and praxis, and be encouraged and afforded space to do so by higher-ups. Critical inquiry (i.e., scrutinizing, challenging, and seeking remedies for societal issues arising from injustices) examines connections between social structures and their origins to make sense of inequalities; complementarily, praxis manifests the knowledge gained during critical inquiry as action for social justice. Engagement with intersectionality intrinsically requires Queer in AI’s core tenets: public participation, community-led AI research & governance, and decentralization [0].

Inquiry: critical research into just AI

Government agencies should critically research topics in just AI that marginalized communities have deemed priorities. In doing so, agencies grant these communities the power to lead just AI research agendas. For example, Queer in AI has called for research on:

1) contextual, privacy-respecting, and socially & historically-grounded measurements of AI bias and harms against vulnerable communities [1];

2) previously-undocumented AI harms against marginalized groups, and gaps in technopolicy with respect to the protection of these groups [1];

3) mechanisms for marginalized groups to effectively report and seek redress for harms perpetrated by government AI [2], affirmatively and meaningfully consent to it, or refuse it entirely [1];

4) methods to actionably operationalize public participation and intersectionality throughout the AI lifecycle, among other topics.

This research should fill gaps in academic work on these topics with respect to the particular needs and constraints of government settings; it should also employ intellectual vigilance (described below), and be encouraged and incentivized by government higher-ups.

Praxis: intellectual vigilance

Government agencies, throughout AI design, development/procurement, and governance, should remain intellectually vigilant (à la Patricia Hill Collins) of how their AI can reproduce social inequalities, and how the epistemologies underlying current AI practices enable oppression. Being intellectually vigilant can involve: 1) reflexively examining whose knowledge is centered (and whose is subjugated) when building AI, 2) valorizing the knowledge and feedback of vulnerable communities, 3) reading intersectionality literature, 4) learning about the social and historical context of marginalized groups through a critical lens, 5) understanding how anti-discrimination laws (which often inform responsible AI) can perpetuate oppression, etc. [2]. For example, like Queer in AI, government agencies can:

1) regularly consult community-led organizations (e.g., Queer in AI) to learn how they are advancing cultures of participation, while building deep, long-term, and rewarding relationships with these organizations;

2) run focus groups and surveys of individuals harmed by government AI, and better yet, allow and fund community-led audits thereof (e.g., bias bounties run by and for LGBTQIA+ people to uncover queer harms in government AI [3]);

3) create and participate in activities (e.g., checklists, guiding questions) that instigate and incentivize reflexive thinking about how their AI projects intersect with power and inequities and fall short of advancing justice (e.g., [2]);

4) broadly and accessibly share critical reflections and methodological learnings, to advocate for intersectional AI praxis;

5) organize decentralized, hierarchy-less workshops on the structural inequities and AI harms faced by vulnerable communities, where they (and the public) learn from expert speakers;

6) organize decentralized, hierarchy-less book clubs to read and discuss critical theory.

Government agencies have historically excluded and disempowered community-led organizations rather than fostering continual dialogue and relationships. For example, it is often unclear where in long bureaucratic processes actual decision-making power lies, workshops and testimonies are invite-only, and impacted communities learn of the effects of beneficial or detrimental AI policies only long after the fact.

Furthermore, decentralization and a lack of hierarchy enable government agency employees to: 1) discuss topics that are particularly relevant or interesting to them; 2) contribute what/when they can based on their circumstances and lived experiences; and 3) dialogue about challenging topics with minimal interference from power dynamics.

[0] https://dl.acm.org/doi/10.1145/3593013.3594134

[1] https://www.queerinai.com/naiac-briefing 

[2] https://dl.acm.org/doi/abs/10.1145/3600211.3604705  

[3] https://dl.acm.org/doi/abs/10.1145/3600211.3604682

What did Queer in AI learn about the challenges of implementing a community-led approach, and what lessons might government stakeholders take away?

Hierarchy: Decentralization and a lack of hierarchy are critical to minimizing power distance and distinctions between individuals in a community, so that everyone is encouraged to participate in collective processes while having the flexibility to choose what and when they contribute based on their circumstances. However, it is difficult to fully minimize distinctions between individuals in a community. For example, in Queer in AI, some individuals have more experience with the organization (e.g., they have belonged to Queer in AI for longer or been more active organizers), and they inevitably steer the direction of the organization (e.g., through mentorship of new organizers and preservation of institutional precedent). Moreover, experienced organizers are entrusted to make critical decisions involving sensitive data or privacy. Simultaneously, new Queer in AI organizers often find it difficult to adjust to the organization’s lack of clear structure.

Accessibility: Participation in a community is often hampered by the nature of compensation, meeting modalities and times, and laws, among other factors. For example, Queer in AI organizers are volunteers, and thus must have external income and support structures (especially for mental health) that allow them to perform unpaid labor that often involves confronting queerphobia. Furthermore, Queer in AI’s organizing meetings are virtual, in English, and often in European and American timezones; similarly, workshops are in English and co-located with AI conferences, which are exclusively held in the Global North. Queer in AI’s meetings thus require access to a computer and the internet, and inhibit participation from those in the majority world; this exclusion is only amplified by laws in the majority world that criminalize queerness.

Finances: A community’s funding structure can undermine trust in the community’s mission and activities. Queer in AI relies in large part on sponsorships from defense and large tech companies, which are complicit in oppression and genocide globally; this has called into question Queer in AI’s independence and community-led nature. Discussions about whether to continue or terminate relationships with sponsors are challenging, as Queer in AI relies on sponsorships to send honoraria, scholarships, and travel grants to LGBTQIA+ people across the world. In addition, wiring money internationally presents numerous barriers (e.g., PayPal restricts payments to certain countries), as well as legal and security risks.

While Queer in AI’s tensions and challenges remain unresolved, our paper exemplifies a first step towards tackling them: critical self-reflection. In particular, government stakeholders should inspect their hierarchical structure, accessibility, and finances, and regularly answer questions such as:

1) Hierarchy: How are government agencies actively seeking to minimize the power distance between themselves and marginalized communities? Which forms of hierarchy (intentional and unintentional) inhibit the freedom and flexibility of participation of government employees and stakeholders? Which forms of hierarchy are required (e.g., for critical decision-making)? How does hierarchy subjugate certain knowledge forms, and how can space be created for new knowledge to be generated and valorized?

2) Accessibility: How do meeting modalities and other participation infrastructures intersect with systemic inequalities to exclude certain voices? In other words, who is missing from the table, and why? 

3) Finances: How does the financial structure of government agencies undermine the trust of marginalized communities in their mission and activities? What tensions inhibit relying on more ethical funding sources?

It may not always be possible to fully minimize power differences or include everyone in pluralistic communities with different structures, but government agencies should nonetheless build rapport with and dynamically respond to the needs of different communities.

What can government leaders take away from the paper?

Queer in AI demonstrates the potential of community-led participatory methods and intersectional praxis in AI, while also providing challenges, case studies, and nuanced insights to government agencies developing and using participatory methods. Queer in AI wants to help government agencies advance justice and cultures of participation in AI, but agencies must build deep, long-term, and rewarding relationships with us.


[Author photo: a brown person with black hair, purple eyeliner, and a blue shirt, smiling and facing the camera.]

This post was written by Arjun Subramonian. Arjun is a brown queer neurodivergent PhD student at the University of California, Los Angeles. Their research focuses on critical and inclusive graph machine learning and natural language processing, including fairness, bias, ethics, and integrating queer perspectives. They have helped organize Queer in AI workshops, socials, and mentoring. You can find them on Twitter as @arjunsubgraph.
