Queer in AI
NAIAC Briefing

Authors: Organizers of QueerInAI, Arjun Subramonian

Editors: Sorelle Friedler, Serena Oduro, Anaelia Ovalle, William Agnew, Jon Freeman

If artificial intelligence (AI) is not designed with the harms queer people face (e.g., violence, stigma, discrimination) in mind, it only stands to reproduce these harms, posing risks to the civil rights, civil liberties, and opportunities of LGBTQIA+ individuals.

Large language models are trained on data that: (1) contain queerphobic hate speech; (2) lack queer-affirmative language and representation of diverse genders and pronouns; and (3) are stripped of references to LGBTQIA+ identities. Hence, such models regurgitate stereotypes and harmful narratives about queer people, contributing to their misgendering, alienation, and erasure.

In addition, content moderation AI often fails to flag queerphobic hate speech, yet incorrectly classifies queer content as harmful and censors it; for example, queer sexual education content and coming-out statements like "I'm queer" are often marked as inappropriate or "toxic" and automatically removed. Queer people simultaneously face hypervisibility and privacy violations, e.g., through outing via location data and monitoring on dating and social apps. Furthermore, AI has given a dangerous veneer of legitimacy to physiognomy and phrenology, including using computer vision to identify queer people and infer gender from faces.

Situating these harms in U.S. policy, government collection of sex and gender data often offers only male or female options; moreover, trans and non-binary people face barriers to updating their gender information across government agencies. If agencies apply AI analytics to their sex and gender data, they risk: (1) erasing the needs of gender minorities, and (2) classifying trans people as anomalies due to inconsistencies in their data, with consequences such as denial of health insurance and increased exposure to police brutality.

Furthermore, biometric systems assume that gender expression is immutable and hence can work poorly for trans and non-binary people who physically transition. Therefore, the deployment of biometrics to verify identity or detect fraud can: (1) out trans people and cause them gender dysphoria; (2) incorrectly classify gender minorities as security risks, subjecting them to police violence; and (3) discriminate against gender minorities trying to enter the U.S. or access essential health, employment, and housing services.

Queer in AI is disappointed that the NAIAC's Year 1 report did little to directly respond to the concerns repeatedly raised by civil rights groups about equity and discriminatory AI. We are also disappointed by the NAIAC's centering of a standards-based approach (i.e., the NIST AI Risk Management Framework) over a rights-based approach (i.e., the Blueprint for an AI Bill of Rights), perpetuating the lack of enforceable accountability frameworks for technology. Moreover, we are disappointed in the NAIAC's failure to convene its law enforcement subcommittee before the end of its first year.

LGBTQIA+ people face real, consequential harms today, which the NAIAC must urgently address. Advancing justice is a prerequisite for, and not in conflict with, advancing AI. We urge the NAIAC to endorse the AI Bill of Rights and to let it guide the committee's recommendations. We further stress that the law enforcement subcommittee must take a rights-based approach.

Furthermore, we advise the NAIAC to make the following recommendations:

(1) Improve LGBTQIA+ representation in Science, Technology, Engineering, and Mathematics (STEM):

  • Government agencies such as the National Science Foundation (NSF) must collect voluntary sexual orientation and gender identity (SOGI) data about AI researchers to understand and improve LGBTQIA+ representation.

  • The NSF must further fund research on: (1) AI harms specifically against LGBTQIA+ people; (2) quantitative and qualitative mitigation measures for such harms; and (3) corresponding gaps in technopolicy.

(2) Collect sexual orientation and gender identity (SOGI) data and measure biases:

  • Government agencies must expand their collection of SOGI data for AI subjects (i.e., individuals affected by AI) to be more inclusive and comprehensive. They must also design clear, specific, repeatable fairness metrics that are contextualized within the social realities and historical marginalization of LGBTQIA+ people (one illustrative metric is sketched after this list).

    • This is critical for concretely measuring biases and discrimination against LGBTQIA+ people, demonstrating that these issues exist, and devising interventions.

    • The collection and usage of SOGI data must employ meaningful and affirmative consent and state-of-the-art privacy preservation measures, as with other potentially sensitive demographic data. 
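
As a minimal, purely illustrative sketch of what a clear, specific, repeatable fairness metric could look like, the snippet below computes the gap in false-positive rates of a content-moderation classifier between benign queer-referential text and otherwise similar benign text, i.e., how much more often benign queer content is wrongly flagged as "toxic." The function classify_toxic, the example sentences, and the naive placeholder model are hypothetical stand-ins rather than any agency's actual tooling; a real audit would use a curated, consent-based evaluation set and the deployed moderation model.

```python
# Illustrative sketch only: a simple, repeatable fairness metric, namely the gap
# in false-positive rates of a content-moderation classifier between benign
# queer-referential text and otherwise similar benign text.
# `classify_toxic`, the example sentences, and `naive_model` are hypothetical
# stand-ins for whatever model and evaluation data an agency actually audits.
from typing import Callable, List


def false_positive_rate(benign_texts: List[str],
                        classify_toxic: Callable[[str], bool]) -> float:
    """Fraction of known-benign texts that the classifier wrongly flags as toxic."""
    flags = [classify_toxic(text) for text in benign_texts]
    return sum(flags) / len(flags)


def fpr_gap(benign_queer: List[str], benign_other: List[str],
            classify_toxic: Callable[[str], bool]) -> float:
    """Difference in false-positive rates between queer-referential and other
    benign text; values well above 0 indicate disproportionate censorship of
    queer content."""
    return (false_positive_rate(benign_queer, classify_toxic)
            - false_positive_rate(benign_other, classify_toxic))


if __name__ == "__main__":
    # Tiny hand-written benign examples, solely for illustration.
    benign_queer = ["I'm queer and happy.", "My wife and I are both women."]
    benign_other = ["I'm tired and happy.", "My wife and I went hiking."]

    # Placeholder "model" that naively flags the word "queer", mimicking the
    # over-moderation failure mode described earlier in this briefing.
    def naive_model(text: str) -> bool:
        return "queer" in text.lower()

    gap = fpr_gap(benign_queer, benign_other, naive_model)
    print(f"False-positive-rate gap: {gap:.2f}")
```

Reporting such a gap alongside the exact evaluation set and model version is what makes the metric repeatable and comparable across audits.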

(3) Redline pseudoscientific uses of AI:

  • Government agencies must not, under any circumstances, deploy pseudoscientific uses of AI that fundamentally cannot work, e.g., emotion detection and gender recognition, and should explicitly advise against such uses.

    • Such applications, which make problematic assumptions about normative body presentation, inevitably lead to discrimination against LGBTQIA+ people (e.g., in law enforcement) and reinforce cis and heteronormativity.

(4) Engage LGBTQIA+ participation:

  • Government agencies must engage in public consultation and practice stakeholder engagement with diverse, intersectionally oppressed LGBTQIA+ communities throughout the design, development, and deployment of AI.

    • Seemingly benign measures to ensure, for example, child safety, can have unintended adverse effects on LGBTQIA+ youth because many states deem queerness itself to be inappropriate for children.

  • Importantly, LGBTQIA+ people should have the right to refuse certain AI at any stage of the development process. Queer people should further have access to processes for reporting, mitigating, and seeking redress for harms.

Queer in AI thanks the NAIAC for inviting us to this session and listening to our briefing. You can read our full statement at: https://tinyurl.com/qai-naiac. We encourage the NAIAC to engage the public further. We close with the following questions:

1) Why did the NAIAC choose not to explicitly endorse the principles outlined in the AI Bill of Rights? Are there principles in the AI Bill of Rights that the committee disagreed with?

2) Is the NAIAC currently in the process of writing another report? How will it differ from the Year 1 Report?