Take Your Ethics to Work Day

[Image: In the foreground, a person walks carrying a pink bag with a rainbow flag sticking out of it. In the background, a robotic arm holding a knife stands in a puddle of pink liquid.]

Imagine being fired for being good at your job. What seems like an absurd scenario for bus drivers, dog walkers, or PhD students has been the reality for AI ethics experts at large tech companies, most prominent among them Timnit Gebru. When ethical concerns clash with business strategy, ethics comes up short. And with the immense increase in AI start-ups and the rush to integrate AI into existing companies, AI ethics is either ignored or left to the developers who build the final product. In the absence of an AI ethics department, how do you bring your values to work?

This is the question that Mona Sloane and Janina Zakrzewski examine in their recent FAccT paper. They interviewed 64 founders and employees of German AI start-ups about how they put their values into practice at work. Sloane and Zakrzewski propose four questions for thinking about this issue: First, what ethical principles are there? Second, what needs are served by the application of a specific principle? Third, what story is being told about the application of the principle? And lastly, in what materials does the principle show up (e.g. objects, processes, institutions, or rules)?

Looking at AI ethics papers, it is easy to get the impression that ethical problems get solved at the level of data and algorithms. Countless toolkits and debiasing methods offer a path to a purportedly beneficial product regardless of social and historical context. This reductionist view has been widely criticized by intersectional scholars (see our previous blog post here). Problems of power and oppression cannot be separated from the living, breathing world in which they occur, whether the oppression happens at the hands of a single person or is enacted via a machine-learning model. Sloane and Zakrzewski’s study exemplifies this point.

One of their most surprising findings is how neatly the practice of AI ethics in German start-ups ties in with historical institutions of worker participation. Despite the lack of unionization in start-ups, founders and employees strongly emphasized the ethical principle of co-determination: employees should have a say in company policies, including what the company produces and who it does business with. In one company this materialized as employees refusing to provide services to the military-industrial complex. Employees created an ethics working group and devised their own “ethics scoring” system, which was then used to accept or reject potential clients.

Cultural practices of workplace co-determination in Germany go as far back as the mid-1800s and have been codified in many laws. While they have historically been used to improve working conditions and pay, in the context of AI start-ups they have moved from being a forum for industrial action to becoming the arena for the relatively new topic of AI ethics. In their interviews, start-up founders often justified the application of the principle of co-determination with the need to attract and retain qualified workers while being unable to offer high pay or job security: co-determination is seen as essential for a positive working environment - a kind of non-monetary compensation.

These intuitions about what makes a good company do not derive from papers about debiasing metrics and cleaning data sets. The unspoken consensus about the ethical principle of co-determination is never spelled out explicitly; nevertheless, it is the guiding force for making AI ethics happen right now. This is the power of historical and cultural context. Examining data and algorithms can follow from there.

Unsurprisingly for anyone involved in diversity and inclusion efforts, Sloane and Zakrzewski find that ethics initiatives and ethics-themed working groups in AI start-ups are often started and maintained by representatives of the groups most vulnerable to harms from AI systems: women and queer people. Bringing your ethics to work is both a burden and a survival strategy when you struggle with oppression in all areas of your life. The opportunities will look different depending on our historical and cultural frameworks. At the end of the day, it will always be individual people rising, connecting, organizing, pointing out the red flags, and saying: This is not okay. This needs to change.


[Image: A white person wearing a blue and white patterned shirt.]

This post was written by Sabine Weber. Sabine is a queer person who just finished their PhD at the University of Edinburgh. They are interested in multilingual NLP, AI ethics, science communication, and art. They organized Queer in AI socials and were one of the Social Chairs at NAACL 2021. You can find them on Twitter as @multilingual_s.
