What Is "Automating Anti-Blackness" Saying?

In "Automating Anti-Blackness", Ruha Benjamin explains how stereotypes are not only ingrained in social systems but also encoded into technology. 

Benjamin argues that technology often inherits these rigid, discriminatory patterns, leading to "discriminatory design". For example:

  • Employers using credit scores to assess candidates may inadvertently favour certain racial or socioeconomic groups.

  • Algorithms may reinforce biases by showing tailored ads based on stereotypical assumptions.

  • Automated risk assessment tools can reflect existing racial biases in sentencing and parole decisions.

  • Digital surveillance might lead to biased decisions on where to allocate resources, often disadvantaging marginalised communities.

A central idea is that technology's "default settings" can appear fairer and more neutral than they really are. Because decisions are delegated to algorithms or data-driven tools, people often feel less responsible for the outcomes, which makes biased results harder to challenge and correct.

My Thoughts and How It Applies to My Project

This extract emphasises how design choices in technology can either perpetuate or challenge systemic biases. As my project focuses on digital comfort, this idea aligns with my goal of creating digital experiences that are inclusive and comfortable for all users.

When designing interfaces, I need to be aware of stereotypes in visual representation, language, and user journey mapping. For example, I should not assume that particular colours, icons, or interaction patterns suit all users universally.

This extract encourages me to focus on how comfort varies across different groups, such as neurodivergent users or those with different levels of digital literacy. My project could incorporate customisable elements that allow users to adjust the interface to suit their needs.
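
To make this idea of customisation concrete, below is a minimal Python sketch of how per-user comfort preferences could be stored and translated into interface settings. The preference names, defaults, and values are hypothetical illustrations, not taken from the extract or from any existing design system.

```python
from dataclasses import dataclass

@dataclass
class ComfortPreferences:
    """Hypothetical per-user settings for adjusting an interface."""
    font_scale: float = 1.0       # multiplier applied to the base font size
    high_contrast: bool = False   # switch to a higher-contrast colour palette
    reduced_motion: bool = False  # disable non-essential animations
    plain_language: bool = False  # prefer simpler wording in labels and help text

def render_settings(prefs: ComfortPreferences) -> dict:
    """Translate a user's preferences into concrete interface settings."""
    return {
        "font_size_px": round(16 * prefs.font_scale),
        "palette": "high-contrast" if prefs.high_contrast else "default",
        "animations": "off" if prefs.reduced_motion else "on",
        "copy_style": "plain" if prefs.plain_language else "standard",
    }

# Example: a user who prefers larger text and less on-screen motion.
print(render_settings(ComfortPreferences(font_scale=1.25, reduced_motion=True)))
```

The point of the sketch is that comfort is treated as something each user sets for themselves, rather than something the designer fixes in advance for an imagined "average" user.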

My project aims to encourage an inclusive environment where users can feel understood and supported, regardless of their background, abilities, or experiences. By applying the insights from this extract, I hope to build digital experiences that are genuinely comfortable for all users.


Analysis and Reflection on "Automating Anti-Blackness" by Ruha Benjamin

Understanding Stereotypes: Definition and Impact

What is a Stereotype?
A stereotype is a widely held but oversimplified belief or image of a particular group of people. It involves generalising characteristics, behaviours, or preferences based on limited information, often leading to misconceptions and biases.

How are stereotypes used in technology, social media, and product design?

  • In technology, algorithms may learn from historical data that contains past biases; biased recruitment tools, for example, are more likely to select candidates from specific demographics (see the sketch after this list).

  • On social media, stereotypes can manifest through targeted content that reinforces prejudices, or through echo-chamber effects.

  • In product design, stereotypes may influence how user personas are created, often leading to designs that cater only to the "average" user and exclude those with different needs.
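
To illustrate the first point above, the short Python sketch below shows how a recruitment tool that simply learns from past hiring decisions reproduces the bias in those decisions: a proxy feature such as postcode can stand in for group membership, so historical discrimination reappears as an apparently neutral score. All of the names and numbers are invented purely for illustration.

```python
# Hypothetical historical records: (postcode, hired). Suppose postcode
# correlates strongly with a demographic group because of residential
# segregation, and past decisions favoured one area over the other.
history = [
    ("A1", True), ("A1", True), ("A1", True), ("A1", False),    # mostly hired
    ("B2", False), ("B2", False), ("B2", False), ("B2", True),  # mostly rejected
]

def learned_score(postcode: str) -> float:
    """'Train' by memorising the historical hire rate for each postcode."""
    outcomes = [hired for pc, hired in history if pc == postcode]
    return sum(outcomes) / len(outcomes)

# New candidates are scored by a feature that acts as a proxy for group
# membership, so past discrimination is repeated as a 'neutral' number.
for postcode in ("A1", "B2"):
    print(postcode, "predicted hire score:", learned_score(postcode))
# A1 predicted hire score: 0.75
# B2 predicted hire score: 0.25
```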

Why are stereotypes harmful?
Stereotypes can:

  • Limit opportunities by reinforcing biases in employment, education, and media representation.

  • Perpetuate discrimination, especially when integrated into automated systems.

  • Harm individuals, as people are judged not by their unique qualities but by generalised assumptions.

Potential harms in employment, policing, and public policy:

  • Automated hiring tools may discriminate against marginalised groups, which reduces diversity and further perpetuates inequality.

  • Predictive policing software might disproportionately target specific communities, which can lead to over-policing and a cycle of criminalisation.

  • Biased data can skew policy decisions, impacting resource distribution and social support systems.

Is Technology a Neutral Tool?

No, technology is not inherently neutral. While tools themselves do not hold biases, the design choices, data, and algorithms that determine how they function reflect human decisions. Those decisions can be shaped by conscious or unconscious biases, which then influence how technology operates within society.

For example, a facial recognition system might work perfectly for light-skinned individuals but misidentify people of colour, not because the technology itself is biased, but because the data used to train it lacked diversity.
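
As a rough illustration of why this matters, the Python sketch below disaggregates a hypothetical recognition model's accuracy by group: the overall average looks acceptable, while the per-group breakdown reveals the gap produced by an unrepresentative training set. The counts and group labels are invented for demonstration and are not taken from any real evaluation.

```python
from collections import defaultdict

# Hypothetical evaluation records: (group, correctly_identified).
results = (
    [("lighter-skinned", True)] * 95 + [("lighter-skinned", False)] * 5 +
    [("darker-skinned", True)] * 70 + [("darker-skinned", False)] * 30
)

# A single aggregate number can hide unequal performance.
overall = sum(ok for _, ok in results) / len(results)
print(f"overall accuracy: {overall:.2%}")  # 82.50% -- looks acceptable

per_group = defaultdict(list)
for group, ok in results:
    per_group[group].append(ok)

# Breaking the results down by group exposes the gap the average conceals.
for group, outcomes in per_group.items():
    print(f"{group}: {sum(outcomes) / len(outcomes):.2%}")
# lighter-skinned: 95.00%
# darker-skinned: 70.00%
```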