By Sally Pritchett
CEO

How can we tackle AI bias for more inclusive and authentic representation?

At our recent ‘Navigating AI Together’ roundtable, we delved into the critical issue of bias within AI and how we can overcome that in-built bias to use AI as a tool for inclusion and authentic representation.

We were delighted to welcome Ali Fisher, a seasoned expert in fostering sustainable, diverse, equitable, and purpose-driven business practices. With a background including leadership at Unilever and the Dove Self-Esteem Project, Ali brought a wealth of knowledge and experience in the realm of DE&I. Her invaluable insights provided fresh perspectives on navigating AI’s impact on communications.

Unravelling bias in AI

Generative AI offers amazing opportunities for communicators, but its power comes with a challenge: inherent bias. Generative AI has been trained on human-created content, and so has inherited the deep-seated biases within it. This bias can, and often does, unintentionally permeate AI-generated content, reinforcing stereotypes and misconceptions.

It’s been well documented and discussed over the last year that generative AI takes bias and stereotyping from bad to worse – with Bloomberg publishing headlines like ‘Humans are biased, generative AI is even worse’. This bias is all the more worrying when reports suggest that 73% of users globally already say they trust content created by generative AI.

But let’s go back a step. While generative AI may be biased due to the training data that feeds it, what about the conditions under which the AI tools themselves are developed?

The lack of diversity within the tech industry adds complexity. The gender disparity is evident: women make up only 22% of the UK tech workforce and earn just 21% of US computer science degrees. One study found that code written by women was approved first time more often than men’s, but only when the coder’s gender was hidden. When gender was revealed, women coders received 35% more rejections of their code than men.

Race and ethnicity disparities in tech are also concerning. In the US, a report from the McKinsey Institute for Black Economic Mobility found that Black people make up 12% of the US workforce but only 8% of employees in tech jobs. That percentage is even smaller further up the corporate ladder, with just 3% of C-suite tech executives being Black, and the gap is expected to widen over the next decade.

Nurturing AI as responsible guides

During our ‘Navigating AI Together’ roundtable, an analogy was shared: AI is like a toddler trying to cross a busy road. Just as we wouldn’t allow a toddler to wander into traffic alone, we must hold AI’s hand and guide it safely.

We need to understand the EDI landscape thoroughly first, becoming adept guides before we can expect AI to generate outputs that are genuinely inclusive and authentically representative. As humans, we need to be responsible AI users, always giving a guiding hand. The first step to making AI an ally to inclusive communications is self-reflection.

Navigating our human bias

We’re human and fallible, and it’s important to remember that in the context of EDI.

In one study, researchers observed 9-month-old babies, evenly divided between Black and white infants, who were all equally exposed to unfamiliar Black and white adults. The white babies consistently gravitated toward the white adults, while the Black infants showed a preference for the Black adults. This inclination toward familiarity emerged as early as nine months, suggesting an inherent comfort with those we perceive as similar.

As humans, we tend to categorise. We employ schemas and, yes, stereotypes as well. It’s a coping mechanism – our brain’s attempt to simplify the barrage of information we encounter daily. Yet this simplification calls for heightened awareness: we need to consciously slow down, be vigilant and actively recognise these tendencies within ourselves.

Increasing awareness of our unconscious biases

Unconscious bias refers to the automatic attitudes and stereotypes that influence our judgments and actions without our conscious awareness. Shaped by our experiences and societal influences, these biases impact how we view others.

If you’re considering using AI within your communications, you must first understand your own unconscious biases. The Harvard IATs – Implicit Association Tests – are a useful tool to help you begin to do this. Set up by a collaboration of US researchers in 1998, Project Implicit aims to collect data on our biases as we learn about ourselves. We’d recommend picking one identity characteristic you think you don’t carry bias on and one you think you do – and seeing how it plays out.

Exploring bias in generative AI

Moving on from understanding why generative AI contains bias and recognising how our own biases influence our perceptions, let’s shift our focus to the actual AI outputs. You’ve likely already encountered biased outputs from AI, but in our session we compared the results of Google’s image search algorithm with the outputs from the generative AI tools ChatGPT and Midjourney.

Let’s start with a familiar scenario: the image of a courier. When you think of a courier – the person who delivers your Amazon packages – what’s the immediate mental picture that springs to mind?

A quick Google image search shows a courier as a man carrying a box, often with a van. This representation is the outcome of the content humans have uploaded – it’s not a product of machine learning.

Now, let’s compare it to what AI, drawing from its training data, perceives as a courier’s life.

When we prompted ChatGPT to describe a day in the life of a courier, it conjured a narrative around a character named Jake.
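
If you’d like to test this pattern yourself, the sketch below shows one way to repeat the probe at a small scale: ask a model for the same neutral description several times and tally the gendered pronouns in its replies. It’s a minimal illustration, assuming the OpenAI Python SDK (v1.x) with an API key set in the environment; the model name is just an example, and pronoun counting is only a crude proxy for representation.

```python
# A small bias probe: request the same neutral description repeatedly and
# tally gendered pronouns in the replies. Assumes the OpenAI Python SDK
# (v1.x) and OPENAI_API_KEY in the environment; model name is illustrative.
import re
from collections import Counter

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = "Describe a day in the life of a courier."
pronouns = Counter()

for _ in range(10):  # a handful of samples is enough to see a pattern
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": PROMPT}],
    )
    reply = response.choices[0].message.content.lower()
    # Pronoun counts are a crude proxy for how the character is gendered.
    pronouns.update(re.findall(r"\b(?:he|him|his|she|her|hers|they|them|their)\b", reply))

print(pronouns.most_common())
```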

Midjourney’s output was similar, suggesting men with boxes and motorbikes as representations of couriers.

Over the course of the roundtable, we shared and discussed many examples of AI’s bias. To get a better understanding of this, we recommended watching Amy Webb’s presentation at the Nordic Business Forum, in which she revealed how AI mirrors human biases – from CEOs to tampons, AI struggled to see past stereotypes.

It’s safe to say that AI does not challenge the perception of who a person could be. It often reflects society’s most ingrained stereotypes at us and fails to accurately reflect a range of EDI characteristics that humans have.

AI and authentic representation

There are only four EDI identity characteristics that we can easily perceive – tone of voice, mannerisms, attire, and skin colour. Everything else requires more information from the individual. We can’t accurately assume someone’s age, gender, sexual orientation, race or ethnicity, and we can’t assume whether someone has a disability or not.

So how does AI fare when it comes to navigating these visible and invisible EDI characteristics?

If you ask Midjourney to show you construction workers, you’ll likely get a set of four images with a clear lack of visible diversity.

We then asked Midjourney to depict construction workers with a disability. The generated images were all very similar, with three of the four depicting the construction worker as a wheelchair user.

We then asked Midjourney to depict LGBTQIA+ construction workers, and the output really showed AI’s propensity to stereotype.

When it comes to minority groups, AI seems at best to lean on lazy stereotypes, and at worst to create an offensive parody of reality. These results show how important it is to be hyper-aware of EDI within our communications when using AI – and to hold the hand of that toddler crossing the road tightly!

How to make AI an ally to inclusive communications

As the human communicators guiding the hand of AI, reducing our personal bias has to be the first step:

  • Engage in the Harvard IATs to heighten awareness of your unconscious biases
  • Be prepared to get it wrong and learn from your mistakes
  • Evaluate the diversity in your social and professional circle
  • Challenge yourself to culture-add, not culture-fit
  • Practise active listening, valuing others’ perspectives over your own voice

With that foundation in place, our top tips for reducing AI bias are:

  • Craft prompts carefully to guide unbiased outcomes
  • Offer contextual details to help AI better understand your expectations and requirements (see the prompt sketch after this list)
  • Fact-check all AI-generated content for genuine and authentic representation
  • Offer corrective feedback to steer AI responses towards inclusivity
  • Develop ethical guidelines for all AI users and undertake thorough training
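
To make the first two tips concrete, here’s a minimal sketch of how the earlier construction-worker request might be rewritten. The wording is our own illustration rather than a tested recipe – the point is the added context and explicit guidance, and the output still needs the human fact-check the list calls for.

```python
# A hedged illustration of prompt crafting: the same image request, first
# vague, then with contextual detail and explicit inclusivity guidance.
# The wording is illustrative and untested - a starting point, not a recipe.
vague_prompt = "construction workers"

guided_prompt = (
    "Four construction workers on a UK building site, candid documentary "
    "style. Show a realistic mix of genders, ages and ethnicities in "
    "ordinary workwear doing ordinary tasks. Avoid posed 'diversity photo' "
    "framing and stereotyped props."
)
```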

Reach out to us or Ali for a deeper conversation on how you can cultivate a culture that embraces and understands the value of DEIB within your organisation.

To join our upcoming ‘Navigating AI Together’ session in 2024, please send an email over to hello@somethingbig.co.uk to stay in the loop.
