By Sally Pritchett
CEO

Discover four practical steps to mitigate bias in generative AI and ensure inclusive, authentic representation in your AI-powered communications.

AI offers amazing opportunities for communicators, from writing creative content to bringing to life a visual idea in a matter of seconds. But its power comes with a challenge: inherent biases.  

Generative AI has been trained on human-created content, and so has inherited the deep-seated biases embedded in it. This bias can, and often does, unintentionally permeate AI-generated content, reinforcing stereotypes and misconceptions. 

So as the human hands guiding AI, what can we do to help overcome these biases and use AI as a tool to foster inclusion and authentic representation?  

1. Guiding AI responsibly

As generative AI learns from the language we use, it’s important that we hold its hand and guide it safely. We need to thoroughly understand the EDI (equity, diversity and inclusion) landscape ourselves before we can expect AI to generate outputs that are genuinely inclusive and authentically representative.  

2. Navigating our human bias

The second step to making AI an ally to inclusive communications is self-reflection. We’re human and fallible, and it is important to remember that in the context of EDI. As humans, we do form stereotypes – they are a coping mechanism, our brain’s attempt to simplify the barrage of information we encounter daily.  

We must remain vigilant – consciously slowing down and actively recognising these stereotypes within ourselves so we do not bring them into our communications with AI.  

3. Increasing awareness of our unconscious biases

Unconscious bias refers to the automatic attitudes and stereotypes that influence our judgments and actions without our conscious awareness. Shaped by our experiences and societal influences, these biases impact how we view others. 

If you’re considering using AI within your communications, then you must understand what your own unconscious biases are. The Harvard IATs – Implicit Association Tests – are a useful tool to help you begin to do this. Set up by a collaboration of US researchers in 1998, Project Implicit aims to collect data on our biases as we learn about ourselves. We’d recommend picking one identity characteristic you think you don’t carry bias on and one you think you do – and seeing how it plays out. These tests can help you identify where your unconscious biases could influence AI. 

4. Learning from our mistakes

AI is still a relatively new tool for many of us – we are still learning how to get the best out of ChatGPT or how to write an effective prompt on Midjourney. We are naturally going to make mistakes as we learn how to use different AI platforms. But we must learn from these and identify where we perhaps need to reword a prompt or change the language we are using to generate more inclusive results. By taking the time to craft prompts carefully to guide unbiased outcomes, we can minimise our mistakes and foster greater inclusion.  
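As a simple illustration of the kind of prompt rewording we mean, here is a short sketch in Python. The word list and helper function are entirely hypothetical – a starting point for thinking about your own prompt language, not a feature of any AI platform or an exhaustive resource:

```python
# Illustrative sketch: flag gendered wording in a draft prompt and
# suggest a more neutral alternative before sending it to a generative
# AI tool. The term list below is a small, hypothetical example.

NEUTRAL_ALTERNATIVES = {
    "businessman": "businessperson",
    "chairman": "chairperson",
    "manpower": "workforce",
    "mankind": "humanity",
}

def suggest_inclusive_rewording(prompt: str) -> str:
    """Replace each flagged term with its more neutral alternative."""
    reworded = prompt
    for term, alternative in NEUTRAL_ALTERNATIVES.items():
        reworded = reworded.replace(term, alternative)
    return reworded

draft = "Generate an image of a businessman leading a meeting."
print(suggest_inclusive_rewording(draft))
# "Generate an image of a businessperson leading a meeting."
```

A checklist like this is no substitute for the self-reflection described above – it simply makes one small part of the habit repeatable.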

But what if AI makes a mistake and leans on bias or stereotypes? We can help it learn from its mistakes too! By offering corrective feedback, we can help steer AI responses towards being more inclusive. 

Navigating AI together

Our ‘Navigating AI together’ workshop series has been providing a safe and open space for communicators to discuss various aspects of AI. 

This time, recognising the pressing need, we’re focusing on intellectual property and copyright issues. It’s an area that many communicators have been grappling with, so in our next session, on Friday 15th March, we’re going to delve into it together. 

We are delighted to be welcoming Alex Collinson, from Herrington Carmichael, who specialises in commercial and intellectual property law matters. Alex will lead an insightful discussion covering copyright, brand protection, confidentiality concerns, and real-world cases of AI IP infringement. 

Book your place here. 

Get in touch