Q2 2024 Awareness Days Calendar
By Sally Pritchett
CEO
Download your free Awareness Days Calendar for Q2 2024.
Don’t miss an important date in the second quarter of the year with our downloadable calendar that includes key employee wellbeing, sustainability and environmental, and diversity and inclusion awareness days.
These are all important topics that should be high on the agenda for every business. Our Q2 2024 Awareness Days Calendar can help you:
- Plan your employee wellbeing programmes.
- Raise the profile of your sustainability activities.
- Keep your diversity and inclusion engagement programmes on track.
- Effectively engage your employees with important conversations.
- Recognise events that are important to colleagues and customers.
If you’re looking for support to make the most of these opportunities, get in touch to see how we can help.
Download our Key Awareness Days Calendar for Q2 2024
5 trends that we expect will shape workplace culture in 2024
By Sally Pritchett
CEO
Curious about the evolving workplace culture in 2024? Explore these five key employee priorities.
Workplace culture has a profound impact on employee engagement, productivity, and morale, so organisations must keep up with what is important to employees to help nurture a positive environment. With 45% of UK employees saying a great culture is the most important factor when looking for a new job, what should employers be looking out for in 2024?
1. Flexibility will continue to be key
The pandemic changed the way many of us work, driving a surge in the number of employees working from home. In 2023, some companies continued with fully remote working, while others began implementing return-to-office policies and hybrid work options. But what could happen in 2024?
We expect to see more of a shift towards employees wanting flexibility, rather than just the opportunity to work from home. 71% of workers say a flexible working pattern is important to them when considering a new role, which suggests employees are also looking for flexibility around arrangements such as start and finish times, and where they work from, to help them achieve a better work-life balance.
2. Prioritising employee mental health and wellbeing
We believe nurturing a healthy culture that supports employee mental health and wellbeing is one of the biggest areas of importance for 2024.
Although the effect of not prioritising employee wellbeing on productivity is clear, nurturing a healthy workplace culture is about much more than just the financial impact. In a workplace where wellbeing is prioritised, there is likely to be higher employee morale, reduced turnover, and increased job satisfaction.
3. The demand for sustainability
As Gen Z joins the workforce, they are beginning to influence workplace culture. By 2025, Gen Z will account for 27% of the workforce, so employers will need to start listening to what’s important to them if they wish to attract and retain the next generation of talent.
With 50% of Gen Zs saying they are pushing their employer to drive change on environmental issues, it’s clear that sustainability is an important matter that employers need to prioritise as part of their workplace culture initiatives. However, building an environmentally conscious workplace culture is about more than organisational sustainability initiatives; it also involves supporting and empowering employees to make better choices in their lives outside of work.
4. Employees want to feel a sense of belonging
This year, diversity, equity and inclusion have become increasingly important in the workplace. In 2024, the focus on diversity, equity, inclusion and belonging (DEIB) is set to continue. 65% of employees say they want to feel a strong sense of belonging at work, suggesting employers need to go further than just having a DEIB policy.
Many employees now want to feel like part of a community at work, form stronger connections with colleagues, and feel like they can be their true selves at work. To help nurture this type of culture, employers need to ensure their DEIB initiatives are accessible, thorough, and most importantly, authentic.
5. The importance of internal communications
Internal communications help keep employees informed, engaged and connected to a business. However, effective internal communications are about more than just sending out a monthly email newsletter to employees. There are many channels that employers can, and should, use to keep employees engaged with the business. This is particularly important for reaching frontline workers, where email isn’t necessarily the best way to communicate with the workforce.
As we approach 2024, nurturing a healthy, safe and thriving workplace culture has never been more important. If you’re looking for some support in developing internal communications strategies or initiatives that engage your workforce, we’re here to help.
Navigating AI Together: Making AI an ally to inclusive communications
By Sally Pritchett
CEO
How can we tackle AI bias for more inclusive and authentic representation?
At our recent ‘Navigating AI Together’ roundtable, we delved into the critical issue of bias within AI, how we can overcome its in-built biases, and how we can use AI as a tool to foster inclusion and authentic representation.
We were delighted to welcome Ali Fisher, a seasoned expert in fostering sustainable, diverse, equitable, and purpose-driven business practices. With a background including leadership at Unilever and the Dove Self-Esteem Project, Ali brought a wealth of knowledge and experience in the realm of DE&I. Her invaluable insights provided fresh perspectives on navigating AI’s impact on communications.
Unravelling bias in AI
Generative AI offers amazing opportunities for communicators, but its power comes with a challenge: inherent bias. Because generative AI has been trained on human-created content, it has inherited our deep-seated biases. This bias can, and often does, unintentionally permeate AI-generated content, reinforcing stereotypes and misconceptions.
It’s been well documented and discussed over the last year that generative AI takes bias and stereotyping from bad to worse – with Bloomberg publishing headlines like ‘Humans are biased, generative AI is even worse’. This bias is of course very worrying when we’re also seeing reports that 73% of users globally already say they trust content created by generative AI.
But let’s go back a step. While generative AI may be biased due to the training data that feeds it, what about the conditions under which the AI tools themselves are developed?
The lack of diversity within the tech industry adds complexity. The gender disparity is evident, with only 22% of the UK tech sector and 21% of US computer science degree earners being women. One study showed that code written by women was approved the first time round more often than men’s, but only if the gender of the coder was hidden. If the gender was revealed, the data showed that women coders received 35% more rejections of their code than men.
Race and ethnicity disparities in tech are also concerning. Looking at the US and a report from the McKinsey Institute for Black Economic Mobility, Black people make up 12% of the US workforce but only 8% of employees in tech jobs. That percentage is even smaller further up the corporate ladder, with just 3% of tech executives in the C-suite being Black. It’s believed that the gap will likely widen over the next decade.
Nurturing AI as responsible guides
During our ‘Navigating AI Together’ roundtable, an analogy was shared: AI is like a toddler trying to cross a busy road. Just as we wouldn’t allow a toddler to wander into traffic alone, we must hold AI’s hand and guide it safely.
We need to understand the EDI landscape thoroughly first, becoming adept guides before we can expect AI to generate outputs that are genuinely inclusive and authentically representative. As humans, we need to be responsible AI users, always giving a guiding hand. The first step to making AI an ally to inclusive communications is self-reflection.
Navigating our human bias
We’re human, and we’re fallible, and it is important to remember that in the context of EDI.
In one study, researchers observed 9-month-old babies, evenly divided between Black and white infants. They were all equally exposed to both Black and white adults, all unknown to them. The white babies consistently gravitated toward the white adults, while the Black infants showed a preference for the Black adults. This inclination toward familiarity emerged as early as nine months, suggesting an inherent comfort with those we perceive as similar.
As humans, we tend to categorise. We employ schemas and, yes, stereotypes as well. It’s a coping mechanism and our brain’s attempt to simplify the barrage of information we encounter daily. Yet, this simplification comes with a call for heightened awareness. We need to consciously slow down, be vigilant and actively recognise these tendencies within ourselves.
Increasing awareness of our unconscious biases
Unconscious bias refers to the automatic attitudes and stereotypes that influence our judgments and actions without our conscious awareness. Shaped by our experiences and societal influences, these biases impact how we view others.
If you’re considering using AI within your communications, then you must understand what your own unconscious biases are. The Harvard IATs – Implicit Association Tests – are a useful tool to help you begin to do this. Set up by a collaboration of US researchers in 1998, Project Implicit aims to collect data on our biases as we learn about ourselves. We’d recommend picking one identity characteristic you think you don’t carry bias on and one you think you do – and seeing how it plays out.
Exploring bias in generative AI
Moving on from understanding why generative AI contains bias and recognising how our biases influence our perceptions, let’s shift our focus to examining the actual AI outputs. You have likely already encountered biased outputs from AI, but in our session, we made several comparisons between Google’s image search results and the outputs from the generative AI tools ChatGPT and Midjourney.
Let’s start with a familiar scenario: the image of a courier. When you think of a courier – the person who delivers your Amazon packages – what’s the immediate mental picture that springs to mind?
A quick Google image search shows a courier as a man carrying a box, often with a van. This representation is the outcome of the content humans have uploaded – it’s not a product of machine learning.
Now, let’s compare it to what AI, drawing from its training data, perceives as a courier’s life.
When we prompted ChatGPT to describe a day in the life of a courier, it conjured a narrative around a character named Jake.
Similarly, looking at Midjourney’s output, we have images suggesting men with boxes and motorbikes as representations of couriers.
Over the course of the roundtable, we shared and discussed many examples showing the bias of AI. To get a better understanding of this, we recommend watching Amy Webb’s presentation at the Nordic Business Forum, in which she revealed how AI mirrors human biases. From CEOs to tampons, AI struggled to look beyond stereotypes.
It’s safe to say that AI does not challenge the perception of who a person could be. It often reflects society’s most ingrained stereotypes back at us and fails to accurately reflect the range of EDI characteristics that humans have.
AI and authentic representation
There are only four EDI identity characteristics that we see or perceive easily – tone of voice, mannerisms, attire, and skin colour. Everything else requires more information from the individual. We can’t accurately assume someone’s age, gender, sexual orientation, race or ethnicity. We can’t assume whether someone has a disability or not.
So how does AI fare when it comes to navigating these visible and invisible EDI characteristics?
If you ask Midjourney to show you construction workers, you’ll likely get something like this, with a clear lack of visible diversity among the four images.
We then asked Midjourney to depict construction workers with a disability. The generated images were all very similar, with three of the four depicting the construction worker as a wheelchair user.
We then asked Midjourney to depict LGBTQIA+ construction workers. This output really shows the propensity of AI to stereotype.
When it comes to minority groups, AI seems to at best lean on lazy stereotypes, and at worst create an offensive parody of reality. These results show how important it is for us to be hyper-aware of EDI within our communications when using AI, and to hold the hand of that toddler crossing the road tightly!
How to make AI an ally to inclusive communications
As the human communicators guiding the hand of AI, we need to start by reducing our own personal bias:
- Engage in the Harvard IATs to heighten awareness of your unconscious biases
- Be prepared to get it wrong and learn from your mistakes
- Evaluate the diversity in your social and professional circle
- Challenge yourself to culture-add, not culture-fit
- Practise active listening, valuing others’ perspectives over your own voice
With that foundation in place, our top tips for reducing AI bias are:
- Craft prompts carefully to guide unbiased outcomes
- Offer contextual details to help AI better understand your expectations and requirements
- Fact check for genuine and authentic representation in all AI-generated content
- Offer corrective feedback to steer AI responses towards inclusivity
- Develop ethical guidelines for all AI users and undertake thorough training
Reach out to us or Ali for a deeper conversation on how you can cultivate a culture that embraces and understands the value of DEIB within your organisation.
To join our upcoming ‘Navigating AI Together’ session in 2024, please send an email over to hello@somethingbig.co.uk to stay in the loop.