Business Reporter

Inclusivity: AI's blind spot

Kristin Zwez at Nscale explains how a lack of diversity can put a damper on innovation


The AI industry is experiencing rapid growth, yet its workforce remains predominantly male: women comprise only 22% of AI professionals globally and fewer than 14% of senior executive roles. Racial and ethnic minorities are also underrepresented, making up just 9% of the tech workforce in the United States. Beyond gender and ethnicity, factors such as age, education and social class also contribute to underrepresentation in the industry.


For AI to achieve genuine innovation, the industry must prioritise a more holistic approach to inclusion, one that encompasses all dimensions of diversity, including race, gender, age, social class, and education.


For AI to be transformative, it must reflect the diversity of the society it serves. As AI continues to reshape industries, it is essential that those developing these technologies come from a broad range of backgrounds and experiences. Only by embracing this diversity can AI solutions be designed that address the needs of a broader user base and avoid reinforcing existing inequalities.


The impact of hiring filters on AI talent

One of the biggest challenges to building diverse AI teams lies in hiring practices. Many organisations still rely on traditional hiring filters, such as educational requirements, years of experience and location, which can unintentionally exclude diverse talent from entering the AI field. Educational requirements can disproportionately favour candidates from more affluent socio-economic backgrounds, while location-based preferences often limit access to AI opportunities for people outside tech hubs such as Silicon Valley.


Additionally, the growing use of AI-powered hiring filters can further exacerbate these issues. While designed to streamline recruitment, AI algorithms often reinforce existing biases in the data they are trained on. If an AI hiring system is trained on historical hiring data that reflects biases against certain demographic groups, it will replicate and potentially amplify those biases, unintentionally screening out qualified candidates from underrepresented backgrounds. This can have a significant impact on diversity in AI roles, as these systems may fail to recognise the potential of candidates with non-traditional backgrounds, or those who do not fit the "ideal" candidate profile created by the algorithm.
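To make this feedback loop concrete, the toy sketch below (all records and field names are hypothetical, and deliberately simplistic) derives a screening rule from biased historical decisions and shows the disparity carrying straight over to new applicants:

```python
# Toy illustration: a screening rule learned from biased historical
# hiring data reproduces that bias on new candidates.
# All records and field names are hypothetical.

historical = [
    # (attended_elite_university, was_hired) -- past decisions favoured
    # elite-university candidates regardless of actual ability.
    (True, True), (True, True), (True, False),
    (False, False), (False, False), (False, True),
]

def hire_rate(records, elite):
    """Historical hire rate for one group of candidates."""
    group = [hired for is_elite, hired in records if is_elite == elite]
    return sum(group) / len(group)

# A naive automated filter: pass candidates whose group's historical
# hire rate exceeds 50% -- i.e. it encodes the old bias as a hard rule.
def passes_filter(candidate_is_elite):
    return hire_rate(historical, candidate_is_elite) > 0.5

print(passes_filter(True))   # elite-university applicants pass
print(passes_filter(False))  # equally able non-elite applicants are screened out
```

Real screening models are far more complex, but the mechanism is the same: whatever pattern the historical labels contain, including a biased one, becomes the decision rule.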


Furthermore, many AI roles prioritise specific qualifications, such as degrees from prestigious universities, which often overlook self-taught professionals and candidates with non-traditional educational backgrounds. These hiring practices screen out talented individuals who could contribute unique perspectives and expertise to AI development.


Companies should reconsider their hiring practices. Rather than overemphasising formal education or conventional career trajectories, they should prioritise practical skills and hands-on experience in AI development. Additionally, implementing blind recruitment methods that remove identifying information, such as names, genders, and educational backgrounds, can help minimise bias and create a more inclusive recruitment process.
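In practice, blind recruitment can be as simple as stripping identifying fields from a candidate record before it reaches reviewers. The sketch below illustrates the idea; the field names are assumptions for illustration, not taken from any real applicant-tracking system:

```python
# Minimal sketch of blind recruitment: remove identifying fields from a
# candidate record so reviewers see only skills and experience.
# Field names are illustrative assumptions.

IDENTIFYING_FIELDS = {"name", "gender", "age", "university", "photo_url"}

def anonymise(candidate: dict) -> dict:
    """Return a copy of the record with identifying fields removed."""
    return {k: v for k, v in candidate.items() if k not in IDENTIFYING_FIELDS}

candidate = {
    "name": "A. Example",
    "gender": "female",
    "university": "Example University",
    "years_experience": 4,
    "skills": ["Python", "ML pipelines"],
}

print(anonymise(candidate))
# Reviewers see only experience and skills.
```

A dedicated deny-list like this is easy to audit and extend, which matters more here than cleverness: the point is that the redaction step is explicit and reviewable.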


Supporting underrepresented voices in AI

Mentorship plays a vital role in fostering inclusion within the AI field. With a lack of role models from underrepresented groups in leadership positions, many aspiring professionals from diverse backgrounds may not fully understand the potential career paths or the unique challenges they could face. Mentorship programmes tailored to underrepresented groups can help bridge this gap, offering valuable guidance, inspiration, and support for those aiming to build careers in AI.


Effective mentorship goes beyond just technical expertise. It’s about guiding individuals from diverse backgrounds through the often complex cultural and professional barriers they may encounter in their careers. This support helps them gain confidence, build networks, and move from aspiration to action.


The impact of mentorship is profound. Mentors from similar backgrounds can provide valuable insights into overcoming obstacles that might otherwise remain hidden. Research shows that employees with mentors are more likely to be promoted, and in the AI industry, mentorship can be the key to unlocking the careers of underrepresented individuals who would otherwise face barriers to advancement. Mentorship not only benefits individuals but also strengthens the AI workforce by bringing in diverse perspectives that can lead to more innovative and ethical solutions.


Building inclusion into AI from the start

AI development has the potential to impact every part of society, from healthcare and education to transportation and finance. However, the current challenges in AI often arise from biased data and algorithms that reflect existing societal inequalities. If inclusion isn’t embedded from the very beginning of AI development, the technology risks perpetuating and even exacerbating these biases.


For AI systems to be inclusive, the teams that create them must reflect the diversity of society. Diverse teams are more adept at recognising challenges, identifying biases in data sets, and designing solutions that are both equitable and effective. Inclusive teams are also more likely to develop AI systems that address the needs of a wide range of users, rather than focusing on a narrow demographic.


Building inclusion into AI doesn’t stop with diverse teams; it also requires inclusive data collection and design practices. AI systems should be trained on data that reflects the full spectrum of human experience. This means not only ensuring that data sets include a diverse range of demographics but also recognising the importance of cultural context and local needs when designing AI models. This approach ensures that AI technologies are ethical, responsible, and accessible to all.


When choosing an AI technology partner, it’s crucial to assess their commitment to responsible AI use, not just their technical capabilities. Organisations should seek partners who enforce strict governance to ensure compliance with ethical standards and regulations.


It’s important to evaluate how an AI technology partner governs platform access and usage to ensure it is not used for harmful or unethical applications. A responsible partner prioritises data privacy, security, and ethical AI deployment, fostering trust and ensuring appropriate use. Strong governance practices, such as access control, monitoring, and accountability, help ensure that AI technologies are deployed with oversight, preventing misuse.
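The governance practices mentioned here, access control, monitoring and accountability, can be sketched in a few lines. The roles, actions and policy table below are hypothetical, but the pattern of checking every request against a policy and recording the attempt in an audit trail is the core of the idea:

```python
# Illustrative sketch: role-based access control plus an audit trail.
# Roles, actions and the policy table are hypothetical.

from datetime import datetime, timezone

POLICY = {
    "researcher": {"run_inference"},
    "admin": {"run_inference", "deploy_model", "export_data"},
}

audit_log = []  # accountability: every attempt is recorded, allowed or not

def authorise(user, role, action):
    """Check the action against the role's policy and log the attempt."""
    allowed = action in POLICY.get(role, set())
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,
        "allowed": allowed,
    })
    return allowed

print(authorise("alice", "researcher", "run_inference"))  # True
print(authorise("bob", "researcher", "export_data"))      # False
```

Logging denied attempts as well as granted ones is what turns access control into oversight: misuse patterns show up in the audit trail even when the policy blocks them.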


With AI’s growing influence across industries, it is imperative that the technology remains aligned with ethical standards and creates a positive societal impact. As regulations like the EU AI Act evolve to address transparency, fairness, and accountability, a responsible AI partner not only complies with current laws but anticipates future requirements. Ethical AI practices, from data privacy to regulatory compliance, ensure AI remains a trusted and positive force.


Inclusion in AI without the big budget

Inclusion in AI is often seen as something that can only be achieved by large tech companies with substantial resources. However, mid-sized companies can also play a pivotal role in driving inclusive AI development. These companies have the flexibility to implement inclusive practices quickly and can create cultures that foster diverse ideas and perspectives without needing to match the scale of big tech firms.


Mid-sized companies can offer agile, inclusive policies, such as flexible work arrangements, inclusive hiring practices, and mentorship programmes. These companies can also promote a culture of inclusion by encouraging diversity in leadership and ensuring that employees from all backgrounds have a voice in decision-making processes. This makes them a powerful force in driving inclusive AI development, setting an example for larger companies to follow.


Research shows that small and mid-sized companies often play a critical role in driving innovation, and this is especially true in AI. By prioritising inclusion in their AI development processes, mid-sized companies can attract a broader pool of diverse talent and create products that reflect a wider range of experiences and needs.


Towards a more inclusive AI ecosystem

For the AI industry to succeed, it must prioritise diversity and inclusion at every stage, from hiring practices to mentorship, team dynamics, and the development of the technology itself. True inclusion goes beyond meeting quotas; it’s about cultivating an environment where individuals, regardless of gender, ethnicity, age, class, or education, can contribute their unique perspectives and skills. A genuinely inclusive AI workforce has the potential to tackle complex global challenges and improve lives across society.


The AI industry wields significant influence over the future, and with that influence comes the responsibility to ensure that the technology we create is fair, inclusive, and beneficial to all. By embedding inclusion into AI development now, we can build a future where AI serves everyone.


Kristin Zwez is SVP New Markets at Nscale


Main image courtesy of iStockPhoto.com and wildpixel

Business Reporter

Winston House, 3rd Floor, Units 306-309, 2-4 Dollis Park, London, N3 1HF

23-29 Hendon Lane, London, N3 1RT

020 8349 4363

© 2025, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543