FunTimes Magazine

How AI Can Promote Inclusivity in Hiring Practices within the Black Community

Aug 18, 2023 02:00PM ● By The Conversation via Reuters Connect

Photo by Ketut Subiyanto


Hiring is often cited as a prime example of algorithmic bias: a tendency to favor some groups over others that becomes unintentionally baked into an AI system designed to perform a specific task.


There are countless stories about this. Perhaps the best-known is Amazon's attempt to use AI in recruitment, where past candidates' CVs were used as the data to train, or improve, the AI.


Since most of the CVs came from men, the AI learned to filter out anything associated with women, such as having been president of a women's chess club or a graduate of a women's college. Needless to say, Amazon did not end up using the system more widely.


Similarly, the practice of filming video interviews and then using an AI to analyze them for a candidate’s suitability is regularly criticized for its potential to produce biased outcomes. Yet proponents of AI in hiring suggest that it makes hiring processes fairer and more transparent by reducing human biases. This raises a question: is AI used in hiring inevitably reproducing bias, or could it make hiring fairer?


From a technical perspective, algorithmic bias refers to errors that lead to unequal outcomes for different groups. However, algorithmic bias can also be seen not as an error but as a function of society: AI is often based on data drawn from the real world, and these datasets reflect society.


For example, if women of color are underrepresented in datasets, facial recognition software has a higher failure rate when identifying women with darker skin tones. Similarly, for video interviews, there is concern that tone of voice, accent, or gender- and race-specific language patterns may influence assessments.
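To see how such a disparity is detected, audits typically break a system's error rate down by demographic group. Below is a minimal, hypothetical sketch in Python, assuming you have a labeled evaluation set with a group attribute alongside each prediction; every record shown is invented purely for illustration.

```python
from collections import defaultdict

# Hypothetical evaluation records: (group, true label, model prediction).
# In a real audit these would come from a labeled test set, not invented values.
results = [
    ("darker-skinned women", 1, 0),
    ("darker-skinned women", 1, 1),
    ("lighter-skinned men", 1, 1),
    ("lighter-skinned men", 1, 1),
]

counts = defaultdict(lambda: [0, 0])  # group -> [mistakes, total]
for group, truth, prediction in results:
    counts[group][0] += int(truth != prediction)
    counts[group][1] += 1

for group, (mistakes, total) in counts.items():
    print(f"{group}: error rate {mistakes / total:.0%}")
```

A gap between the groups' error rates, as this toy data would show, is exactly the kind of signal that prompts a closer look at the underlying training data.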




Another example is that an AI might learn, based on the data, that people called "Mark" do better than people named "Mary", and thus rank them higher. Existing biases in society are reflected in, and amplified through, data.
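As a toy illustration of that mechanism, the sketch below (using scikit-learn, with entirely invented data) trains a simple model on historical hiring records in which candidates named "Mark" were favored at equal experience; the positive weight the model learns on the name feature is the bias being absorbed straight from the data.

```python
from sklearn.linear_model import LogisticRegression

# Invented historical data: [is_named_mark, years_experience] -> hired?
# Past decisions favored "Mark" even at equal experience levels.
X = [[1, 2], [1, 3], [1, 1], [1, 4], [0, 2], [0, 3], [0, 1], [0, 4]]
y = [1, 1, 1, 1, 0, 1, 0, 0]  # most "Mary" rows rejected despite equal experience

model = LogisticRegression().fit(X, y)

# A positive weight on the name feature means the model has learned
# to rank "Mark" higher, purely from the biased historical outcomes.
print("weight on 'is_named_mark':", model.coef_[0][0])
```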


Of course, data is not the only way in which AI-supported hiring might be biased. Designing AI draws on the expertise of a range of people: data scientists, experts in machine learning (where an AI system can be trained to improve at what it does), programmers, HR professionals, recruiters, industrial and organizational psychologists, and hiring managers. Yet it is often claimed that only 12% of machine learning researchers are women, which raises concerns that the group of people designing these technologies is rather narrow.


Machine learning processes can be biased too. For instance, one company that analyzes data to help employers hire programmers found that a strong predictor of good coding skills was frequenting a particular Japanese cartoon website. Hypothetically, if you wanted to hire programmers and used such data in machine learning, an AI might suggest targeting individuals who studied programming at university, have "programmer" in their current job title, and like Japanese cartoons. While the first two criteria are job requirements, the third is not required to perform the job and therefore should not be used. As such, the design of AI hiring technologies requires careful consideration if we are aiming to create algorithms that support inclusion.
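One practical response is to vet features against an explicit allowlist of job-relevant criteria before training, dropping everything else. The sketch below assumes candidate features arrive as a simple dictionary; the feature names are hypothetical.

```python
# Hypothetical allowlist: only features with a defensible link to the job.
JOB_RELEVANT = {"studied_programming", "programmer_job_title", "coding_test_score"}

def vet_features(candidate: dict) -> dict:
    """Drop any feature not on the job-relevant allowlist."""
    return {k: v for k, v in candidate.items() if k in JOB_RELEVANT}

raw = {
    "studied_programming": 1,
    "programmer_job_title": 1,
    "visits_cartoon_site": 1,  # predictive, but not a job requirement
}
print(vet_features(raw))  # the cartoon-site signal is discarded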


Impact assessments and AI audits that systematically check for discriminatory effects are crucial to ensure that AI in hiring is not perpetuating biases. The findings can then be used to tweak and adapt the technology so that such biases do not recur.


Providers of hiring technologies have developed different tools for this, such as audits that check outcomes against protected characteristics, or monitoring that flags potentially discriminatory language by identifying masculine- and feminine-coded words. As such, audits can be a useful way to evaluate whether hiring technologies produce biased outcomes, and to rectify them.
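A common outcome audit is the selection-rate comparison behind the US "four-fifths rule": if any group's selection rate falls below 80% of the most-favored group's rate, the process is flagged for review. The sketch below is a minimal version of that check; the group labels and counts are invented for illustration.

```python
# Invented audit data: group -> (applicants, selected).
outcomes = {"group_a": (100, 40), "group_b": (100, 24)}

rates = {group: selected / applicants
         for group, (applicants, selected) in outcomes.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best  # impact ratio relative to the most-favored group
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} [{flag}]")
```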


So is using AI in hiring leading inevitably to discrimination? In my recent article, I showed that if AI is used in a naive way, without implementing safeguards to avoid algorithmic bias, then the technology will repeat and amplify biases that exist in society and potentially also create new biases that did not exist before.


However, if implemented with consideration for inclusion in the underlying data, in the designs adopted, and in how decisions are taken, AI-supported hiring might be a tool to create more inclusion.


AI-supported hiring does not mean that final hiring decisions are, or should be, left to algorithms. Such technologies can be used to filter candidates, but the final hiring decision rests with humans. Hiring can therefore be improved if AI is implemented with attention to diversity and inclusion. But if the final decision is made by a hiring manager who is not aware of how to create an inclusive environment, bias can creep back in.

