Bernadette Smith

The Use of Artificial Intelligence to Minimize HR Bias (part 2)

Note: this is part two of two posts on how artificial intelligence can be used to reduce bias in various Human Resources processes. This article was co-written by Bernadette Smith and Rhodes Perry. You can read part one here.

So how do we prevent AI from perpetuating these biases in Human Resources processes? We offer five key strategies to help you keep unconscious biases in check:

1)   Expose Workplace Bias

2)   Train AI to Spot Bias

3)   Offer AI Transparency

4)   Acknowledge Workplace Culture

5)   Monitor & Evaluate AI

When used properly, these strategies can help counteract bias. Making these careful considerations now will give your organization a competitive advantage over other leaders in your industry.

1)    Expose Workplace Bias

As technology advances, AI will take on more sophisticated decisions, like identifying new markets where a company can be profitable, or finding the most qualified candidates for jobs by helping HR look beyond dominant groups or traditional referral networks.

Before programming AI to take on these tasks, we must first design algorithms and leverage AI to expose our individual and organizational biases. By taking this action, AI will be able to spot bias in organizational decision making and help redirect us toward fairer, more accurate decisions that serve the organization's performance, growth, and profit.

For example, say an organization has historically fallen short of recruiting and hiring qualified women for leadership positions. The senior executive team recognizes this historical trend, and they want to change it to get more women in their executive pipeline. When programmed correctly, AI can help the organization examine past job postings for gender biased language, which may have discouraged some applicants from pursuing these positions.

Updating job postings with gender-neutral language can increase the number of women who apply and ultimately bring more women into the leadership pipeline. Once your talent pipeline has become more diverse, AI can help track the pattern of hiring decisions made by individual managers and alert them, along with HR leadership, if hidden biases against female candidates (or candidates from other underrepresented groups) arise.
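To make the alerting idea concrete, here is a minimal sketch, in Python, of what such a monitor might look like. It assumes hiring decisions are recorded as simple (manager, candidate gender, outcome) records; the function names and data layout are illustrative only, and the 80% alert threshold borrows the widely used "four-fifths" adverse-impact heuristic rather than anything from the original post.

```python
from collections import defaultdict

# Minimal sketch of a hiring-pattern monitor. All names and the data layout
# are hypothetical; the 80% threshold borrows the common "four-fifths"
# adverse-impact heuristic as a simple alerting rule.
FOUR_FIFTHS = 0.8

def selection_rates(decisions):
    """decisions: records like {"manager": "A. Lee", "gender": "F", "hired": True}."""
    counts = defaultdict(lambda: {"applied": 0, "hired": 0})
    for d in decisions:
        key = (d["manager"], d["gender"])
        counts[key]["applied"] += 1
        counts[key]["hired"] += int(d["hired"])
    return {k: v["hired"] / v["applied"] for k, v in counts.items()}

def flag_possible_bias(decisions):
    """Alert when a manager hires women at well below the rate they hire men."""
    rates = selection_rates(decisions)
    alerts = []
    for manager in {m for m, _ in rates}:
        women = rates.get((manager, "F"))
        men = rates.get((manager, "M"))
        if women is not None and men and women < FOUR_FIFTHS * men:
            alerts.append(f"{manager}: women hired at {women:.0%} vs. men at {men:.0%}")
    return alerts
```

In practice, an alert like this would be a prompt for human review, not a verdict on any individual manager.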

2)    Train AI to Spot Bias 

In order to train AI to spot bias, your team must first carefully program the algorithms. For the organization committed to recruiting more women for executive positions, programmers must consider how candidates are recruited, starting with the job announcements.

Programmed to spot gender-biased terms like "outspoken" and "aggressively pursuing opportunities," which have been shown to attract male applicants, and terms like "caring" and "flexible," which do the opposite, AI can alert recruiters to their unconscious biases when drafting job announcements.
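As a rough illustration of how such a check might be wired up, the sketch below scans a job announcement for gender-coded wording. The word lists are tiny examples seeded with the terms mentioned above; a real system would rely on a much larger, validated lexicon, and every name here is hypothetical.

```python
import re

# Illustrative gender-coded language check for job announcements.
# These word lists are small examples only, seeded with the terms above.
MASCULINE_CODED = {"outspoken", "aggressive", "aggressively", "competitive", "dominant"}
FEMININE_CODED = {"caring", "flexible", "supportive", "collaborative"}

def flag_gender_coded_terms(posting_text):
    """Return any coded terms found in the posting, grouped by category."""
    words = set(re.findall(r"[a-z']+", posting_text.lower()))
    return {
        "masculine_coded": sorted(words & MASCULINE_CODED),
        "feminine_coded": sorted(words & FEMININE_CODED),
    }

# Example:
# flag_gender_coded_terms("Seeking an outspoken leader aggressively pursuing opportunities")
# -> {"masculine_coded": ["aggressively", "outspoken"], "feminine_coded": []}
```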

Training AI to spot bias will initially require a high level of human intervention. Fortunately, the more your organization invests on the front end, the better your HR team will become at avoiding the replication of unconscious biases in this part of its work.

Similarly, companies should be specific in how they program AI to search for new talent, as there is no one-size-fits-all definition of the best engineer. Rather, there is only the best engineer for one particular role or project at one particular time. When carefully programmed, AI is well suited to find the ideal candidate for that role or project.

3)    Offer AI Transparency

Humans create AI, and all humans are prone to bias. Therefore, we must be extremely careful not to confirm existing biases or introduce new ones when training AI, especially when using AI to meet our HR needs. To overcome these biases, it's important to create a system of checks and balances with safety standards in mind.

In her 2016 best-selling book, Weapons of Math Destruction, Cathy O’Neil observes that automotive companies wouldn’t design a car and send it out into the world without knowing whether it’s safe. Safety standards are at the center of car design, and by the same token, algorithms have to be designed with fairness and legality in mind, with standards that are understood by everyone, from the business leader to the people being served.

To do our best at ridding algorithms of bias, some call for transparency in how they are constructed. Transparency, at its most basic level, is knowing whether a human or a machine is making decisions online. Given AI's growing influence in shaping our world, it is critical that we know when a machine is making a decision, and that we also know the rationale behind that decision.

When algorithms affect human rights, public values, or public decision-making, we need oversight and transparency. Unfortunately, most algorithms are too opaque for the average outside user to understand how they are constructed. Therefore, some are strongly urging AI programmers to first consider the values and desired results of various AI technologies before designing them, to ensure they are in the best interest of the public.

At this point, there is no silver bullet for holding AI accountable for its impact on the public. Fortunately, there is some good news with respect to concerns around transparency. A collaboration between Facebook, Google, Microsoft, IBM, and Amazon has established the Partnership on AI, a group formed to increase the transparency of algorithms and automated processes and ultimately keep AI in check.



4)    Acknowledge Workplace Culture

While we may have a vision for creating a future workplace culture where we all can show up as our authentic selves, it's also critical to acknowledge that most workplaces aren't there yet. It doesn't take sophisticated predictive modeling to determine, for example, that women are disproportionately likely to jump off the corporate ladder at the halfway point because some may be struggling to balance the demands of their jobs with the demands of their families.

An organization committed to smart talent management will ask what it is about the demands of senior-level positions that makes them incompatible with women's lives, rather than assuming that women simply aren't qualified for executive-level positions. When the leadership team begins asking these questions about its workplace culture, it then becomes responsible for exploring how to shift that culture so the organization doesn't lose the talent and institutional knowledge of women, or incur the high costs of replacing them.

When leaders take this responsibility, they can supplement their efforts by applying a second layer of machine learning that looks at its own suggestions and makes further recommendations. For example, AI can be programmed in the following manner: “It looks like you’re trying to do X, so consider doing Y,” where X might be promoting more women to executive level positions, making the workforce more ethnically diverse, or improving retention statistics, and Y is redefining job responsibilities with greater flexibility, hosting recruiting events in communities of color, or redesigning benefits packages based on what similar companies offer.
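A sketch of that "X, so consider Y" pattern, assuming a simple hand-written rule table rather than a learned model, might look like the following. The goal-and-suggestion pairs come directly from the examples in the paragraph above; everything else is hypothetical.

```python
# Hand-written rule table standing in for the second machine-learning layer
# described above; a real system would learn these pairings from outcome data.
RECOMMENDATIONS = {
    "promote more women to executive positions":
        "redefining job responsibilities with greater flexibility",
    "make the workforce more ethnically diverse":
        "hosting recruiting events in communities of color",
    "improve retention statistics":
        "redesigning benefits packages based on what similar companies offer",
}

def suggest_next_step(observed_goal):
    """Turn an observed goal (X) into a suggested intervention (Y)."""
    suggestion = RECOMMENDATIONS.get(observed_goal)
    if suggestion is None:
        return "No suggestion available for this goal yet."
    return f"It looks like you're trying to {observed_goal}, so consider {suggestion}."
```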



5)    Monitor & Evaluate AI

When correctly programmed, AI excels at alerting us to workplace bias; however, in isolation it cannot eliminate it. In order to reduce workplace bias, it’s up to all of us to pay attention to the existence of bias, leverage AI to spot it, and work together to overcome it.

As we begin to entrust AI with more complex and consequential decisions, it's critical that we continuously monitor, evaluate, and train AI over the long term, just as we educate our people to stay current with the latest changes in our industries and remain relevant and competitive. The organizations most proactive about programming AI to do good will be best positioned to leverage it to help them perform well.

For more information about the power and pitfalls of AI, please consider watching a webinar we facilitated earlier this year for members of the International Association for Human Resource Information Management.
