When we strip away the negativity of the term unconscious bias, people open up. They start to see all the assumptions we make every day. Many of those assumptions are incredibly helpful and timesaving. But some of them put others in a box and hold them back. Using the term “unconscious bias” shuts the door for many people before they even see what’s in the room. And we want them to see what’s in the room!
Most of the diversity training work we do is for industries with a large front line workforce. We define “front line” pretty broadly: basically anyone in business development/sales, customer service, or marketing is at the front line. These are the folks who interact with clients, consumers, customers, patients, students, guests, travelers… you get the picture.
The front line is largely ignored when it comes to diversity training. After all, these employees are often out of the office and unable to attend an hour-long or half-day unconscious bias training. They may be road warriors, going from sales meeting to sales meeting. Or maybe they’re standing all day interacting with customers, with no access to a desktop computer or LMS (learning management system).
The challenge here is that the front line is the face of the brand, and if a front line employee engages in a micro-aggression or an act of unconscious bias, the brand’s reputation is at risk. We believe a proactive diversity training approach is best, one that meets these front line workers where they are: on their smartphones. Situational micro-learning and eLearning are most practical for these workers: short videos and exercises that make sense without overcomplicating things. After all, most “diversity training” expresses the concepts of the golden (or platinum) rule: respect, great listening, and avoiding assumptions. In our opinion, communicating these concepts doesn’t require a four-hour (or even one-hour) training. We can do it in about 10 minutes.
Unconscious bias training is not rocket science. We find that over-complicating things is a turn-off, and folks shut down when faced with the prospect of yet another training. We’re all about the KISS method.
We believe that when front line workers are trained to be more inclusive, that will have a profound ripple effect not only on the corporate culture and employee morale, but on the customer experience. If customers feel like they can truly be themselves without fear of rejection, they will carry themselves with greater dignity and have more loyalty towards your brand. That is priceless. That is our vision.
Note: this is part two of two posts on how artificial intelligence can be used to reduce bias in various Human Resources processes. This article was co-written by Bernadette Smith and Rhodes Perry. You can read part one here.
So how do we prevent AI from perpetuating these biases in Human Resources processes? We offer five key strategies to consider that will help you keep unconscious biases in check. These strategies include:
1) Expose Workplace Bias
2) Train AI to Spot Bias
3) Offer AI Transparency
4) Acknowledge Workplace Culture
5) Monitor & Audit AI
When used properly, these strategies can help circumvent bias. Making these careful considerations now will give your organization a competitive advantage over other leaders in your industry.
1) Expose Workplace Bias
As technology advances, AI will take on more sophisticated decisions, like identifying new markets where a company can be profitable, or finding the most qualified candidates for jobs by supporting HR to look beyond dominant groups or traditional referral networks.
Before programming AI to take on these tasks, we must first design algorithms and leverage AI to expose our individual and organizational biases. By taking this action, AI will have the ability to spot bias with respect to organizational decision making, and help redirect us to make more fair and accurate decisions that are in the best interest of the organization’s performance, growth, and profit.
For example, say an organization has historically fallen short of recruiting and hiring qualified women for leadership positions. The senior executive team recognizes this historical trend, and they want to change it to get more women in their executive pipeline. When programmed correctly, AI can help the organization examine past job postings for gender biased language, which may have discouraged some applicants from pursuing these positions.
Updating job postings with gender-neutral language can increase the number of women applicants and ultimately bring more women into the leadership pipeline. Once your talent pipeline has become more diverse, AI can help track the pattern of hiring decisions made by individual managers, and alert them, along with HR leadership, if hidden biases against female candidates (or candidates from other underrepresented groups) arise.
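As a minimal sketch of what such a manager-level alert could look like (the field names, the threshold, and the alert rule below are all illustrative assumptions, not any vendor’s actual logic):

```python
# Hypothetical sketch of a manager-level hiring alert. Field names,
# the threshold, and the alert rule are illustrative assumptions.
from collections import defaultdict

def hiring_alerts(decisions, group, threshold=0.5):
    """Flag managers whose hire rate for `group` falls below
    `threshold` times their overall hire rate.

    decisions: list of dicts with 'manager', 'group', 'hired' keys.
    """
    overall = defaultdict(lambda: [0, 0])    # manager -> [hires, total]
    in_group = defaultdict(lambda: [0, 0])
    for d in decisions:
        overall[d["manager"]][0] += d["hired"]
        overall[d["manager"]][1] += 1
        if d["group"] == group:
            in_group[d["manager"]][0] += d["hired"]
            in_group[d["manager"]][1] += 1
    alerts = []
    for manager, (hires, total) in overall.items():
        g_hires, g_total = in_group[manager]
        if g_total and (g_hires / g_total) < threshold * (hires / total):
            alerts.append(manager)
    return alerts

decisions = [
    {"manager": "A", "group": "women", "hired": 0},
    {"manager": "A", "group": "women", "hired": 0},
    {"manager": "A", "group": "men", "hired": 1},
    {"manager": "A", "group": "men", "hired": 1},
]
print(hiring_alerts(decisions, "women"))  # → ['A']
```

A real system would use statistically sound thresholds and far more data, but the core idea is the same: compare each manager’s decisions for a group against their own baseline and surface the gap.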
2) Train AI to Spot Bias
In order to train AI to spot bias, your team must first carefully program the algorithms. For the organization committed to recruiting more women for executive positions, programmers must consider how candidates for executive positions are recruited, starting with the job announcements.
By programming AI to spot gender-coded terms like “outspoken” and “aggressively pursuing opportunities,” terms proven to attract male job applicants, and terms like “caring” and “flexible,” which do the opposite, AI can help alert recruiters to their unconscious biases when drafting job announcements.
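A crude version of this kind of screening can be sketched in a few lines; the word lists below are illustrative stand-ins, not a validated lexicon:

```python
# Hypothetical sketch: flag gender-coded terms in a job posting draft.
# The word lists are illustrative examples, not a validated lexicon.

MASCULINE_CODED = {"outspoken", "aggressive", "aggressively", "competitive", "dominant"}
FEMININE_CODED = {"caring", "flexible", "nurturing", "supportive"}

def flag_gendered_terms(posting):
    """Return the gender-coded terms found in a job posting."""
    words = {w.strip(".,;:!?()").lower() for w in posting.split()}
    return {
        "masculine_coded": sorted(words & MASCULINE_CODED),
        "feminine_coded": sorted(words & FEMININE_CODED),
    }

draft = "We want an outspoken, competitive self-starter who is also caring."
print(flag_gendered_terms(draft))
```

Production tools go further, scoring phrases in context rather than matching single words, but even a simple flag like this gives recruiters a prompt to reconsider their wording.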
Training AI to spot bias will initially require a high level of human intervention. Fortunately, the more your organization invests on the front end to train AI to spot bias, the better your HR team will become at avoiding the replication of unconscious biases in this part of your workload.
Similarly, companies should look for specificity in how they program AI to search for new talent as there’s no one-size-fits-all definition of the best engineer. Rather, there is only the best engineer for one particular role or project at one particular time. When carefully programmed, AI is well suited to find the ideal candidate for that role or project.
3) Offer AI Transparency
Humans create AI, and all humans are prone to bias. Therefore, we must be extremely careful that we are not confirming biases or introducing new ones when training AI, especially when using AI to meet our HR needs. In order to overcome these biases, it’s important to create a system of checks-and-balances with safety standards in mind.
In her 2016 best-selling book, Weapons of Math Destruction, Cathy O’Neil observes that automotive companies wouldn’t design a car and send it out into the world without knowing whether it’s safe. Safety standards are at the center of car design, and by the same token, algorithms have to be designed with fairness and legality in mind, with standards that are understood by everyone, from the business leader to the people being served.
To do our best at ridding algorithms of bias, some call for transparency in how they are constructed. Transparency at its most basic level is knowing whether a human or a machine is making decisions online. Given AI’s growing influence in shaping our world, it is critical that we know when a machine is making a decision, and the rationale for how it made that decision.
When algorithms affect human rights, public values, or public decision-making, we need oversight and transparency. Unfortunately, most algorithms are too opaque for the average outside user to understand how they are constructed. Therefore, some are strongly urging AI programmers to first consider the values and desired results of various AI technologies before designing them, to ensure they are in the best interest of the public.
At this point, there is no silver bullet for holding AI accountable for its impact on the public. Fortunately, there is some good news with respect to concerns around transparency: Facebook, Google, Microsoft, IBM, and Amazon have established the Partnership on AI, a group formed to increase the transparency of algorithms and automated processes, ultimately keeping AI in check.
4) Acknowledge Workplace Culture
While we may have a vision for creating a future workplace culture where we all can show up as our authentic selves, it’s also critical to acknowledge that for most workplaces, we aren’t there yet. It doesn’t take sophisticated predictive modeling to determine, for example, that women are disproportionately likely to jump off the corporate ladder at the halfway point because some may be struggling with balancing the demands of their jobs with the demands of their family.
An organization committed to smart talent management will ask what it is about the demands of senior level positions that make them incompatible with women’s lives, rather than assuming that women simply aren’t qualified for executive level positions. When the leadership team begins asking these questions about their workplace culture, they are then responsible for exploring what they can do to shift it so that their organization doesn’t lose the talent and institutional knowledge of women, or incur the high costs of replacing them.
When leaders take this responsibility, they can supplement their efforts by applying a second layer of machine learning that looks at its own suggestions and makes further recommendations. For example, AI can be programmed in the following manner: “It looks like you’re trying to do X, so consider doing Y,” where X might be promoting more women to executive level positions, making the workforce more ethnically diverse, or improving retention statistics, and Y is redefining job responsibilities with greater flexibility, hosting recruiting events in communities of color, or redesigning benefits packages based on what similar companies offer.
5) Monitor & Audit AI
When correctly programmed, AI excels at alerting us to workplace bias; however, in isolation it cannot eliminate it. In order to reduce workplace bias, it’s up to all of us to pay attention to the existence of bias, leverage AI to spot it, and work together to overcome it.
As we begin to entrust AI with more complex and consequential decisions, it’s critical that we continuously monitor, evaluate, and train AI over the long-term, just as we educate our people, to stay up to speed on the latest changes in our industries to remain relevant and competitive. The most proactive organizations programming AI to do good will be best positioned to leverage AI to help their organization perform well.
For more information about the power and pitfalls of AI, please consider watching a webinar we facilitated earlier this year for members of the International Association for Human Resource Information Management.
Note: this is part one of two posts on how artificial intelligence can be used to reduce bias in various Human Resources processes. This article was co-written by Bernadette Smith and Rhodes Perry.
When a workforce is diverse, that talent has a broader understanding of the needs of their diverse clients. Naturally, when an organization better understands the needs of its target market, they can better innovate their products and services – and that leads to an increase in revenue.
According to management consulting company McKinsey & Co, companies that exhibit gender and ethnic diversity are, respectively, 15 percent and 35 percent more likely to perform better than those that don't. Their research shows that organizations with more racial and gender diversity also have better sales revenue, more customers, and higher profits.
Unfortunately, all of us, even the most well-meaning people in Human Resources, are guilty of bias, which negatively affects the creation of a diverse and inclusive workforce. You may have heard the story of a man named Jose, who was having no luck on his job applications. He began applying with the name “Joe” instead, and suddenly started receiving calls.
This bias, called unconscious bias, is so subtle that most of us don’t notice it or catch ourselves. Here are some other common ways this can play out in HR:
• Geography bias (e.g., local job candidates receiving preference over non-local candidates)
• Gender bias (e.g., women with children are given fewer opportunities than men, yet women are also penalized when they are not seen as nurturing)
• Appraisal bias (e.g., a manager compares an employee’s performance to other employees instead of to the company standard)
• Association bias (e.g., favoring those who went to the same college, are members of the same organization or association, etc.)
The great news is that technology, specifically artificial intelligence (AI), offers clients solutions to minimize bias and therefore create a more diverse workforce – and as a result increase revenue. In fact, AI is currently being used within human resources processes to:
• Set hiring priorities (e.g., prioritize which positions need to be filled first)
• Suggest hiring trends
• Neutralize resume screening (e.g., remove certain affiliations, remove geography)
• Standardize job descriptions (e.g., trigger alerts when gendered words such as the masculine-coded “competitive” are used)
• Assess leaders and potential leaders (e.g., identify employees for internal promotions)
• Improve employee retention (e.g., suggest which employees are retainable)
• Standardize employee assessments (e.g., customize and automate appraisal templates)
• Synthesize performance review data (e.g., suggest specific actions per employee as areas for improvement)
• Synthesize exit interview data and provide insights on why employees leave
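To make one of these concrete, neutralized resume screening can be sketched as simply dropping bias-prone fields before a reviewer or model sees the record. The field names below are illustrative assumptions, not a real applicant-tracking schema:

```python
# Hypothetical sketch of neutralized resume screening: strip fields that
# commonly trigger unconscious bias before review. Field names are
# illustrative, not a real applicant-tracking schema.

FIELDS_TO_REDACT = {"name", "address", "city", "affiliations"}

def neutralize_resume(resume):
    """Return a copy of the resume with bias-prone fields removed."""
    return {k: v for k, v in resume.items() if k not in FIELDS_TO_REDACT}

resume = {
    "name": "Jose Ramirez",
    "city": "El Paso",
    "skills": ["Python", "SQL"],
    "years_experience": 7,
}
print(neutralize_resume(resume))
# Reviewers now see only skills and experience.
```

This is the same idea behind the “Jose vs. Joe” story above: if the name never reaches the screener, it can’t trigger the bias.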
While AI can help reduce unconscious bias and lead to a more diverse workforce, it’s not a panacea. Simply put, AI requires all of us humans to curate the data it uses in its analysis.
In its current form, AI is simply an extension of our existing culture, which is riddled with biases and stereotypes. This means that as we program AI, and as AI learns from us through our words, data sets, and programming, we run the risk of having machine learning perpetuate our culture’s biases. For example, Google’s translation software converts gender-inclusive pronouns from several languages into male pronouns (he, him, his) when talking about medical doctors, and female pronouns (she, her, hers) when talking about nurses, perpetuating gender-based stereotypes.
This built-in bias can show up in a number of ways in AI HR technology. For example, if only one employee is providing evaluation data that is used to set the standard for performance reviews, then there are not enough perspectives to establish balance and generate non-biased datasets. When a team of people is conducting interviews, they often do not use standardized questions. This can also skew the datasets because there is not enough consistency in the responses to generate unbiased data.
When datasets have a low volume of responses, they are also inherently more biased because there aren’t as many varied possibilities. Even a company like Walmart, which hires over 1,000 people per day, doesn’t generate a massive supply of data. One thousand people per day is child’s play for machine learning, and the results, again, can perpetuate any biases that are unconsciously built into the company’s processes.
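The small-data problem is easy to demonstrate with a toy simulation; the 30 percent “true rate” below is a made-up number for illustration only:

```python
# Illustrative simulation: small datasets produce noisier, more easily
# skewed estimates than large ones. The 30% "true rate" is a made-up
# number for demonstration purposes.
import random

random.seed(42)

def sample_hire_rate(true_rate, n):
    """Estimate a hire rate from n simulated hiring decisions."""
    hires = sum(1 for _ in range(n) if random.random() < true_rate)
    return hires / n

small = sample_hire_rate(0.30, 20)       # roughly one day at a small firm
large = sample_hire_rate(0.30, 20000)    # years of pooled hiring data
print(f"small sample estimate: {small:.2f}")
print(f"large sample estimate: {large:.2f}")
```

The small sample can land far from the true 30 percent purely by chance, while the large one hugs it closely. An algorithm trained on the small sample would confidently learn a pattern that isn’t really there.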
In part two, we’ll address solutions that can improve AI’s reliability in reducing bias.