Four Unethical Uses Of AI In Recruitment
Artificial intelligence (AI) is disrupting every industry, and the recruitment market is no exception. By lowering the cost of prediction, AI offers cheaper, faster, and more efficient ways to connect people to jobs, as well as the promise of unlocking human potential. This is a big opportunity. Most people are unhappy with their careers, and many organizations complain about their talent gaps - a recent ManpowerGroup report noted that 40% of global companies are experiencing critical talent shortages, the highest figure in a decade. Technology can help us bridge the gap between supply and demand and make the job market less inefficient, much as dating apps have done in the market for love.
However, as with any technological innovation, it is important to understand the ethical implications of using AI to attract and select employees. Even if AI can improve our ability to match people to the right jobs - and, if we demand the kind of evidence we have historically required from traditional hiring tools (i.e., peer-reviewed scientific journal articles), the jury is still out - we need to ensure that the use of AI in recruitment is ethical. This goes beyond laws and regulations, such as the European Commission's GDPR, and raises the question of whether AI recruitment tools can harm or unfairly disadvantage candidates, especially when candidates may not be aware of it.
To that end, let us consider four aspects of AI recruitment that have the potential to downgrade the ethical standards of scientifically defensible selection methods (e.g., well-designed structured interviews, valid psychometric assessments, and job simulations).
1. Cyber-snooping: Traditional recruitment tools were invented to compensate for our historical inability to collect sufficient data on candidates' history and behaviors to predict their future performance. Unless the person already worked for you, you had to infer their potential fit to a new role without much past performance data (and even today many employers lack objective performance data on their incumbents). This is no longer a problem in the digital age, as over half of the world's population spends much of its waking hours online - in the US and the UK, more time than asleep - leaving behind a vast trail of rich data on individual preferences, values, and abilities. The only challenge is to code such data into relevant talent signals, but scientific research demonstrates how this can be done.
For instance, machine-learning algorithms can be used to predict candidates' intelligence and personality - including their dark side - from their Facebook profiles. Likewise, AI has been effectively used to translate our Twitter footprint into a fairly comprehensive personality profile, because our choice of words reflects who we are, including our talent and career potential. Note there are already free tools available, including IBM Watson's Personality Insights, to decode someone's personality from any representative text sample. In fact, even mining people's Spotify playlists or Netflix choices can tell us a great deal about who they are - certainly more than we can gather from just looking at their resumes.
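To make the mechanics less abstract, here is a deliberately toy sketch of the general idea: scoring a text sample against hand-picked word lists for a couple of traits. The word lists and trait names below are invented for illustration only; real systems (such as the Watson service mentioned above) rely on far richer features and trained models, not simple word counts.

```python
# Toy closed-vocabulary sketch (not any vendor's actual model): count how
# often a text sample hits hypothetical word lists tied to two traits.
TRAIT_LEXICON = {
    "openness": {"curious", "imagine", "art", "novel", "wonder"},
    "extraversion": {"party", "friends", "talk", "excited", "we"},
}

def trait_scores(text: str) -> dict:
    """Return, per trait, the fraction of words matching that trait's word list."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return {trait: 0.0 for trait in TRAIT_LEXICON}
    return {
        trait: sum(w in lexicon for w in words) / len(words)
        for trait, lexicon in TRAIT_LEXICON.items()
    }

sample = "We had an exciting party with friends, then I read a novel about art."
print(trait_scores(sample))
```

Even this crude version illustrates the ethical point: the inference happens silently, on text the author never intended as assessment material.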
However, there is a difference between what we can and should know about people, and that difference is a question of ethics. Although the price of any free online service is people's personal data, it is one thing to use these data for targeted advertising (and the eternal promise of more relevant ads), and quite another to use them to make consequential employment and career decisions about users who were totally unaware of this possibility when they signed up for such services. Even when it is perfectly legal to deploy AI to understand people and predict their performance, as in the case of mining employees' e-mail data (both content and context, or meta-data), it is questionable whether such data would be used to enhance employees' performance - for example, by diagnosing training and development needs, or moving them to more suitable roles - rather than to help managers advance their own political agendas. Even when managers have good intentions and act ethically, employees' mere perceptions of this technology could render it harmful - even if it provided a more accurate picture of their performance and productivity than the subjective (and biased) opinion of their manager, as one can well imagine it would. One way to alleviate this concern would be to allow candidates - both internal and external - to opt in, giving active consent and permission to having their data examined.
2. Withholding feedback: Historically, recruitment tools did not provide much feedback or information to candidates on their profile or on how their results shaped the outcome of the recruitment decision, except when the outcome was positive. While this is disappointing - why should we deprive candidates of useful career feedback and the chance to understand themselves better? - it is also true that the high-touch nature of traditional recruitment tools makes giving feedback more time-consuming and expensive.
This is no longer the case with AI-based tools, which can synthesize vast datasets to provide automatic and personalized feedback at scale, and in a confidential manner. For example, imagine if the innovative tools described in point (1) above provided candidates with instant personal feedback on what the algorithms are inferring about them: e.g., "because you listen to so much Justin Bieber, our AI recruiter inferred that you are not sufficiently intellectual for this role, which requires critical thinking and curiosity"; "because you watch so many obscure documentaries, our machine-learning algorithms predict that you will find this job rather boring and look for another job as soon as possible"; or "given that you tweet so much about yourself, and that you post so many selfies on Facebook, our algorithms have reasons to believe that you will not be a good team player, so we will pass on you".
Note that algorithms will never be perfect, but revealing their underlying logic will make even imperfect algorithms more transparent and ethical, not least because it gives people the opportunity to change their habits to become more eligible for a given role. This notion is well captured in the dystopian Black Mirror episode Nosedive, where the protagonist is made aware that in order to afford a better home she would need to befriend higher-status people on social media. Equally, as much as the realization that companies and countries are implementing AI-based credit scoring may seem creepy, such systems would be more ethical if (a) we understood the criteria, and (b) we were able to change our scores (through the right behaviors). It is also noteworthy that people are generally far more shocked by AI-based tools than by the biased, political, and idiosyncratic decision-making rules that AI is trying to upgrade. The next point expands on this.
3. Predicting biased outcomes: There is no doubt that the biggest potential advantage of AI recruitment tools is to minimize our reliance on human intuition, which is notoriously biased. In an age of ubiquitous data and cheap prediction, there is no excuse for playing it by ear, so when hiring managers or recruiters claim that they know talent when they see it, we should demand some evidence. Unfortunately, AI can also be deployed to predict the wrong outcomes. For example, when machine-learning algorithms mine digital interview data to predict whether recruiters will want to hire the candidate, they simply perpetuate human biases: training AI models to emulate human preferences or biased decision-making will only replicate the shortcomings of our own mind. At the same time, there is clearly a more ethical alternative, which is to deploy algorithms that predict real performance differences between candidates. In fact, unlike the human mind, AI can be trained to focus on relevant indicators of potential while ignoring false signals or factors that augment social injustice. For example, you can train AI models to be gender, age, or race blind in their assessment of interview data, but a human interviewer can never be taught to ignore that the person sitting in front of them is female, old, or black (for a smart discussion of this issue see this blog by my colleague Ben Taylor). In short, faster and cheaper prediction does not solve the fundamental challenge in recruitment, which is to have solid criterion or outcome data to predict: only when you have solid measures of human performance can you build meaningful models to predict performance and quantify a person's suitability for a role or job.
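One concrete (if partial) version of the "blindness" idea above is simply stripping protected attributes from candidate records before any model sees them. The field names below are hypothetical, and this "fairness through unawareness" step is known to be insufficient on its own - other features can proxy for the removed ones - but it illustrates how an algorithm, unlike a human interviewer, can literally be prevented from seeing gender, age, or race:

```python
# Minimal sketch: remove protected attributes from a candidate record
# before it reaches any scoring model. Field names are illustrative only.
PROTECTED = {"gender", "age", "race"}

def blind(record: dict) -> dict:
    """Return a copy of a candidate record without protected attributes."""
    return {k: v for k, v in record.items() if k not in PROTECTED}

candidate = {
    "interview_score": 8.2,
    "work_sample_score": 7.5,
    "gender": "female",
    "age": 58,
    "race": "black",
}
print(blind(candidate))  # only the job-relevant fields remain
```

In practice this would be one step among several, alongside auditing the model's outcomes for adverse impact rather than trusting that unawareness alone guarantees fairness.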
4. Black-box selection: A final ethical consideration regarding AI recruitment tools is the degree to which they explain why a candidate has potential for a given job (or not). It is not enough to predict future job performance - recruitment tools should also help us understand the basis of such predictions, which means having a rationale for selecting or rejecting a candidate. For example, when voice-scraping algorithms identify a connection between certain physical properties of speech and job performance, it is important to understand what the basis for such a connection is. More specifically: is there a causal link between a person's voice and their performance; is this relationship driven by other (more relevant) variables; is the person able to control or modify their voice to improve their performance; and would hiring people with such voices have an adverse impact on certain groups? To use a more extreme example, suppose that we looked at candidates' genetic data (DNA) to predict their potential suitability for a job or role. On the one hand, there are reasons to expect DNA data to be predictive of career potential, as all of the major facets of talent and job-related competencies (e.g., EQ, grit, IQ, entrepreneurship, and leadership) have a biological basis. On the other hand, even if such a basis could be clearly identified at the genetic level, it would be ethically questionable to base recruitment decisions on such data, not least because of our inability to understand the exact processes by which such biological predispositions translate into different levels of performance, and the degree to which individuals are free to escape their genetic fate. Thus, even if black-box selection tools were legal, they may result in unethical hiring decisions and fail to explain why certain candidates are deemed more talented, which would also limit our ability to train and develop people's talents.
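The alternative to a black box is a model that can account for its own output. As a minimal sketch - with entirely hypothetical features and weights, not any real selection tool - a simple linear scoring model can report not just an overall score but each feature's contribution to it, which is exactly the rationale a rejected candidate could be given:

```python
# Hedged sketch of interpretable scoring: hypothetical weights over
# hypothetical, job-relevant assessment scores. The point is that the
# model can decompose its prediction feature by feature.
WEIGHTS = {"structured_interview": 0.5, "work_sample": 0.4, "cognitive_test": 0.3}

def score_with_rationale(candidate: dict):
    """Return an overall score plus each feature's contribution to it."""
    contributions = {
        feature: WEIGHTS[feature] * candidate.get(feature, 0.0)
        for feature in WEIGHTS
    }
    return sum(contributions.values()), contributions

total, why = score_with_rationale(
    {"structured_interview": 8.0, "work_sample": 6.0, "cognitive_test": 7.0}
)
print(total)
print(why)  # e.g., the structured interview contributed the most
```

A voice- or DNA-based black box offers no analogue of `why`: it can rank candidates, but it cannot justify the ranking.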
To conclude, there is no question that AI will improve our ability to match people to more suitable jobs and careers, which would have enormous personal, social, and economic benefits... so long as we deploy these new technologies in an ethical way. Giving people the right to choose whether their data are mined, ensuring that they actually benefit from it (even when they don't get the job), and minimizing the risk of unfair decisions and harm to candidates would be a good starting point.