Do Androids dream of the Equality Act?
There is no denying that the world of work is changing. Technologies not dreamed of just 10 short years ago are shifting our perception of the nature of the employer/employee relationship. Artificial intelligence (AI) and the so-called 'rise of the robots' are making some people question whether, in the future, we will have an employer (or even a job) at all.
Against this backdrop I was struck by news from the US that my next interview could be conducted by a robot. I was initially sceptical. I had seen a recent episode of The Apprentice in which the robots struggled with the most basic of yoga moves, so it seemed a leap to expect them to carry out the sort of rigorous selection and assessment processes carried out by employers every day. It turned out I was half-right - the robots are less Robocop and more accurately described as "algorithms" (for which we are all probably grateful). Nonetheless I was intrigued by what appeared to be a novel use of AI in the recruitment and selection of job candidates.
The algorithm in question is used to screen video interviews and thereby eliminate the need for a human being to conduct the initial "sifting" process. It is undoubtedly clever. The idea is that an employer will sit down with psychologists and data scientists from the developer firm to come up with 6 or 7 questions designed to test candidates against the skills and qualities required for a particular role. Candidates are then asked to submit a video interview in which they answer those questions.
So far, so (fairly) standard. Many employers ask candidates to submit video interviews as a means of streamlining the sifting process, particularly where there is a high volume of applications. However, those videos are usually then evaluated by a human being (and ideally one trained in unconscious bias). Not here. Instead the submitted video interviews are analysed by an algorithm based on how the employer's top-performing employees responded to the same questions. Interviews are then ranked not only on the words that the candidates say, but also on their tone of voice, rate of speech, complexity of vocabulary, body language and "micro-expressions". A key selling point is that the algorithm can detect physical "tells" signalling incongruity in a candidate's response.
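The vendor's actual model is not disclosed, but the basic ranking idea can be illustrated with a deliberately simplified sketch. Everything below - the feature names, the unweighted "top performer" centroid and the cosine-similarity score - is an assumption made purely for illustration, not a description of the real product.

```python
# Hypothetical sketch of similarity-based candidate ranking.
# Feature names and the scoring method are illustrative assumptions only.
from dataclasses import dataclass
from math import sqrt
from typing import List

@dataclass
class InterviewFeatures:
    """Toy feature vector extracted from a video interview."""
    word_relevance: float          # overlap with keywords in 'ideal' answers
    speech_rate: float             # normalised words per minute
    vocab_complexity: float        # e.g. normalised mean word rarity
    expression_congruence: float   # how 'consistent' micro-expressions appear

    def as_vector(self) -> List[float]:
        return [self.word_relevance, self.speech_rate,
                self.vocab_complexity, self.expression_congruence]

def centroid(profiles: List[InterviewFeatures]) -> List[float]:
    """Average feature vector of the employer's top performers."""
    vectors = [p.as_vector() for p in profiles]
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def cosine_similarity(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def rank_candidates(candidates: dict, top_performers: List[InterviewFeatures]):
    """Rank candidates by resemblance to the top-performer centroid."""
    target = centroid(top_performers)
    scored = {name: cosine_similarity(f.as_vector(), target)
              for name, f in candidates.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

# Example usage with made-up numbers
top = [InterviewFeatures(0.9, 0.7, 0.8, 0.9), InterviewFeatures(0.8, 0.6, 0.9, 0.8)]
pool = {
    "candidate_a": InterviewFeatures(0.85, 0.65, 0.8, 0.85),
    "candidate_b": InterviewFeatures(0.9, 0.3, 0.7, 0.4),  # atypical delivery
}
print(rank_candidates(pool, top))
```

Even in this toy version, candidate_b - whose answers score well on content but whose delivery diverges from the top-performer 'norm' - slips down the ranking, which is precisely the concern explored below.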
The software has laudable goals. It aims to screen out inconsistency in recruitment decisions and to be less subjective than humans; after all, algorithms don't suffer from the Friday afternoon slump or the Monday morning blues. It is also designed to ignore a candidate's age, gender or race, thus reducing conscious or unconscious bias. And of course automating such an expensive and time-consuming process has the potential to make significant financial savings.
It all sounds like a "good thing". But is it?
Under the Equality Act a potential employer is prohibited from discriminating against a person in the arrangements it makes for deciding to whom to offer employment. There's little recent case law on this point, but "arrangements" have been held to include the kind of questions that are asked at interview, alongside the arrangements made for the interview itself. A video interview, and subsequent analysis by algorithm, would therefore comprise an "arrangement" for the purposes of the legislation.
And here's the rub. While it *may* be possible to design an algorithm that is race, age and gender blind (more on this later), a screening process which ranks physical responses in the form of speech patterns, body language and "micro-expressions" is likely to put some disabled candidates at a significant disadvantage. To give some obvious examples, a candidate with a facial disfigurement may struggle to display 'congruous' micro-expressions, while candidates with mobility issues may find displaying the 'appropriate' body language more challenging. Candidates with invisible disabilities may also be disadvantaged. For example, research suggests that people with depression may have different speech patterns to those without the condition. It is hard to see how such a vast multitude of potential variables, covering the full range of disabilities, could be accommodated without a hefty (and likely commercially unviable) investment.
Of course an employer should make reasonable adjustments in these circumstances. For disabled candidates that may well involve an offer of a face-to-face interview. However, that relies on a candidate being aware, for example, that their speech pattern or body language differs from the so-called 'norm' as a result of their condition. It also relies on there being a 'norm' in the first place, something which psychologists have debated for decades. The best intentions sometimes have unintended consequences.
But perhaps there are less obvious risks too. The algorithm in question ranks candidates against an employer's "top performers". Imagine a scenario where the culture within an organisation enables only white males to succeed. When you recruit in the image of those top performers you embed discrimination rather than eliminate it. You create a homogeneity of workers which actively inhibits diversity, with the potential for a corresponding negative effect on the bottom line. This is not the fault of the algorithm; after all, it has done what it has been asked to do. But it takes a human being to identify these broader patterns and act as the counterweight.
All of this reminds me of the common refrain applied to databases: "rubbish in, rubbish out". The same holds true for AI. Algorithms are designed by humans, and humans tend to have biases. To give a further example, I recently read about another piece of software which analyses job adverts and replaces words that would discourage women or minorities from applying for certain roles. According to the developer, if a business wants to attract female applicants then the words "fast-paced environment" should be replaced with "productive environment", and the word "analyst" should be avoided as conjuring up images of a white man. From a brief (and admittedly unscientific) study around the office, not one of the women I asked thought that these terms would put them off applying for a job. The software may simply have embedded the developer's own stereotypes.
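To make the idea concrete, here is a minimal sketch of that kind of advert-rewriting tool. The "fast-paced environment" pair echoes the developer's reported suggestion; the substitution of "analyst" with "specialist", the matching logic and everything else are assumptions added purely for illustration.

```python
import re

# Illustrative sketch of the job-advert rewriting idea described above.
# The replacement table is the author's own guess at how such a tool might
# be configured, not the developer's actual word list or logic.
REPLACEMENTS = {
    "fast-paced environment": "productive environment",
    # "analyst" is reportedly flagged for avoidance rather than replacement;
    # a broader term is substituted here only as a placeholder.
    "analyst": "specialist",
}

def rewrite_advert(text: str) -> str:
    """Replace flagged phrases in a job advert, case-insensitively."""
    for phrase, alternative in REPLACEMENTS.items():
        text = re.sub(re.escape(phrase), alternative, text, flags=re.IGNORECASE)
    return text

advert = "Analyst wanted to join our fast-paced environment."
print(rewrite_advert(advert))
# -> "specialist wanted to join our productive environment."
```

The article's point falls out of the sketch itself: whatever sits in the replacement table is simply the developer's own judgement about which words deter whom, and any stereotype baked into that table is applied to every advert the tool touches.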
But it's early days, I hear you cry; AI will catch up. Only it doesn't seem like that is going to happen any time soon. Research from the University of Bath has found that computers can learn to be biased even without express human input. It seems that Androids do not dream of the Equality Act.
The answer is clear. In the race to automate we must be alive to the fact that AI is not created in a bubble but is instead grounded in human bias. We should not sit back and allow technology to be done to us but must instead adopt a collaborative approach in which developers, business and HR professionals work together to shape technology for the future world of work. If we do otherwise we risk reintroducing discrimination through the back door.
First published by Employment Solicitors Magazine on 13 November 2017.