Emerce Recruitment: AI tools subject to algorithmic bias

Artificial intelligence will certainly not replace the recruiter, but it can play a role in the recruitment and selection process. Elmira van den Broek will receive her PhD in May for a study of how a large multinational, together with an AI supplier, developed an AI tool and used it to select personnel. She will share her insights next Tuesday at Emerce Recruitment in the Meervaart in Osdorp.

You will be awarded a PhD for research into the use of AI in recruitment. What was the reason for delving into this? I understand recruitment is not your area of expertise.

‘At the KIN Center for Digital Innovation, we are broadly interested in how digital technologies are used in organisations. Because the use of this technology has major consequences for work and decision-making, several of our researchers – including myself – focus on this. For example, we have researchers looking at the application of AI in policing, radiology, Olympic sports and agriculture.

‘I myself am interested in recruitment and selection processes because, on the one hand, we see great promises around AI, such as ‘objectivity’ and ‘fairness’, and on the other hand AI has a major impact on individuals’ personal opportunities, such as who is offered a job. In 2021, the European Commission therefore classified AI systems used for recruitment and selection as high-risk. There is thus a great need for research into the implications of AI systems in this domain.’

The research is based on a real case: a large multinational that collaborated with an AI supplier.

‘Unfortunately, I can’t name the organizations because of a non-disclosure agreement. This was an important condition for being able to collect sensitive data. For almost three years I was able to shadow the AI supplier and its largest customer to get a close look at both the development process and the use of the AI tool. I sat next to developers who designed predictive models, recruiters who selected applicants using the algorithm, and managers who interviewed applicants based on algorithmic insights. As various challenges and sensitive topics came up during this period, such as issues of fairness and algorithmic bias, anonymity was necessary to protect the organizations and allow them to participate in the research.’

What should organizations in general pay attention to if they want to use AI? And what are important pitfalls? Is the self-learning algorithm an improvement?

‘My dissertation points to three major lessons for organisations. First of all: the importance of collaboration between developers and domain experts (recruiters and HR professionals). Organizations that want to work with AI must be prepared for a relationship of interdependence between developers and experts: experts need developers to realise efficiencies and arrive at new data-driven insights, while developers rely on experts to select relevant data for the algorithm and to complement the predictive models. For example, a predictive model can provide insight into historical patterns, but experts are needed to make the model future-proof by offering a vision of the future. This requires intensive cooperation and involvement from both parties, starting with the development of the AI system.

‘Secondly, it is important that organizations think about the objectives and values that they want to achieve with the AI system. For example, an AI system can be introduced to make selection processes fairer, but it is important to realise that different parties in the organization (recruiters, managers, applicants) have different ideas about what is fair. These definitions cannot always be made explicit in advance or captured in fairness metrics; sometimes they only become clear when people start working with the technology and are confronted with surprising results. The adaptability of the technology and human involvement are crucial to align the technology with human goals and values.
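To illustrate what such a fairness metric looks like in practice (this sketch is not from the study itself; the data and function names are hypothetical), here is the widely used “four-fifths” selection-rate ratio. It captures one possible definition of fairness, which is exactly the point: other parties may have a different definition in mind that this single number does not reflect.

```python
# Illustrative sketch: the "four-fifths rule" selection-rate ratio,
# applied to hypothetical screening outcomes for two applicant groups.

def selection_rate(outcomes):
    """Fraction of applicants selected (1 = selected, 0 = rejected)."""
    return sum(outcomes) / len(outcomes)

def four_fifths_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.

    Values below 0.8 are conventionally treated as a signal of
    possible adverse impact against the less-selected group.
    """
    lower, higher = sorted([selection_rate(group_a), selection_rate(group_b)])
    return lower / higher

# Hypothetical data: group A is selected 50% of the time, group B 30%.
group_a = [1, 1, 0, 1, 0, 1, 0, 1, 0, 0]  # 5 of 10 selected
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]  # 3 of 10 selected

ratio = four_fifths_ratio(group_a, group_b)
print(round(ratio, 2))  # 0.6 -> below the 0.8 threshold
```

A screening tool can pass this check and still be considered unfair under another definition (for instance, equal error rates between groups), which is why Van den Broek argues that metrics alone cannot settle the question.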

‘Finally, introducing AI tools can have a major impact on the role and position of recruiters within the organization. For example, we saw that recruiters transferred important decision-making tasks to the algorithm, such as selecting applicants based on expertise and experience. Instead, recruiters took on new tasks around the algorithm, such as collecting and selecting data, translating insights for managers, and resolving conflicts around the algorithm’s decisions. This requires new skills: the expertise, training and experience that were previously important for making selection decisions no longer always prove relevant in this new AI-driven environment. It remains an important question whether recruiters will be able to strengthen their position within the organization by taking on these new activities, or whether unique expertise will be lost over time.’

And so the recruiter gets a new role?

‘In the organizations we monitored, AI insights are used during the entire selection process. However, the extent to which AI technology supports decisions differs. For example, AI predictions are used in the first selection round to filter out applicants, with recruiters selecting applicants based on their algorithmic score.

‘The role of the recruiter is very limited here: while recruiters first assessed applicants on the basis of their CV and motivation letter, the algorithm now determines what score an applicant is assigned and the recruiter acts on the basis of this score.

‘The role of recruiters now focuses more on supporting the design of the algorithm and supplying the right data. In later rounds of the selection process, AI predictions are used as a source of additional information. For example, managers get applicants’ algorithmic scores to prepare for interviews. Recruiters play an important role here in translating these scores into interview questions and action points for managers. They now guide managers more strongly on the basis of algorithmic insights.’

Do you know of examples at home or abroad where AI has already proven its worth in recruitment?

‘It remains a complex issue. There are big promises about the potential of AI technology, but empirical scientific research shows mixed results. For example, our own research shows that AI can help to create new insights and expose human bias, but also that AI tools are themselves subject to algorithmic bias and sometimes select very similar applicants based on personality, which does not benefit diversity of thought. Especially in a human domain such as HR, where the environment is dynamic, with changes in organizational culture, leadership and the qualities we consider important in applicants, AI tools have no chance of success without human adjustment.’

The full programme of Emerce Recruitment can be found here. Tickets can be bought here.

Source: Nieuws – Emerce by www.emerce.nl.
