BIAS: Mitigating Diversity Biases of AI in the Labor Market
The EU Horizon project brings together an interdisciplinary consortium of nine partner institutions to develop a deep understanding of the use of AI in the employment sector and to detect and mitigate unfairness in AI-driven recruitment tools.
- Lead department Technik und Informatik
- Institute Institute for Data Applications and Security (IDAS)
- Research unit IDAS / Applied Machine Intelligence
- Funding organisation European Union
- Duration (planned) 01.11.2022 - 31.10.2026
- Head of project Prof. Dr. Mascha Kurpicz-Briki
- Project management Prof. Dr. Mascha Kurpicz-Briki
- Project staff Dr. Alexandre Riemann Puttick
- Partner institutions Norges Teknisk-Naturvitenskapelige Universitet; Globaz, S.A. (LOBA); SMART VENICE SRL; FARPLAS OTOMOTIV ANONIM SIRKETI
- Keywords bias, human resources, artificial intelligence, augmented intelligence, natural language processing
Artificial Intelligence (AI) is increasingly used in the employment sector. A recent Sage study found that 24% of companies use AI for recruitment purposes. This often involves Natural Language Processing (NLP)-based AI models that analyze text created by a job candidate. High-profile cases have shown that such systems can reproduce social prejudices and unfairly discriminate against underrepresented minorities. This form of algorithmic bias is exacerbated by the fact that AI decision-making processes usually occur in a black box, opaque even to the engineers who designed them. The result is systems capable of rendering unjust and unjustified decisions with low accountability, decisions that adversely affected human stakeholders often cannot appeal.

In practice, machine learning (ML) and NLP-based applications typically consist of off-the-shelf large language models (LLMs) such as BERT or the GPT models, which are then fine-tuned on a task-specific dataset; for example, an archive of job applications labeled according to whether the corresponding candidate was successful. Both the general language models employed and the task-specific training data are potential sources of bias.
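How task-specific training data can inject bias is easy to illustrate on a toy scale. The sketch below (hypothetical data, plain logistic regression in NumPy rather than an actual LLM) trains a bag-of-words classifier on a miniature "application archive" in which hiring outcomes correlate with gendered tokens; the learned weights show that the model latches onto the sensitive tokens rather than the skills.

```python
import numpy as np

# Hypothetical toy vocabulary; "she"/"he" act as proxy sensitive tokens.
vocab = ["python", "sql", "leadership", "she", "he"]

# Bag-of-words counts per application; labels mimic a biased archive in
# which male-coded applications were accepted more often.
X = np.array([
    [1, 1, 0, 0, 1],  # skills + "he"       -> hired
    [1, 0, 1, 0, 1],  # skills + "he"       -> hired
    [1, 1, 0, 1, 0],  # same skills + "she" -> rejected
    [1, 0, 1, 1, 0],  # same skills + "she" -> rejected
], dtype=float)
y = np.array([1.0, 1.0, 0.0, 0.0])

# Plain logistic regression trained by gradient descent.
w = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted hire probability
    w -= 0.5 * (X.T @ (p - y)) / len(y)       # gradient step

weights = dict(zip(vocab, w))
# The gendered tokens, not the skill tokens, carry the predictive signal:
# weights["he"] ends up positive, weights["she"] negative.
```

In a fine-tuned LLM the same effect occurs, only distributed across millions of parameters instead of five interpretable weights, which is why dedicated detection methods are needed.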
We aim to provide a scheme for bias detection and mitigation in AI-based recruitment, with an emphasis on building transparent, trustworthy systems that support human decision-making rather than replacing it (augmented intelligence). Existing methods provide a framework for investigating bias encoded within LLMs. This requires expert knowledge of language-, culture- and domain-specific prejudices in order to compile lists of sensitive words and concepts that can be used to identify encoded bias. These lists will be compiled by partners in our interdisciplinary consortium. Together with partner institutions, we will investigate methods to encourage fairer and more transparent AI recruitment tools. We seek to build a foundation for tools that detect and mitigate both encoded bias in the underlying language models and bias-sensitive aspects of input data, aiming to ensure that both AI tools and employers use relevant information for hiring, rather than basing decisions on existing bias and irrelevant sensitive features (e.g., race, gender, or sexual orientation).
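One established way such word lists are used is a WEAT-style association test (Caliskan et al.'s Word Embedding Association Test), which measures whether two sets of target words sit closer in embedding space to one attribute set than to another. The sketch below uses tiny hand-made 2-d vectors as placeholders; a real analysis would load pretrained embeddings and the expert-compiled word lists mentioned above.

```python
import numpy as np

def cos(u, v):
    """Cosine similarity between two word vectors."""
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def weat_effect_size(X, Y, A, B):
    """WEAT-style effect size: differential association of target word
    sets X, Y with attribute word sets A, B (all lists of vectors)."""
    s = lambda w: np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])
    sx = [s(x) for x in X]
    sy = [s(y) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)

# Hypothetical 2-d embeddings: dimension 0 loosely encodes a "career"
# sense, dimension 1 a "family" sense.
male_terms   = [np.array([0.9, 0.1]), np.array([0.8, 0.2])]  # targets X
female_terms = [np.array([0.2, 0.9]), np.array([0.1, 0.8])]  # targets Y
career_words = [np.array([1.0, 0.0]), np.array([0.9, 0.1])]  # attributes A
family_words = [np.array([0.0, 1.0]), np.array([0.1, 0.9])]  # attributes B

d = weat_effect_size(male_terms, female_terms, career_words, family_words)
# A positive effect size indicates the male terms lean toward the career
# attributes relative to the female terms.
```

Because the test only needs word lists and access to embeddings, it extends naturally to the language-, culture- and domain-specific lists the consortium will compile.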
Using a transdisciplinary approach, the BIAS project will develop a deep understanding of the use of AI in the HR sector and the ripple effects such tools have on all stakeholders. Our contribution will focus on laying the research foundation for bias detection and mitigation following the EU guidelines for Trustworthy AI (ALTAI) and the principle of augmented intelligence. Beyond this project, our methods will be further developed into new deployment-ready recruitment tools. In addition, the work of other consortium members will culminate in policy proposals and training programs to promote the fair use of AI in Human Resources Management (HRM).