BIAS: Mitigating Diversity Biases of AI in the Labor Market

This EU Horizon project brings together an interdisciplinary consortium of nine partner institutions to develop a deep understanding of the use of AI in the employment sector and to detect and mitigate unfairness in AI-driven recruitment tools.

Factsheet

  • Lead school: School of Engineering and Computer Science
  • Institute: Institute for Data Applications and Security (IDAS)
  • Research unit: IDAS / Applied Machine Intelligence
  • Funding organisation: European Union
  • Duration (planned): 01.11.2022 - 31.10.2026
  • Project management: Prof. Dr. Mascha Kurpicz-Briki
  • Head of project: Prof. Dr. Mascha Kurpicz-Briki
  • Project staff: Dr. Alexandre Riemann Puttick
  • Partners: Staatssekretariat für Bildung, Forschung und Innovation SBFI
    European Commission
    Norges Teknisk-Naturvitenskapelige Universitet
    Háskóli Íslands
    Globaz, S.A. (LOBA)
    CROWDHELIX LIMITED
    SMART VENICE SRL
    UNIVERSITEIT LEIDEN
    DIGIOTOUCH OU
    FARPLAS OTOMOTIV ANONIM SIRKETI
  • Keywords: Bias, human resources, artificial intelligence, augmented intelligence, natural language processing

Situation

Artificial Intelligence (AI) is increasingly used in the employment sector. A recent Sage study found that 24% of companies use AI for recruitment purposes. This often involves AI models based on Natural Language Processing (NLP) that analyze text created by a job candidate. High-profile cases have shown that such systems can reproduce social prejudices and unfairly discriminate against underrepresented minorities. This form of algorithmic bias is exacerbated by the fact that AI decision-making processes usually occur in a black box, opaque even to the engineers who designed them. The result is systems capable of rendering unjust and unjustified decisions with low accountability, decisions that adversely affected human stakeholders often cannot appeal. In practice, machine learning (ML) and NLP-based applications typically consist of off-the-shelf large language models (LLMs) such as BERT or the GPT family, which are then fine-tuned on a task-specific dataset, for example an archive of job applications labeled according to whether the corresponding candidate was successful. Both the general language models employed and the task-specific training data are potential sources of bias.
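
To make this fine-tuning setup concrete, the following sketch shows how an off-the-shelf model such as BERT might be adapted to predict hiring outcomes with the Hugging Face transformers library. It is a minimal illustration, not the project's pipeline: the texts, labels, and training settings are hypothetical placeholders, and a real system would differ in scale and preprocessing.

```python
# Minimal fine-tuning sketch (hypothetical data): an off-the-shelf LLM is
# adapted to predict hiring outcomes from application text. Any bias in the
# pretrained model or in the outcome labels is inherited by the classifier.
import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

class ApplicationDataset(Dataset):
    """Pairs tokenized application texts with hiring-outcome labels."""
    def __init__(self, encodings, labels):
        self.encodings = encodings
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # 1 = successful, 0 = unsuccessful

# Hypothetical archive of job applications labeled by outcome.
texts = ["Experienced engineer with a strong background in ...",
         "Recent graduate looking for a first position in ..."]
labels = [1, 0]

train_dataset = ApplicationDataset(
    tokenizer(texts, truncation=True, padding=True), labels)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bias-demo", num_train_epochs=1),
    train_dataset=train_dataset,
)
trainer.train()
```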

Course of action

We aim to provide a scheme for bias detection and mitigation in AI-based recruitment, with an emphasis on building transparent, trustworthy systems that support human decision-making rather than replace it (augmented intelligence). Existing methods provide a framework for investigating the bias encoded within LLMs. They require expert knowledge of language-, culture- and domain-specific prejudices to compile lists of sensitive words and concepts that can be used to identify encoded bias; partners in our interdisciplinary consortium will compile these lists. Together with our partner institutions, we will investigate methods that encourage fairer and more transparent AI recruitment tools. We seek to lay the foundation for tools that detect and mitigate both the bias encoded in underlying language models and the bias-sensitive aspects of input data, aiming to ensure that AI tools and employers alike base hiring decisions on relevant information rather than on existing prejudice or irrelevant sensitive features (e.g., race, gender, or sexual orientation).
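
As an illustration of how such word lists can surface encoded bias, the sketch below implements a simple association test in the spirit of the Word Embedding Association Test (WEAT). Everything here is a hypothetical stand-in: the `embed` callable represents whatever embedding the audited model provides, and the example word lists take the place of the expert-curated lists described above.

```python
# Word-list association sketch (in the spirit of WEAT). The word lists and
# the embed() callable are hypothetical stand-ins for expert-curated lists
# and the embeddings of the model under audit.
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word, attrs_a, attrs_b, embed):
    """Mean similarity of `word` to attribute list A minus list B."""
    vec = embed(word)
    return (np.mean([cosine(vec, embed(a)) for a in attrs_a])
            - np.mean([cosine(vec, embed(b)) for b in attrs_b]))

def bias_score(targets_x, targets_y, attrs_a, attrs_b, embed):
    """Differential association of two target word lists with two
    attribute lists; a value far from zero hints at encoded bias."""
    return (sum(association(w, attrs_a, attrs_b, embed) for w in targets_x)
            - sum(association(w, attrs_a, attrs_b, embed) for w in targets_y))

# Illustrative usage with placeholder word lists:
male_terms = ["he", "man", "his"]
female_terms = ["she", "woman", "her"]
career_terms = ["executive", "salary", "career"]
family_terms = ["home", "children", "family"]
# score = bias_score(male_terms, female_terms,
#                    career_terms, family_terms, embed)
```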

Looking ahead

Using a transdisciplinary approach, the BIAS project will develop a deep understanding of the use of AI in the HR sector and the ripple effects such tools have on all stakeholders. Our contribution will focus on laying the research foundation for bias detection and mitigation following the EU Assessment List for Trustworthy Artificial Intelligence (ALTAI) and the principle of augmented intelligence. Beyond this project, our methods will be further developed into new deployment-ready recruitment tools. In addition, the work of other consortium members will culminate in policy proposals and training programs that promote the fair use of AI in Human Resources Management (HRM). This work has received funding from the Swiss State Secretariat for Education, Research and Innovation (SERI).

This project contributes to the following SDGs

  • 3: Good health and well-being
  • 5: Gender equality
  • 8: Decent work and economic growth
  • 10: Reduced inequalities
  • 16: Peace, justice and strong institutions