New Horizon Europe project explores discrimination by AI systems

02.11.2022 Artificial intelligence can make many jobs easier, including in human resources. However, AI systems are often not free from biases; instead, they reproduce them. The reasons and possible solutions will be explored in a recently approved Horizon Europe project in which the Applied Machine Intelligence research group of BFH is involved.

When it comes to recruitment and promotion processes, companies are increasingly looking to artificial intelligence (AI) for assistance, as it simplifies and speeds up many tasks. However, AI systems, too, can discriminate against people by reproducing prejudices. How this comes about and what solutions are available in the field of HR management will be examined in the recently approved Horizon Europe project “BIAS: Mitigating Diversity Biases of AI in the Labor Market”.

Training courses on developing and applying AI systems

In the interdisciplinary project, the researchers will begin by investigating from a technical perspective how AI systems can acquire biases and what concrete solutions exist to prevent this. This will be followed by extensive ethnographic field studies on the experiences of staff, HR managers and technology developers, analysing the problem from a social science perspective. Ultimately, the findings will be used to develop training courses and guidelines that support both HR managers and technology developers in developing and implementing AI systems, ensuring that they function with a minimum of bias.

BFH assumes technical lead

The project team, led by the Norwegian University of Science and Technology NTNU, comprises a Europe-wide consortium that also includes Bern University of Applied Sciences BFH. Prof. Dr Mascha Kurpicz-Briki, deputy head of the Applied Machine Intelligence research group at the Institute for Data Applications and Security IDAS, is the project leader on behalf of BFH and, together with her team, takes the technical lead in the project.

In addition to managing the technical work package, the researchers are specifically analysing how discrimination can be made measurable at various points in text-based AI and investigating how such discrimination can be prevented. This involves not only texts provided to the AI as training data, but also the language models that are often used in such applications.
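To give a flavour of what making discrimination measurable in text-based AI can mean, the sketch below probes word embeddings for associations between occupation words and gendered words, in the spirit of association tests used in bias research. The toy 3-dimensional vectors are purely illustrative stand-ins; in a real study the vectors would come from an actual language model, and the word lists would be chosen systematically.

```python
# Minimal sketch of an embedding-association bias probe.
# The toy vectors below are hypothetical, not taken from a real model.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings; a real analysis would load them from a language model.
emb = {
    "engineer": np.array([0.9, 0.1, 0.2]),
    "nurse":    np.array([0.2, 0.9, 0.1]),
    "he":       np.array([0.8, 0.2, 0.3]),
    "she":      np.array([0.3, 0.8, 0.2]),
}

def association(word, attr_a, attr_b):
    """Difference in mean similarity of `word` to two attribute word sets.

    A positive score means the word sits closer to the first set.
    """
    sim_a = np.mean([cosine(emb[word], emb[a]) for a in attr_a])
    sim_b = np.mean([cosine(emb[word], emb[b]) for b in attr_b])
    return sim_a - sim_b

for w in ("engineer", "nurse"):
    print(w, round(association(w, ["he"], ["she"]), 3))
```

With these toy vectors, "engineer" scores positive (closer to "he") and "nurse" scores negative (closer to "she"), illustrating how an association score can expose a stereotyped pattern that a mitigation step would then aim to reduce.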

