BFH-TI paper on bias in language models at the SwissText and KONVENS 2020 conference
27.07.2020 It is often reported that algorithms trained on one-sided data make discriminatory decisions. Research has shown that such bias can be measured in pretrained language models, particularly English ones.
Prof. Dr Mascha Kurpicz-Briki of the ICTM has examined whether German and French language models also exhibit gender or origin bias. She presented the results in June 2020 at the SwissText and KONVENS 2020 conference, showing that social stereotypes are present in these language models as well.
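The article does not describe the measurement method in detail. As an illustration only, one widely used approach to quantifying such bias in word embeddings is a WEAT-style association test: target words (e.g. career vs. family terms) are compared against attribute words (e.g. male vs. female terms) via cosine similarity. The sketch below uses toy hand-made vectors, not vectors from a real model, and is not necessarily the method used in the paper.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    """Mean similarity of word w to attribute set A minus attribute set B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def bias_effect(X, Y, A, B):
    """WEAT-style statistic: how much more strongly target set X associates
    with attributes A (vs. B) than target set Y does."""
    return np.mean([association(x, A, B) for x in X]) - np.mean([association(y, A, B) for y in Y])

# Toy 2-d "embeddings" (illustrative only, not taken from any real model):
A = [np.array([1.0, 0.1])]   # male attribute words, e.g. "he"
B = [np.array([0.1, 1.0])]   # female attribute words, e.g. "she"
X = [np.array([0.9, 0.2])]   # career-related target words
Y = [np.array([0.2, 0.9])]   # family-related target words

# A positive value indicates a stereotype-consistent association.
print(bias_effect(X, Y, A, B))
```

In practice the same computation would be run over vectors loaded from a pretrained embedding model, with statistical significance assessed by permuting the target sets.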