AI in higher education: an interplay between ethics, data protection and experimentation
23.09.2025
In September, BFH lecturers met in Bern for Teaching Day. With guest speaker and AI expert Jeppe Stricker, they discussed the future of teaching in the age of artificial intelligence.
Key points at a glance
- Jeppe Stricker advises educational institutions on how to deal with artificial intelligence.
- On Teaching Day, he raised the question of the future of teaching.
- His conclusion: artificial intelligence is challenging traditional exams.
- Education providers could approach AI in two ways: normatively or through deliberate experimentation.
The expert in artificial intelligence in education and keynote speaker at Teaching Day 2025 began his presentation with what he considers the all-important question: “What should the classroom of the future look like?” He showed the audience a picture of a historic classroom in Denmark, with pupils lined up at small desks and the teacher standing at the front, dominating the room.
About Teaching Day
Teaching Day (formerly ‘Rendez-vous Lehre’ and ‘E-Learning Day’) is the annual meeting of BFH lecturers, giving them the opportunity to exchange views on contemporary teaching trends and best practices.
Organised by the Office of the Vice-President Teaching, the event is held alternately in Bern and Biel/Bienne. This year, it focussed on the strategic thematic field ‘Humane Digital Transformation’.
His question was meant to go far beyond the confines of the classroom, its blackboards, projectors and the arrangement of desks. Despite the stark contrast with today’s learning environments, many educational institutions still adhere to the principles of that traditional classroom: the central role of the teaching staff, static and often printed learning materials and, most notably, written examinations and graduation theses remain commonplace.
AI and exams: a wicked problem
Generative artificial intelligence (genAI) is a tool that students can use to write their theses or exam answers. However, it challenges the validity of such competency assessments. After all, what does an AI-generated answer say about a student’s abilities and skills?
“Conventional competency assessments form a stark contrast to modern tools such as ChatGPT”, summarised Jeppe Stricker. Educational institutions are falling behind in this area. He suggested thinking about portfolio assignments and process-based ways to assess students’ competencies and skills.
At the same time, he showed understanding for higher education institutions’ reluctance to abandon their tried-and-tested examination formats. Adjusting one aspect of higher education inevitably requires adjustments elsewhere: changes to examination formats, for example, have knock-on effects on teaching methods and the overall structure of curricula.
AI has some room for improvement
Furthermore, many aspects of AI still need improvement, at least for now, Jeppe Stricker stressed. He showed examples of AI-generated videos and images that commercial AI providers use to demonstrate how well their tools work. The fact that these glossy outputs were post-processed with considerable (human) effort puts the providers in a bad light and calls the actual usefulness of the advertised tools into question.
Stricker also raised concerns about how AI providers handle data: data protection laws and internal directives are often treated carelessly. He further criticised the ecological impact of AI tools, which providers frequently counter with greenwashing.
Following on from the work of Fengchun Miao, AI and education specialist at UNESCO, he added: “How can we ensure the ethical use of a technology that is not inherently ethical?” Stricker had no clear answer for the lecturers present. However, it seems inevitable that AI will shape – and is already shaping – education.
Concepts, testing and small steps
The fundamental question, therefore, is not whether to use AI in teaching, but the one Stricker posed at the start of his presentation: what should teaching look like in the future?
On the one hand, directives and AI policies are undoubtedly a good first step towards the future of teaching. However, it is important to move from ‘talking about AI’ to ‘doing with AI’: only through practical application can often vague AI policies be made concrete and put into effect.
“Many institutions find it challenging to adopt AI”, said Stricker. In addition to this conceptual work, it is therefore necessary to create protected spaces in which AI can be tested. This does not have to involve an institution’s own AI tools right from the start. Instead, teaching staff should be allowed and encouraged to take their first small steps in shielded, safe environments, empowering themselves and their teaching in the age of AI.