Generative AI for the Sound Arts and Music Performance
Investigate the potential of Generative AI through an interdisciplinary project that explores the application of AI for Sound Art creation.
- Lead school School of Engineering and Computer Science
- Additional schools Bern Academy of the Arts
- Institute Institute for Data Applications and Security (IDAS)
- Research unit IDAS / Applied Machine Intelligence
- Funding organisation BFH
- Duration (planned) 01.10.2022 - 31.01.2023
- Project management Dr. Souhir Ben Souissi
- Head of project Dr. Souhir Ben Souissi, Prof. Dr. Teresa Carrasco
- Keywords Generative AI, Novel research, Sound Art, Augmented Intelligence
Generative AI is taking the academic and industrial world by storm. Deep Learning architectures such as Transformers can sustain textual conversations with humans, produce realistic and surrealistic images from textual descriptions, and generate plausible chemical compositions for drug discovery; these are merely the first few prominent examples. Here at BFH, our research and educational efforts around AI and Deep Learning have thus far focused on Computer Vision and NLP (Natural Language Processing) for classification, segmentation, regression, prediction and decision making. With this project we aim to expand our portfolio to content generation with an intriguing and interdisciplinary case study: the use of Generative AI for the Sound Arts.
Course of action
The project will run as a collaboration between the Engineering and Art departments of BFH. From a computational perspective, we will explore:
- Generation of music lyrics through transfer learning and LLMs (Large Language Models), in both a cooperative mode (human/machine) and a semi-independent mode (artificial lyrics produced from an initial seed).
- Generation of MIDI music scores for different music genres using RNNs (Recurrent Neural Networks) and Transformers, exploring both offline generation (with longer inference times) and online generation (with real-time segments produced during a performance).
- Offline and real-time generation of contextual visualizations (images and video sequences) using transfer learning and diffusion models, to accompany live music performances.
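The seeded, semi-independent lyric mode can be illustrated with a toy sketch. A simple bigram model stands in for a fine-tuned LLM here (the corpus, function names, and sampling scheme are illustrative assumptions, not the project's actual pipeline): the model grows a line of text word by word from an initial seed, which is the same autoregressive pattern an LLM follows at a much larger scale.

```python
import random
from collections import defaultdict

# Tiny stand-in corpus; the actual project would fine-tune an LLM on lyric data.
corpus = (
    "the night is young and the night is ours "
    "we sing the song of the open sky "
    "the sky is wide and the song is free"
).split()

# Bigram table: each word maps to the list of words observed after it.
bigrams = defaultdict(list)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1].append(w2)

def generate_lyrics(seed, length=12, rng=None):
    """Grow a line of 'lyrics' autoregressively from an initial seed word."""
    rng = rng or random.Random(0)
    words = [seed]
    for _ in range(length - 1):
        candidates = bigrams.get(words[-1])
        if not candidates:  # dead end: no continuation observed
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

print(generate_lyrics("the"))
```

In the cooperative mode described above, a human would accept, edit, or reject each continuation instead of letting the sampler run unattended.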
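The online MIDI generation idea can be sketched as an autoregressive recurrent network sampling one pitch at a time. This is a minimal vanilla-RNN sketch with random (untrained) weights, purely to show the generation loop; the dimensions, weight initialisation, and sampling temperature are assumptions, and a real system would learn the weights from scores of a given genre and likely use an LSTM or Transformer.

```python
import numpy as np

rng = np.random.default_rng(42)

# Vocabulary: the 128 MIDI pitch numbers. Weights are random here;
# in the project they would be trained on genre-specific scores.
VOCAB, HIDDEN = 128, 64
Wxh = rng.normal(0, 0.1, (HIDDEN, VOCAB))   # input -> hidden
Whh = rng.normal(0, 0.1, (HIDDEN, HIDDEN))  # hidden -> hidden (recurrence)
Why = rng.normal(0, 0.1, (VOCAB, HIDDEN))   # hidden -> output logits

def sample_notes(seed_note, steps=16):
    """Autoregressively sample a sequence of MIDI note numbers."""
    h = np.zeros(HIDDEN)
    note, notes = seed_note, [seed_note]
    for _ in range(steps - 1):
        x = np.zeros(VOCAB)
        x[note] = 1.0                        # one-hot encode the last note
        h = np.tanh(Wxh @ x + Whh @ h)       # recurrent state update
        logits = Why @ h
        p = np.exp(logits - logits.max())    # softmax over pitches
        p /= p.sum()
        note = int(rng.choice(VOCAB, p=p))   # sample the next pitch
        notes.append(note)
    return notes

print(sample_notes(60))  # start the melody from middle C
```

Because each step depends only on the previous note and a small hidden state, this loop is cheap enough per step to suggest how real-time segments could be produced during a performance.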
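For the diffusion-based visualizations, the core mechanism is iterative denoising. The sketch below implements one DDPM-style reverse step over a tiny array, with a zero-returning placeholder where a trained noise-prediction network (typically a U-Net) would go; the schedule values and function names are assumptions for illustration, not the project's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear noise schedule, as in standard DDPM formulations.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def predict_noise(x_t, t):
    """Placeholder for a trained noise-prediction network (e.g. a U-Net)."""
    return np.zeros_like(x_t)

def reverse_step(x_t, t):
    """One reverse-diffusion step: estimate x_{t-1} from the noisier x_t."""
    eps = predict_noise(x_t, t)
    mean = (x_t - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
    if t > 0:  # add fresh noise except at the final step
        return mean + np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)
    return mean

x = rng.standard_normal((8, 8))  # a tiny stand-in "image"
for t in reversed(range(T)):
    x = reverse_step(x, t)
print(x.shape)
```

Offline generation would run the full chain at high resolution, while the real-time variant mentioned above would rely on fewer sampling steps or distilled models to keep pace with a live performance.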