Julian Parker: Recent advances in generative modelling of musical audio

When: Wednesday 5th June @ 5:15 PM

Where: The Atrium (G.10), Alison House, 12 Nicholson Sq, University of Edinburgh

Title: Recent advances in generative modelling of musical audio

Speaker: Julian Parker (Stability AI)

Abstract

Designing systems that can autonomously generate music is an activity that dates back to classical antiquity, and it has received much attention over the years in both the artistic and scientific communities. However, such systems were usually limited to a narrow aesthetic scope, either by necessity or by design (or both). The recent surge of advancements in generative AI has opened the door to systems that can generate musical audio with a much wider gamut of aesthetics, encompassing a large part of the range of human musical practice. In this talk, I give an overview of the techniques used to construct such systems, examine their current strengths and weaknesses, and share my opinion on the future of this technology – particularly in its relationship to human creativity.

Speaker Bio

Julian Parker, born in the UK, holds a B.A. (Hons.) in natural sciences from the University of Cambridge, UK (2005), an M.Sc. in acoustics & music technology from the University of Edinburgh, UK (2008), and a D.Sc. (Tech.) in audio signal processing and acoustics from Aalto University, Finland (2013). His doctoral work focused on computational modelling of dispersive physical systems in the audio frequency range. He has held leadership positions in industrial research at companies such as Native Instruments, TikTok and Stability AI, focusing on the processing and generation of musical sound using traditional signal processing techniques, machine learning and AI. His current research interests are in generative modelling of musical audio, and in the intersection between signal processing and neural network structures.