Eero-Pekka Damskägg: Real-Time Neural Audio Models

When: 30th April 2026 at 5.15pm.

Where: Usha Kasera Lecture Theatre, University of Edinburgh, South Bridge, Edinburgh EH8 9YL

Title: Real-Time Neural Audio Models

Speaker: Dr. Eero-Pekka Damskägg

Abstract: Neural networks for audio processing are usually trained offline with tools like PyTorch or JAX, but practical applications often require inference to run in real time with low latency. In this talk, we will look at what changes when moving to real-time block processing, and why latency constraints need to be taken into account already at the model-design stage. We introduce the idea of making models explicitly stateful, including hidden states for RNNs, ring buffers for convolutional and transformer layers, and filter states, and explore how these choices affect latency. We will demonstrate the concepts by building a U-Net-style model designed for real-time use.
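As a minimal sketch of one of the ideas mentioned above, a causal convolution can be made explicitly stateful by carrying a small buffer of past input samples between calls, so that processing audio block by block gives the same output as processing the whole signal at once. The class and names below are illustrative assumptions for this announcement, not the speaker's implementation:

```python
import numpy as np

class StreamingCausalConv1d:
    """Causal 1-D convolution that keeps its own input history,
    so it can be called block by block in a real-time audio callback.
    (Illustrative sketch only; the API is an assumption.)"""

    def __init__(self, kernel: np.ndarray):
        self.kernel = kernel                        # shape: (K,)
        # State buffer holds the last K-1 input samples between blocks.
        self.state = np.zeros(len(kernel) - 1)

    def process_block(self, block: np.ndarray) -> np.ndarray:
        # Prepend the saved history, then take only "valid" outputs,
        # which yields exactly len(block) output samples.
        x = np.concatenate([self.state, block])
        y = np.convolve(x, self.kernel, mode="valid")
        # Save the tail of the input as state for the next block.
        self.state = x[-(len(self.kernel) - 1):]
        return y
```

Because the state carries the receptive-field history across block boundaries, streaming and offline processing agree sample for sample, and the layer adds no algorithmic latency beyond the block size itself.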

Bio: Eero-Pekka Damskägg works in audio machine learning. He did his doctoral research at Aalto University, focusing on modelling guitar amplifiers with neural networks. He later joined Neural DSP as an early team member, where he worked on Neural Capture, helped develop a robotic system for measuring and training guitar amp models, and contributed to audio effects such as pitch shifters and guitar synthesizers.

He now works as an independent consultant, collaborating with audio companies on machine learning and signal processing.