Augusto Sarti: Plenacoustic Capturing and Rendering – A New Paradigm for Immersive Audio

When: Thursday 29th November 2018 @ 5:10 PM

Where: Atrium, Alison House, 12 Nicolson Square

Title: Plenacoustic Capturing and Rendering – A New Paradigm for Immersive Audio

Speaker: Augusto Sarti (Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB), Politecnico di Milano, Italy)

Abstract

Acoustic signal processing traditionally relies on divide-and-conquer strategies (e.g. Plane-Wave Decomposition) applied to descriptors such as the acoustic pressure, which are scalar functions of space and time, hence the name “Space-Time Audio Processing”. Many applications have been developed that use arrays or spatial distributions of microphones and/or loudspeakers, or clusters thereof. Typical applications include the localization/tracking, characterization and extraction of acoustic sources, as well as the estimation, processing and rendering of acoustic fields. Such applications, however, are often hindered by the inherent limits of geometrical acoustics, by the far-field hypothesis of Fourier acoustics, and by the adverse acoustics of everyday environments.
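As a rough illustration of the classical divide-and-conquer approach mentioned above, the sketch below decomposes a narrowband field sampled by a uniform linear array into plane waves via a spatial DFT. All parameters (array geometry, frequency, arrival angle) are illustrative, not taken from the talk.

```python
import numpy as np

# Minimal plane-wave decomposition sketch (hypothetical parameters):
# a single plane wave impinges on a uniform linear array; a spatial
# DFT of one pressure snapshot reveals its direction of arrival.

c = 343.0                        # speed of sound [m/s]
f = 1000.0                       # narrowband frequency [Hz]
k = 2 * np.pi * f / c            # wavenumber [rad/m]
M = 64                           # number of microphones
d = 0.04                         # mic spacing [m], below lambda/2 at 1 kHz

theta = np.deg2rad(30.0)         # true arrival angle
x = d * np.arange(M)             # microphone positions
p = np.exp(1j * k * np.sin(theta) * x)   # pressure snapshot of the plane wave

# Spatial DFT: each spatial frequency kx corresponds to a plane-wave
# direction through sin(theta) = kx / k.
N = 1024
P = np.fft.fftshift(np.fft.fft(p, N))
kx = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d))

# Keep only propagating components (|kx| <= k) and locate the peak.
prop = np.abs(kx) <= k
est = np.rad2deg(np.arcsin(kx[prop][np.argmax(np.abs(P[prop]))] / k))
print(est)   # close to 30 degrees
```

Real plenacoustic processing deals with broadband fields, reverberation and near-field sources, which is exactly where this far-field Fourier picture starts to break down, as the abstract notes.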

In this talk I will discuss viable alternatives to traditional approaches to space-time processing of acoustic fields, based on alternative formulations and representations of soundfields. I will first discuss how the geometric and acoustic properties of the environment’s reflectors can be estimated and even used to boost space-time audio processing algorithms. I will then introduce a soundfield representation whose descriptors are defined in the so-called “ray space”, and show how this can lead to applications such as interactive soundfield modeling, processing and rendering. I will finally discuss how to rethink our signal decomposition strategy by introducing a novel wave-field decomposition methodology based on Gabor frames, which is better suited to local (in the space-time domain) representations. Based on this new framework for computational acoustics, I will introduce the ray-space transform and show how it can be used to efficiently and effectively approach a far wider range of problems than source separation and extraction, pushing the boundaries of environment inference, object-based manipulation of acoustic wavefields, interactive acoustics, and more.
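The Gabor-frame idea underlying the decomposition mentioned above can be sketched in one dimension: a windowed-Fourier (Gabor) frame yields a local time-frequency representation with stable reconstruction from the frame coefficients. The sketch below uses the STFT as the discrete Gabor analysis operator; the signal and window choices are illustrative and do not reproduce the speaker's ray-space method.

```python
import numpy as np
from scipy.signal import stft, istft

# Hypothetical 1-D Gabor analysis/synthesis sketch: a Hann window with
# 50% overlap satisfies the overlap-add condition, so the STFT acts as
# a (numerically) tight Gabor frame with exact reconstruction.

fs = 8000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 440 * t)        # illustrative 440 Hz tone

# Analysis: Gabor coefficients = STFT samples on a time-frequency lattice
f_bins, frames, coeffs = stft(sig, fs=fs, window='hann',
                              nperseg=256, noverlap=128)

# Synthesis: the dual window (inverse STFT) recovers the signal
_, rec = istft(coeffs, fs=fs, window='hann', nperseg=256, noverlap=128)

err = np.max(np.abs(sig - rec[:len(sig)]))
print(err)   # reconstruction error near machine precision
```

The locality of each coefficient in both time and frequency is what makes such frames attractive for local (space-time) representations of wavefields, in contrast to global plane-wave expansions.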

Speaker Bio

Augusto Sarti received his Ph.D. in information engineering (1993) from the University of Padua, Italy, through a joint program with UC Berkeley, focusing on nonlinear system theory. He has been with the Politecnico di Milano since 1993, where he is now a full professor, and held a double affiliation with UC Davis from 2013 to 2017, also as a full professor. His research interests are in digital signal processing, with a focus on space-time audio processing; sound analysis, synthesis and processing; image analysis; and 3D vision. His main contributions are in sound synthesis (nonlinear wave digital filters); space-time audio processing (plenacoustic processing, visibility-based interactive acoustic modeling, geometry-based acoustic scene reconstruction, soundfield rendering, etc.); nonlinear system theory (Volterra system inversion, Wave Digital circuit simulation); computer vision; and image analysis and processing. He is a Senior Member of the IEEE, Senior Area Editor of the IEEE Signal Processing Letters, and Associate Editor of the IEEE Transactions on Audio, Speech, and Language Processing. He is an elected member of the IEEE Audio and Acoustic Signal Processing Technical Committee, a founding member of the European Acoustics Association (EAA) Technical Committee on Audio Signal Processing, Chairman of the EURASIP Special Area Team on Acoustic, Sound and Music Signal Processing, and an elected member of the Board of Directors of EURASIP.