Category Archives: Room Acoustics

Dario D’Orazio: Measuring Room Impulse Responses in Noisy Environments

When: Wednesday 10th April 2019 @ 5:10 PM

Where: The Atrium (G.10), Alison House, 12 Nicolson Sq, University of Edinburgh

Title: Measuring Room Impulse Responses in Noisy Environments

Speaker: Dr Dario D’Orazio (Acoustics Research Group, University of Bologna, Italy)


Measurement techniques allow for the identification of acoustic impulse responses (IRs), ideally free of noise thanks to the statistical properties of the excitation signals (MLS, ESS, etc.). In real-world cases, however, the measured IRs may be affected by background noise (hum, impulsive events, speech, and so on). This lecture will present some practical cases, pointing out strategies to enhance the measurement of IRs in noisy environments. These techniques concern both the hardware setup of the measurement chain and the post-processing extraction of room acoustic criteria. Case studies will be presented briefly, showing how to improve measurements according to ISO 3382-1 (large halls), ISO 3382-2 (classrooms), and ISO 3382-3 (open-plan offices).
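The exponential sine sweep (ESS) technique mentioned above can be sketched in a few lines. The following is a minimal illustration of Farina-style sweep deconvolution, not the speaker's own measurement chain; the sweep range, duration, sampling rate, and toy "room" are invented values for the example.

```python
import numpy as np

def exponential_sine_sweep(f1, f2, duration, fs):
    """Farina-style ESS and its inverse filter: convolving a recorded
    response with the inverse filter recovers the impulse response."""
    t = np.arange(int(duration * fs)) / fs
    R = np.log(f2 / f1)                       # sweep "rate" constant
    sweep = np.sin(2 * np.pi * f1 * duration / R
                   * (np.exp(t * R / duration) - 1))
    # Time-reversed sweep, amplitude-weighted to flatten the spectrum
    inverse = sweep[::-1] / np.exp(t * R / duration)
    return sweep, inverse

fs = 8000
sweep, inv = exponential_sine_sweep(50.0, 2000.0, 2.0, fs)

# Simulate a "measurement" through a toy room: direct sound plus one echo
h_true = np.zeros(200)
h_true[0], h_true[120] = 1.0, 0.4
recorded = np.convolve(sweep, h_true)

# Deconvolution: the IR appears centred near index len(sweep) - 1
ir = np.convolve(recorded, inv)
```

One attraction of the ESS over MLS in noisy venues, as discussed in the talk's context, is that harmonic distortion products are pushed to negative delays, ahead of the recovered IR, where they can be windowed away.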

Speaker Bio

Dario D’Orazio obtained his M.Sc. degree in Electronic Engineering and his PhD in Applied Acoustics at the University of Bologna, Italy, in 2007 and 2011 respectively. He is currently a senior postdoctoral fellow at the Department of Industrial Engineering of the same university. His research involves room acoustics, material properties, classrooms, and open-plan offices. He also works as a part-time acoustic consultant for opera houses (Galli Theatre of Rimini, Duse Theatre of Bologna), auditoria (Le Torri dell’acqua), classrooms (the former Faculty of Letters and Philosophy at Bologna University), cinemas (the Fulgor in Fellini’s house), and worship spaces (the Varignano churches in Viareggio).

Archontis Politis: Reproducing recorded spatial sound scenes – Parametric and non-parametric approaches

When: Monday 1st April 2019 @ 5:30 PM

Where: Room 4.31/4.33, Informatics Forum, 10 Crichton St, University of Edinburgh

Title: Reproducing recorded spatial sound scenes – Parametric and non-parametric approaches

Speaker: Dr Archontis Politis (Tampere University, Finland)


Spatial sound reproduction methods for recorded sound scenes are an active field of research, in parallel with evolving vision-related and multi-modal technologies that aim to deliver a new generation of immersive multimedia content to the user. In contrast to earlier channel-based surround approaches, modern spatial audio requirements demand methods that can handle fully immersive content and are flexible in terms of rendering to various playback systems. This presentation gives an overview of such methods, with a distinction between parametric and non-parametric approaches. Non-parametric methods make no assumptions about the sound scene itself, and distribute the recordings to the playback channels based only on specifications of the recording setup and the playback system. Parametric methods additionally assume a model of the sound scene content and aim to adaptively estimate its parameters from the recordings. Some representative approaches from both categories are presented, with emphasis on some of the methods co-developed by the presenter at the Acoustics Lab of Aalto University, Finland.
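As a concrete illustration of the non-parametric idea, the sketch below builds a first-order Ambisonic "sampling" decoder: the decode matrix depends only on the loudspeaker layout, never on the scene content. This is a hypothetical minimal example (horizontal-only, normalization details glossed over), not one of the speaker's co-developed methods.

```python
import numpy as np

def encode_fo(azimuth_deg):
    """First-order B-format (W, X, Y) coefficients for a horizontal
    plane wave arriving from the given azimuth."""
    az = np.radians(azimuth_deg)
    return np.array([1.0, np.cos(az), np.sin(az)])

def sampling_decoder(speaker_az_deg):
    """Decode matrix built from the playback layout alone: each row
    'samples' the first-order directivity pattern in one loudspeaker
    direction -- no information about the scene is used."""
    az = np.radians(np.asarray(speaker_az_deg, dtype=float))
    return 0.5 * np.column_stack([np.ones_like(az), np.cos(az), np.sin(az)])

D = sampling_decoder([45, 135, 225, 315])   # square loudspeaker layout
gains = D @ encode_fo(45.0)                 # source straight at speaker 0
# gains ~ [1.0, 0.5, 0.0, 0.5]: loudest at the matching loudspeaker
```

A parametric method would instead analyse the recorded channels (e.g. estimating a dominant direction and a diffuseness per time-frequency tile) and adapt the rendering accordingly.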

Speaker Bio

Archontis Politis obtained his M.Eng. degree in Civil Engineering at the Aristotle University of Thessaloniki, Greece, and his M.Sc. degree in Sound & Vibration Studies at the ISVR, University of Southampton, UK, in 2006 and 2008 respectively. From 2008 to 2010 he worked as a graduate acoustic consultant at Arup Acoustics, Glasgow, UK, and as a researcher in a joint collaboration between Arup Acoustics and the Glasgow School of Art, on interactive auralization of architectural spaces using 3D sound techniques. In 2016 he obtained his doctoral degree on the topic of parametric spatial sound recording and reproduction from Aalto University, Finland. He also completed an internship at the Audio and Acoustics Research Group of Microsoft Research during the summer of 2015. He is currently a post-doctoral researcher at Tampere University, Finland. His research interests include spatial audio technologies, virtual acoustics, array signal processing, and acoustic scene analysis.

Braxton Boren: Acoustic Simulation of Soundscapes from History

When: Monday 11th March 2019 @ 5:10 PM

Where: Atrium (G.10), Alison House, Nicolson Square

Title: Acoustic Simulation of Soundscapes from History

Speaker: Dr Braxton Boren (American University, USA)


Computational acoustic simulation has been used as a tool to realize unbuilt spaces in architectural design, or purely virtual spaces in video game audio. However, another important application of this technology is the capacity to recreate sounds and soundscapes that no longer exist, for historical and musicological research. Most of our historical knowledge is visually oriented: the size, color, or texture of places and people has been recordable in some form for millennia. Sound, by contrast, is transient and quickly decaying, and could not generally be recorded until the 19th century. Because of this, our conception of history is more like a photo album than a movie: the sounds of performance spaces or charismatic speakers are mostly left to our imagination. In the past decade, however, computational acoustic simulation has offered a lens into sounds from the past, allowing us to predict with high accuracy the role of sound in different spaces and historical contexts. This talk will give examples of using acoustic modeling to simulate the influence of changing church acoustics on Western music, focusing especially on Renaissance Venice and Baroque Leipzig. The talk will also examine the role of acoustics and speech intelligibility in oratory and speeches to large crowds before electronic amplification was available, focusing on the examples of George Whitefield in 18th-century London and Julius Caesar during the Roman Civil War.

Speaker Bio

Braxton Boren is Assistant Professor of Audio Technology at American University, where he joined the faculty in Fall 2017. He received a BA in Music Technology from Northwestern University, where he was the valedictorian of the Bienen School of Music in 2008. He was awarded a Gates Cambridge Scholarship to attend the University of Cambridge to research computational acoustic simulation, earning his MPhil in Physics in 2010. He completed his Ph.D. in Music Technology at MARL, the Music and Audio Research Laboratory at New York University, in 2014. He was a postdoctoral researcher in spatial audio over headphones at Princeton University’s 3D Audio and Applied Acoustics Laboratory from 2014 to 2016, and taught high school Geometry from 2016 to 2017 in Bedford-Stuyvesant, NY.

Jens Ahrens: Current Trends in Binaural Auralization of Microphone Array Recordings

When: Wednesday 9th January 2019 @ 5:10 PM

Where: Room 4.31/4.33, Informatics Forum, 10 Crichton Street

Title: Current Trends in Binaural Auralization of Microphone Array Recordings

Speaker: Dr Jens Ahrens (Chalmers University of Technology, Sweden)


Many approaches for the capture and auralization of real acoustic spaces have been proposed over the past century. Limited spatial resolution on the capture side has typically been the factor forcing compromises in the achievable authenticity of the auralization. Recent advances in the field of microphone arrays provide new perspectives, particularly for headphone-based auralization. It has been shown that head-tracked binaural auralization of the data captured by a bowling-ball-sized spherical array of around 90 microphones can create signals at the ears of the listener that are perceptually almost indistinguishable from the ear signals that arise in the original space. Promising results have also been obtained with smaller arrays and fewer microphones. In this talk, we provide an overview of current activities in the research community and demonstrate the latest advancements and remaining challenges.

Speaker Bio

Jens Ahrens has been an Associate Professor and head of the Audio Technology Group within the Division of Applied Acoustics at Chalmers since 2016. He has also been a Visiting Professor at the Applied Psychoacoustics Lab at University of Huddersfield, UK, since 2018. Jens received his Diploma (equivalent to a M.Sc.) in Electrical Engineering/Sound Engineering jointly from Graz University of Technology and the University of Music and Dramatic Arts, Graz, Austria, in 2005. He completed his Doctoral Degree (Dr.-Ing.) at the Technische Universität Berlin, Germany, in 2010. From 2006 to 2011, he was a member of the Audio Technology Group at Deutsche Telekom Laboratories / TU Berlin where he worked on the topic of sound field synthesis. From 2011 to 2013, Jens was a Postdoctoral Researcher at Microsoft Research in Redmond, Washington, USA. Thereafter, he re-joined the Quality and Usability Lab at the Technische Universität Berlin. In the fall and winter terms of 2015/16, he was a Visiting Scholar at the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University, California, USA.

Jonathan Hargreaves: Auralising rooms from computer models

When: Tuesday 4th October, 2016 @ 5:10 PM

Where: Room 4.31/4.33, Informatics Forum, University of Edinburgh

Seminar Title

Auralising rooms from computer models: Seeking synergies between geometric and wave-based techniques

Seminar Speaker

Dr. Jonathan Hargreaves (Acoustics Research Centre, University of Salford).

Seminar Abstract

Auralisation of a space requires measured or simulated data covering the full audible frequency spectrum. For numerical simulation this is extremely challenging, since that bandwidth covers many octaves in which the wavelength changes from being large with respect to features of the space to being comparatively much smaller. Hence the most efficient way of describing acoustic propagation changes from wave descriptions at low frequencies to geometric ray and sound-beam energy descriptions at high frequencies, and these differences are reflected in the disparate classes of algorithms that are applied. Geometric propagation assumptions yield efficient algorithms, but the maximum accuracy they can achieve is limited by how well the geometric assumption represents sound propagation in a given scenario; this compromises their accuracy at low frequencies in particular. Methods that directly model wave effects are more accurate but have a computational cost that scales with problem size and frequency, thereby limiting them to small or low-frequency scenarios. Hence it is often necessary to operate two algorithms in parallel handling the different bandwidths. Due to their differing formulations, however, combining the output data can be a rather arbitrary process.

This talk will attempt to address this disparity by examining synergies between the Boundary Element Method (BEM) and geometric approaches. Specifically, it will focus on how the use of appropriately chosen oscillatory basis functions in BEM can produce leading-order geometric behaviour at high frequencies. How this produces synergies that might ultimately lead to a single unified full-bandwidth algorithm for the early part of the response will be discussed.
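To make the geometric side of this comparison concrete, the sketch below implements the simplest possible image-source model: specular reflections between two parallel rigid walls (a 1-D slice of a shoebox room) with a frequency-independent reflection coefficient. This is an illustrative toy, not the BEM formulation discussed in the talk; all names and parameter values are invented for the example.

```python
import numpy as np

def image_source_1d(L, src, rec, c=340.0, fs=8000, order=2, beta=0.5):
    """Impulse response between two parallel walls a distance L apart,
    via the image-source method: each image source at 2*m*L +/- src
    contributes a delayed spike with 1/d spreading and a factor beta
    per wall bounce."""
    n_samp = int((2 * order + 1) * L / c * fs) + 2
    h = np.zeros(n_samp)
    for m in range(-order, order + 1):
        # +src images undergo 2|m| reflections; -src images |2m - 1|
        for sign, n_refl in ((+1, 2 * abs(m)), (-1, abs(2 * m - 1))):
            d = abs(2 * m * L + sign * src - rec)
            k = int(round(d / c * fs))
            if d > 0 and k < n_samp:
                h[k] += beta ** n_refl / d
    return h

h = image_source_1d(L=10.0, src=3.0, rec=7.0)
```

The geometric assumption is visible in the code: propagation is reduced to delays and energy factors along straight ray paths, which is exactly what breaks down when the wavelength becomes comparable to the geometry.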

Speaker Bio

Jonathan was awarded an MEng in Engineering & Computing Science from the University of Oxford in 2000 and a PhD in Acoustic Engineering from the University of Salford in 2007, where he remains as a Lecturer in Acoustics, Audio and Broadcast Engineering. The title of his PhD thesis was “Time Domain Boundary Element Methods for Room Acoustics” and this remains influential in his current research areas. Jonathan has had the pleasure of being involved in a wide variety of public engagement activities, including a number of TV appearances, and is passionate about performing, engineering and enjoying live music. He was awarded the UK Institute of Acoustics’ Tyndall Medal in September 2016.

Federico Avanzini: A mixed structural modeling approach to personalized 3D binaural sound rendering

When: Thursday, May 15th, 2014 @ 5:00 PM – 6:00 PM

Where: Room G.07/a, Informatics Forum, Crichton Street, University of Edinburgh

Seminar Title

A mixed structural modeling approach to personalized 3D binaural sound rendering

Seminar Speaker(s)

Dr. Federico Avanzini (Sound and Music Computing Group, Dept. of Information Engineering, University of Padova, Italy)

Seminar Abstract

This seminar presents ongoing research on a novel approach to 3D sound rendering that uses personalized head-related transfer functions (HRTFs) to synthesize binaural sound signals delivered through headphones.

We will briefly discuss issues related to HRTF measurement, processing, and modeling, as well as their implications in a 3D sound rendering pipeline. As these transfer functions are strongly subject dependent, special emphasis will be given to the problem of individual HRTF measurements and individual customization of HRTF models.

We will then introduce and formalize a novel approach to HRTF modeling, called Mixed Structural Modeling (MSM). This can be regarded as a generalization and extension of the structural modeling approach first defined by Brown and Duda back in 1998. Thanks to the flexibility of the MSM approach, a large number of solutions for building custom binaural audio displays can be considered and evaluated, with the final goal of constructing a HRTF model that is fully customizable depending on individual user anthropometry.

Possible solutions for building partial HRTFs (pHRTFs) of the head, torso, and pinna of a specific listener will be described and exemplified. Finally, some example applications to the design of personal auditory displays in multimodal virtual environments will be illustrated.
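At rendering time, a binaural display built on such HRTFs reduces, per direction, to a pair of convolutions. The sketch below uses invented toy HRIRs (a bare interaural time and level difference) in place of measured or model-derived filters; in the structural-modeling view, those filters would instead be assembled from head, torso, and pinna components matched to the listener.

```python
import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    """Spatialise a mono signal at one fixed direction by convolving it
    with the left- and right-ear head-related impulse responses."""
    return np.convolve(mono, hrir_left), np.convolve(mono, hrir_right)

# Toy HRIRs: the far (right) ear hears a delayed, head-shadowed copy
hrir_l = np.zeros(32); hrir_l[0] = 1.0
hrir_r = np.zeros(32); hrir_r[5] = 0.6   # 5-sample ITD, ~ -4.4 dB ILD

sig = np.random.default_rng(0).standard_normal(1024)
left, right = binaural_render(sig, hrir_l, hrir_r)
```

Real pHRTFs additionally encode the pinna's direction-dependent spectral notches, which is precisely where individual customization matters most.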

Speaker Bio

Federico Avanzini received the Ph.D. degree in Information Engineering from the University of Padova, Italy, in 2001. Since 2002 he has been with the Sound and Music Computing group at the Department of Information Engineering of the University of Padova, where he is currently Assistant Professor. His main research interests are in sound synthesis and processing, and multimodal interaction. He has been a key researcher in numerous European and national projects, and PI of the EU project DREAM (Culture2007) and of industry-funded projects.

He has authored more than 100 publications in peer-reviewed international journals and conference proceedings. He has served on numerous program committees and editorial committees of international journals and conferences. He was the General Chair of the 2011 International Sound and Music Computing Conference. He is currently Associate Editor of Acta Acustica united with Acustica, the journal of the European Acoustics Association, for the subject of Musical Acoustics.

Lauri Savioja: Room Acoustics Modelling and Parallel Computation

When: November 6th, 2013 @ 5:00 PM – 6:00 PM

Where: Room 4.33, Informatics Forum, Crichton Street, University of Edinburgh

Seminar Title

Room Acoustics Modelling and Parallel Computation

Seminar Speaker(s)

Prof. Lauri Savioja (Aalto University, Finland)

Seminar Abstract

All room acoustic modeling techniques are computationally demanding due to the underlying complexity of sound propagation. Solving the wave equation numerically is the most rigorous way to model room acoustics, but at higher frequencies the computational costs become excessive. For this reason, in that range acoustic simulations are typically based on an approximation known as geometrical acoustics, in which sound is assumed to propagate as rays. This presentation will give an overview of different room acoustic modeling techniques.
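The wave-equation side of this picture can be illustrated with the standard leapfrog finite-difference time-domain (FDTD) scheme, one common numerical solver among those the talk surveys. Grid size, step count, and Courant number below are arbitrary toy choices, and the rolled array edges give periodic boundaries rather than physical walls.

```python
import numpy as np

def fdtd_2d(nx=64, ny=64, steps=10, courant=0.5):
    """Second-order leapfrog FDTD for the 2-D wave equation:
    p[n+1] = 2*p[n] - p[n-1] + lam^2 * laplacian(p[n]),
    where lam = c*dt/dx (stable in 2-D for lam <= 1/sqrt(2))."""
    p_prev = np.zeros((nx, ny))
    p = np.zeros((nx, ny))
    p[nx // 2, ny // 2] = 1.0               # impulsive point source
    lam2 = courant ** 2
    for _ in range(steps):
        # Five-point Laplacian via periodic shifts of the pressure grid
        lap = (np.roll(p, 1, axis=0) + np.roll(p, -1, axis=0)
               + np.roll(p, 1, axis=1) + np.roll(p, -1, axis=1) - 4 * p)
        p_prev, p = p, 2 * p - p_prev + lam2 * lap
    return p

field = fdtd_2d()
```

The cost argument in the abstract is visible here: doubling the top frequency halves dx (and dt), so a 3-D simulation grows by roughly a factor of sixteen, which is why geometrical acoustics takes over at high frequencies.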

Traditionally, all these modeling methods have been implemented for the central processing unit (CPU) of the computer. However, recent trends in the development of graphics processing units (GPUs) and other many-core architectures are resulting in a paradigm shift. Modern GPUs are massively parallel processors whose peak computational performance can be ten times that of current CPUs. This presentation aims to give an overview of GPU architectures and discuss how to design algorithms that map well to a GPU.

The main focus of this presentation is to bind the two above-mentioned areas together and to present how GPUs can be utilized in room acoustic modeling. The case studies illustrate that both numerical wave-equation solvers and geometrical acoustics techniques can gain remarkable performance boosts from GPUs. Current and foreseen performance bottlenecks and challenges are discussed as well.


Speaker Bio

Lauri Savioja received the M.Sc., the Licentiate of Science, and the Doctor of Science in Technology degrees from the Helsinki University of Technology (TKK), Espoo, Finland, in 1991, 1995, and 1999, respectively. In all of those degrees he majored in computer science, and the topic of his doctoral thesis was room acoustic modeling.

He worked at the TKK Laboratory of Telecommunications Software and Multimedia as a researcher, lecturer, and professor from 1995 until the formation of Aalto University, where he currently works as a professor in the Department of Media Technology, School of Science. He spent the academic year 2009-2010 as a visiting researcher at NVIDIA Research. His research interests include room acoustics, virtual reality, and parallel computation.

Prof. Savioja is a senior member of the IEEE, a member of the ACM and the AES, and a life member of the Acoustical Society of Finland. Since 2010 he has been an Associate Editor of the IEEE Transactions on Audio, Speech, and Language Processing.

Damian Murphy: Strategies in Acoustic Simulation – Big Rooms and Small Voices

When: November 21, 2012 @ 5:30 PM – 6:30 PM

Where: Room 4.33, Informatics Forum, University of Edinburgh

Seminar Title

Strategies in Acoustic Simulation – Big Rooms and Small Voices

Seminar Speaker

Dr Damian Murphy (AudioLab, University of York, UK)

Seminar Abstract

Acoustic modelling and simulation work in the AudioLab, University of York, has focused most recently on two areas. In the first, 3-D models of the vocal tract have been measured using MRI, and acoustic impulse responses obtained based on 3-D Digital Waveguide Mesh techniques. This seminar will present the results of a study that worked with five professional singers to obtain 3-D physiological data to be used as the basis for these models, as well as appropriate source material for model excitation, and acoustic data for benchmarking the audio quality of the final simulations. At the other end of the modelling scale, research in auralization aims to simulate arbitrary, large-volume 3-D geometries, as found in concert halls or auditoria. In this case, 3-D Finite Difference Time Domain methods are used, although the size of the problem domain means that full audio bandwidth is still unrealistic for reasonable computation times. A hybrid modelling solution is therefore employed, with a view to obtaining results in a timeframe more appropriate to be useful as part of the acoustic design and auralization process.

Speaker Bio

Dr Damian Murphy is a Reader in the AudioLab, Department of Electronics, University of York, where he has been since 2000. His research focuses on virtual acoustics, spatial audio, physical modelling, and audio signal processing, and he has been principal investigator on a number of AHRC- and EPSRC-funded projects in these areas. He has published over 80 journal articles, conference papers, and books in the area, and is a member of the Audio Engineering Society and a Fellow of the Higher Education Academy. He is a visiting lecturer at the Department of Speech, Music and Hearing at KTH, Stockholm, where he specialises in spatial audio and acoustics, and has held visiting research status at a number of universities internationally.

Dr Murphy is also an active sound artist and in 2004 was appointed as one of the UK’s first AHRC/ACE Arts and Science Research Fellows, investigating the compositional and aesthetic aspects of sound spatialisation, acoustic modelling techniques and the acoustics of heritage spaces. His work has been presented in galleries nationally and at festivals and venues internationally and included varied collaborations with interactive digital and visual artists, photographers, poets and archaeologists.