
Henna Tahvanainen: An acoustician’s exploration at the nexus of guitar craftsmanship and technology

When: Wednesday 29th November 2023 @ 5:15 PM

Where: The Atrium (G.10), Alison House, 12 Nicholson Sq, University of Edinburgh

Title: An acoustician’s exploration at the nexus of guitar craftsmanship and technology

Speakers: Henna Tahvanainen (University of Bologna)

Abstract

This talk delves into the intriguing role of an acoustician at the crossroads of science and lutherie, focusing on the guitar industry. The discussion centres on the research that goes into understanding the sound and vibration of guitars, and how new technologies can be applied in guitar making and string instrument making in general.
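
As a flavour of the kind of vibration analysis mentioned above, the sketch below estimates a guitar body’s low-frequency resonances from a tap-test recording by peak-picking the magnitude spectrum. It is a minimal illustration only, not material from the talk; the file name, frequency band, and peak threshold are all assumptions.

```python
# Minimal sketch (not from the talk): estimate low-frequency body
# resonances of a guitar from a tap-test recording by peak-picking
# the magnitude spectrum. File name, band, and threshold are assumed.
import numpy as np
from scipy.io import wavfile
from scipy.signal import find_peaks

fs, x = wavfile.read("tap_response.wav")   # hypothetical recording
x = x.astype(float)
if x.ndim > 1:
    x = x.mean(axis=1)                     # mix down to mono

# Windowed magnitude spectrum of the decaying tap response
X = np.abs(np.fft.rfft(x * np.hanning(len(x))))
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)

# Guitar body modes typically sit in the low hundreds of hertz
band = (freqs > 50) & (freqs < 400)
peaks, _ = find_peaks(X[band], prominence=0.1 * X[band].max())
print("Estimated body resonances (Hz):", np.round(freqs[band][peaks], 1))
```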

Speaker Bio

Henna is a Postdoctoral Fellow at the University of Bologna within the NEMUS project, working on the numerical restoration of the harpsichord and kantele. She has a Bachelor’s degree in Theoretical Physics from the University of Helsinki (2009), and Master’s (2012) and PhD (2021) degrees in Acoustics and Audio Signal Processing from Aalto University. She has also obtained a professional teacher certificate from the Tampere University of Applied Sciences. She has previously worked as a research engineer on acoustic guitars at Yamaha, Japan, and as an acoustic consultant specialising in ground-borne vibration modelling and measurement. She is also a part-time lecturer in Digital Signal Processing and Acoustics at UniArts Helsinki. She serves as a board member of the Acoustical Society of Finland and the EAA Technical Committee on Musical Acoustics. Henna is passionate about understanding, modelling, and developing all kinds of string instruments. She is a hobbyist player and an amateur maker of the kantele.

Fabian Brinkmann: Head-related transfer function interpolation: an overview of algorithmic approaches

When: Wednesday 22nd November 2023 @ 5:15 PM

Where: The Atrium (G.10), Alison House, 12 Nicholson Sq, University of Edinburgh

Title: Head-Related Transfer Function Interpolation: An Overview of Algorithmic Approaches

Speakers: Fabian Brinkmann (Technical University of Berlin)

Abstract

Head-related transfer functions (HRTFs) describe the free-field sound propagation from a source to the listener’s ears. Due to reflections and diffraction at the listener’s ears, head, and torso, these filters contain information that the human auditory system evaluates to infer the position of the source. HRTFs are essential for almost all virtual and augmented reality systems that convey spatial auditory information. Since the auditory system is highly sensitive to changes in source position, such systems require HRTFs with high spatial resolution. In this talk I will give an overview of algorithmic, i.e. non-learning-based, approaches to HRTF interpolation, which provide a way to obtain HRTFs at arbitrary spatial resolution. Finally, I will compare the performance of algorithmic and learning-based approaches. To make the talk accessible to audiences from other acoustic disciplines, I will start with a brief introduction to HRTFs and spatial hearing.
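
As a concrete example of what “algorithmic” can mean here, the sketch below interpolates head-related impulse responses (HRIRs) by inverse-distance weighting of the nearest measured directions. The `grid` and `hrirs` data layouts are hypothetical assumptions; practical methods often work in the spherical-harmonic domain and time-align the responses first, and the talk surveys such approaches rather than this particular one.

```python
# Minimal sketch of a non-learning HRTF interpolation scheme:
# inverse-distance weighting of the k nearest measured HRIRs.
# The measurement grid and HRIR array are hypothetical placeholders.
import numpy as np

def sph_to_cart(az_deg, el_deg):
    az, el = np.radians(az_deg), np.radians(el_deg)
    return np.array([np.cos(el) * np.cos(az),
                     np.cos(el) * np.sin(az),
                     np.sin(el)])

def interpolate_hrir(az_deg, el_deg, grid, hrirs, k=3):
    """grid: (N, 2) azimuth/elevation in degrees;
       hrirs: (N, 2, L) left/right impulse responses."""
    target = sph_to_cart(az_deg, el_deg)
    dirs = np.stack([sph_to_cart(a, e) for a, e in grid])
    # Angular distance from the target to every measured direction
    ang = np.arccos(np.clip(dirs @ target, -1.0, 1.0))
    nearest = np.argsort(ang)[:k]
    if ang[nearest[0]] < 1e-9:        # exact match: no interpolation needed
        return hrirs[nearest[0]]
    w = 1.0 / ang[nearest]            # inverse-distance weights
    w /= w.sum()
    return np.tensordot(w, hrirs[nearest], axes=1)   # weighted sum, (2, L)
```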

Speaker bio

Fabian Brinkmann received his M.A. in Communication Sciences and Technical Acoustics in 2011 and his Dr. rer. nat. in 2019 from the Technische Universität Berlin, Germany. He focuses on signal processing and evaluation approaches for spatial audio, including the simulation and psychoacoustics of spatial hearing, room acoustic simulation, and the technical and perceptual evaluation of spatial audio algorithms. Since 2019, he has been a postdoctoral researcher at the Audio Communication Group at TU Berlin, where he co-leads the Binaural Processing and Perception team.


Enzo De Sena: Low-complexity Room Acoustic Rendering for AR/MR/VR

When: Tuesday 28th March 2023 @ 5:15 PM

Where: The Atrium (G.10), Alison House, 12 Nicholson Sq, University of Edinburgh

Title: Low-complexity Room Acoustic Rendering for AR/MR/VR

Speakers: Dr Enzo De Sena (University of Surrey)

Abstract

In order to achieve a high level of immersion in interactive AR/MR/VR applications, it is important to render room acoustics appropriately. Room acoustic rendering methods can be placed on a scale that runs from very high computational complexity and high physical accuracy to low computational complexity and accuracy. At the former end of the scale are methods based on space and time (or frequency) discretisation of the wave equation, followed by geometrical acoustics models, which make the simplifying assumption that sound travels as rays. At the other end of the scale are artificial reverberators, i.e. perceptual models that do not render the acoustics of a specific room but rather render certain desirable perceptual qualities of reverberated sound. This talk will focus on methods that aim to bridge the gap between physical and perceptual models, with the objective of achieving reasonably accurate perceptual rendering at low computational complexity.
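
To make the perceptual end of that scale concrete, the following is a minimal sketch of an artificial reverberator: a four-channel feedback delay network with an orthogonal feedback matrix. The delay lengths and gain are illustrative assumptions rather than values from the talk; the point is that the cost per sample is tiny even though no specific room is modelled.

```python
# Minimal sketch of an artificial reverberator: a tiny feedback delay
# network (FDN). Delay lengths and gain are illustrative, not tuned.
import numpy as np

def fdn_reverb(x, fs, delays_ms=(29.7, 37.1, 41.1, 43.7), g=0.85):
    delays = [int(fs * d / 1000.0) for d in delays_ms]
    bufs = [np.zeros(d) for d in delays]     # circular delay lines
    idx = [0, 0, 0, 0]
    # Scaled Hadamard matrix: orthogonal mixing, so g < 1 keeps the
    # feedback loop stable while energy is exchanged between lines.
    A = g * 0.5 * np.array([[1,  1,  1,  1],
                            [1, -1,  1, -1],
                            [1,  1, -1, -1],
                            [1, -1, -1,  1]])
    y = np.zeros(len(x))
    for n in range(len(x)):
        outs = np.array([bufs[i][idx[i]] for i in range(4)])
        y[n] = x[n] + outs.sum() / 4.0       # dry input plus diffuse tail
        feedback = A @ outs
        for i in range(4):
            bufs[i][idx[i]] = x[n] + feedback[i]
            idx[i] = (idx[i] + 1) % delays[i]
    return y
```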

Speaker Bio

Enzo De Sena is a Senior Lecturer at the Institute of Sound Recording at the University of Surrey. He received the M.Sc. degree (cum laude) in Telecommunication Engineering from the Università degli Studi di Napoli “Federico II,” Italy, in 2009 and the PhD degree in Electronic Engineering from King’s College London, UK, in 2013. Between 2013 and 2016 he was a postdoctoral researcher at KU Leuven, Belgium. He has held visiting researcher positions at Stanford University, Aalborg University, Imperial College London and King’s College London. He is a Senior Member of the IEEE and a member of the IEEE Audio and Acoustic Signal Processing Technical Committee. He currently serves as Associate Editor for EURASIP JASM and IEEE/ACM TASLP. He is a recipient of an EPSRC New Investigator Award and a co-recipient of best paper awards at WASPAA-21 and AVAR-22. He is due to chair the 27th Int. Conf. on Digital Audio Effects (DAFx-24). His research interests include room acoustics modelling, sound field reproduction, beamforming and binaural modelling. For more information see: desena.org.


Amelia Gully: An introduction to the acoustics of the human voice for forensic speaker recognition

When: Wednesday 8th March 2023 @ 5:15 PM

Where: The Atrium (G.10), Alison House, 12 Nicholson Sq, University of Edinburgh

Title: An introduction to the acoustics of the human voice for forensic speaker recognition

Speakers: Dr Amelia Gully (University of York)

Abstract

In this talk I will briefly introduce the acoustics of the human voice and give an overview of my research exploring how differences in vocal anatomy contribute to speaker-specific differences in the speech signal. I will also provide a general introduction to forensic audio analysis, with a specific focus on forensic speaker recognition and how the acoustics of the human voice can be used to help solve crimes.
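
The source-filter view of voice acoustics that underlies this work can be sketched in a few lines: a glottal-like pulse train is passed through resonators at the formant frequencies, and it is the speaker-specific placement of those formants, shaped by vocal tract anatomy, that forensic comparison draws on. The formant values below are illustrative textbook figures, not taken from the talk.

```python
# Minimal source-filter sketch: a glottal-like impulse train filtered
# through two formant resonators to approximate a vowel. Formant
# frequencies and bandwidths are illustrative assumptions.
import numpy as np
from scipy.signal import lfilter

fs = 16000
f0 = 120                                        # fundamental frequency (Hz)
n = np.arange(fs)                               # one second of samples
source = (n % int(fs / f0) == 0).astype(float)  # glottal-like impulse train

def formant(signal, fc, bw, fs):
    """Second-order all-pole resonator at centre frequency fc, bandwidth bw."""
    r = np.exp(-np.pi * bw / fs)
    theta = 2 * np.pi * fc / fs
    return lfilter([1.0 - r], [1.0, -2 * r * np.cos(theta), r * r], signal)

# Illustrative F1/F2 values in the region of an open back vowel
vowel = formant(formant(source, 700, 130, fs), 1220, 70, fs)
```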

Speaker Bio

Amelia Gully is a Lecturer in Speech Science at the University of York, and a Senior Research Scientist at Oxford Wave Research Ltd. Her research uses numerical acoustic modelling and geometric morphometric techniques to explore the connections between speaker-specific differences in vocal tract anatomy and identifying characteristics in the speech signal. Her work contributes to research and teaching in forensic speech science and acoustics. She also explores the wider forensic applications of acoustics and audio signal processing methods through her work with Oxford Wave Research. She holds a BSc in Audio Technology from the University of Salford, and an MSc in Digital Signal Processing and a PhD in Electronic Engineering, both from the University of York. She is a member of the International Speech Communication Association and a committee member for the UK Acoustics Network Computational Acoustics special interest group.


Craig Webb: Developing plugins using physical modelling synthesis and JUCE

When: Wednesday 16th November 2022 @ 5:15 PM

Where: The Atrium (G.10), Alison House, 12 Nicholson Sq, University of Edinburgh

Title: Developing plugins using physical modelling synthesis and JUCE

Speakers: Dr Craig J. Webb (Physical Audio Ltd.)

Abstract

This talk will examine the development of a new instrument plugin that is based on physical modelling synthesis. It will give a code-level overview of the workflow involved in creating and optimising real-time audio software using the JUCE framework. There will also be a demo of the latest prototype.
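
For readers unfamiliar with physical modelling synthesis, the sketch below shows its core idea: an explicit finite-difference update for a lossy string, of the kind a real-time C++/JUCE plugin would run inside its audio callback. All parameters are illustrative assumptions; the plugin’s actual model is the subject of the talk.

```python
# Minimal sketch of physical modelling synthesis: explicit
# finite-difference scheme for a lossy 1D string on a unit interval.
import numpy as np

fs = 44100                    # audio sample rate
f0 = 110.0                    # target fundamental (Hz)
sigma = 1.0                   # frequency-independent loss (1/s)
k = 1.0 / fs                  # time step
gamma = 2 * f0                # scaled wave speed for a unit-length string
N = int(1.0 / (gamma * k))    # grid intervals, chosen near the stability limit
h = 1.0 / N                   # grid spacing
lam = gamma * k / h           # Courant number; <= 1 guarantees stability

u = np.zeros(N + 1)           # displacement at time step n
u_prev = np.zeros(N + 1)      # displacement at time step n - 1
u[N // 3] = 1.0               # crude pluck-like initial condition
u_prev[:] = u                 # zero initial velocity

out = np.zeros(fs)            # one second of output
for n in range(fs):
    u_next = np.zeros(N + 1)  # fixed (zero) boundaries at both ends
    u_next[1:-1] = (2 * u[1:-1] - (1 - sigma * k) * u_prev[1:-1]
                    + lam**2 * (u[2:] - 2 * u[1:-1] + u[:-2])) / (1 + sigma * k)
    out[n] = u_next[N // 2]   # read output at an interior point
    u_prev, u = u, u_next
```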

Speaker Bio

Dr Craig J. Webb originally read Computer Science at the University of Bath. He then studied for the MSc in Acoustics & Music Technology at the University of Edinburgh, followed by a PhD as part of the NESS project run by Prof Stefan Bilbao. His thesis was on the parallelisation of algorithms for physical modelling synthesis. After postdoctoral work he began focussing on real-time audio systems and founded Physical Audio Ltd.