Category Archives: Musical Acoustics Computational Modelling

Craig Webb: Developing plugins using physical modelling synthesis and JUCE

When: Wednesday 16th November 2022 @ 5:15 PM

Where: The Atrium (G.10), Alison House, 12 Nicolson Sq, University of Edinburgh

Title: Developing plugins using physical modelling synthesis and JUCE

Speakers: Dr Craig J. Webb (Physical Audio Ltd.)

Abstract

This talk will examine the development of a new instrument plugin that is based on physical modelling synthesis. It will give a code-level overview of the workflow processes that are involved in creating and optimising real-time audio software using the JUCE framework. There will also be a demo of the latest prototype.
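
The abstract above is descriptive rather than technical, but a minimal sketch may help make the idea of a per-sample physical-modelling voice concrete. The C++ fragment below is purely illustrative (it is not Physical Audio’s code and does not depend on JUCE): it integrates a single damped mass-spring resonator with a finite-difference scheme. In a JUCE plugin, the inner loop would typically live inside AudioProcessor::processBlock, writing into the host-supplied audio buffer.

    // Illustrative sketch only: a damped mass-spring resonator, the simplest
    // kind of physical-modelling "voice". All parameter values are arbitrary.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    int main()
    {
        const double sampleRate = 48000.0;
        const double f0  = 440.0;                        // resonant frequency (Hz)
        const double t60 = 0.5;                          // decay time to -60 dB (s)
        const double pi  = 3.141592653589793;
        const double w0  = 2.0 * pi * f0;
        const double sig = 3.0 * std::log(10.0) / t60;   // loss coefficient from T60

        // Explicit finite-difference update of u'' + 2*sig*u' + w0^2*u = 0.
        const double k  = 1.0 / sampleRate;
        const double a1 = (2.0 - w0 * w0 * k * k) / (1.0 + sig * k);
        const double a2 = (1.0 - sig * k) / (1.0 + sig * k);

        double u1 = 1.0, u2 = 0.0;                       // impulsive initial condition
        std::vector<float> buffer(48000);                // one second of audio
        for (float& sample : buffer)
        {
            const double u0 = a1 * u1 - a2 * u2;         // two-step scheme update
            sample = static_cast<float>(u0);
            u2 = u1;
            u1 = u0;
        }
        std::printf("first samples: %f %f %f\n", buffer[0], buffer[1], buffer[2]);
        return 0;
    }

In a real plugin the frequency and decay would be mapped to user-facing parameters, and the loop would be vectorised and profiled, which is the kind of optimisation work the talk covers.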

Speaker Bio

Dr Craig J. Webb originally read Computer Science at the University of Bath. He then studied for the MSc in Acoustics & Music Technology at the University of Edinburgh, followed by a PhD as part of the NESS project led by Prof Stefan Bilbao. His thesis was on the parallelisation of algorithms for physical modelling synthesis. After post-doctoral work he began focussing on real-time audio systems and started Physical Audio Ltd.

Romain Michon: Embedded Real-Time Audio DSP With the Faust Programming Language

When: Wednesday 6th November 2019 @ 5:10 PM

Where: The Atrium (G.10), Alison House, 12 Nicolson Sq, University of Edinburgh

Title: Embedded Real-Time Audio DSP With the Faust Programming Language

Speakers: Dr Romain Michon (CCRMA, Stanford + GRAME-CNCM, Lyon, France)

Abstract

Faust is a Domain-Specific Language (DSL) for real-time audio Digital Signal Processing (DSP). The Faust compiler can generate code in various lower-level programming languages (e.g., C, C++, LLVM, Rust, WebAssembly) from high-level DSP specifications. The generated code can be embedded in wrappers that add specific features (e.g., MIDI, polyphony, OSC) and turn it into ready-to-use objects (e.g., audio plugins, standalones, mobile apps, web apps). More recently, Faust has been used extensively for low-level embedded audio programming on platforms such as microcontrollers, bare-metal Raspberry Pi, and FPGAs. Optimizations are made for specific processor architectures (e.g., use of intrinsics) but hidden from the user to keep the programming experience as smooth and as easy as possible. After giving a quick introduction to Faust, we’ll present an overview of the work carried out by the Faust team around embedded systems for audio. We’ll then present ongoing and future projects around this topic.
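
As a rough, hedged illustration of what “generated code embedded in wrappers” means in practice, the sketch below mimics the general shape of a Faust-generated C++ class: an init() method that receives the sample rate and a compute() method that fills non-interleaved output buffers from non-interleaved inputs. It is hand-written for this page (the body is just a one-pole low-pass), not actual compiler output, and real generated classes also carry metadata and UI-building methods.

    // Hand-written illustration (not actual Faust compiler output) of the rough
    // shape of a generated C++ DSP class. The "DSP" here is a one-pole low-pass.
    #include <cstdio>

    class mydsp {                        // Faust's default class name
    public:
        void init(int sampleRate)
        {
            fSampleRate = sampleRate;    // kept for illustration
            fRec0 = 0.0f;
        }
        int getNumInputs()  { return 1; }
        int getNumOutputs() { return 1; }
        void compute(int count, float** inputs, float** outputs)
        {
            const float a = 0.01f;       // smoothing coefficient (arbitrary)
            for (int i = 0; i < count; ++i) {
                fRec0 = fRec0 + a * (inputs[0][i] - fRec0);
                outputs[0][i] = fRec0;
            }
        }
    private:
        int   fSampleRate = 0;
        float fRec0 = 0.0f;
    };

    int main()
    {
        mydsp dsp;
        dsp.init(48000);
        float in[8]  = {1, 1, 1, 1, 1, 1, 1, 1};
        float out[8] = {};
        float* ins[1]  = {in};
        float* outs[1] = {out};
        dsp.compute(8, ins, outs);       // a wrapper would call this per block
        std::printf("%f %f\n", out[0], out[7]);
        return 0;
    }

A MIDI, OSC or plugin wrapper then owns an object of this kind, calls init() once and compute() once per audio block, which is what lets the same generated core target so many platforms.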

Speaker Bio

Romain Michon is a full-time researcher at GRAME-CNCM (Lyon, France) and a researcher and lecturer at the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University (USA). He has been involved in the development of the Faust programming language since 2008 and is now part of the core Faust development team at GRAME. Besides that, Romain’s research interests include embedded systems for real-time audio processing, Human-Computer Interaction (HCI), New Interfaces for Musical Expression (NIME), and the physical modeling of musical instruments.

Chris Buchanan: Singing Synthesis

When: Wednesday 2nd October 2019 @ 5:10 PM

Where: The Atrium (G.10), Alison House, 12 Nicolson Sq, University of Edinburgh

Title: Singing Synthesis

Speaker: Chris Buchanan (CereProc, Edinburgh, UK)

Abstract

In the last two years speech synthesis technology has changed beyond recognition. Being able to create seamless copies of voices is a reality, and the manipulation of voice quality using synthesis techniques can now produce dynamic audio content that is impossible to differentiate from natural spoken output. Speech synthesis is also vocal editing software: it will allow us to create artificial singing that is better than many human singers, graft expressive techniques from one singer to another, and, using analysis-by-synthesis, categorise and evaluate singing far beyond simple pitch estimation. How we apply and interface with this technology in the musical domain is at a cusp, and it is the audio engineering community that will, in the end, dictate how these new techniques are incorporated into the music technology of the future. In this talk we approach this expanding field in a modern context, give some examples, delve into the multi-faceted nature of singing user interfaces and the obstacles still to overcome, illustrate a novel avenue of singing modification, and discuss the future trajectory of this powerful technology from Text-to-Speech to music and audio engineering platforms.

Speaker Bio

Chris Buchanan is an Audio Development Engineer at CereProc. He graduated from the Acoustics & Music Technology MSc here at the University of Edinburgh in 2016, after three years as a signal-processing geophysicist at the French seismic imaging company CGG, and also holds a BSc in Mathematics from the same university. As a freelancer, Chris has worked on core DSP for companies such as Goodhertz and Signum Audio (in their professional dynamics monitoring suite), and has published research on 3D audio in collaboration with the Acoustics & Audio Group here. His research interests focus mainly on structural modelling/synthesis of the human voice and real-time 3D audio synthesis via structural modelling of the Head-Related Transfer Function. More recently he has taken on the challenge of singing synthesis, helping produce one of the world’s first Text-to-Singing (TTS) systems and thus enabling any voice to sing.

Kurt Werner: “Boom Like an 808” – Secrets of the TR-808 Bass Drum’s Circuit

When: Wednesday 27th March 2019 @ 5:10 PM

Where: Room 4.31/4.33, Informatics Forum, 10 Crichton St, University of Edinburgh

Title: “Boom Like an 808” – Secrets of the TR-808 Bass Drum’s Circuit

Speaker: Dr Kurt Werner (Queen’s University Belfast, UK)

Abstract

The Roland TR-808 kick drum is among the most iconic sounds in all of popular music. Analog drum machines like the TR-808 produce simulacra of percussive sounds using electrical “voice circuits,” whose schematics I treat as a primary text to be read alongside their reception history. I argue that these voice circuits and their schematics are the key to recovering a holistic history of analog drum synthesis. In this seminar, I’ll present a close reading of the TR-808 kick drum’s voice circuit and a study of its conceptual antecedents, highlighting the contributions of hobbyists and hackers, circuit theorists, and commercial instrument designers. This analysis reveals that while some aspects of the TR-808’s voice circuits are unremarkable, other aspects related to time-varying pitch shifts are unique and betray a deep understanding of traditional instrument acoustics. This investigation offers one answer to the question: why does the 808 sound so good?
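
As a purely behavioural illustration (not a model of the actual voice circuit discussed in the talk), the C++ sketch below synthesises an 808-style kick as a decaying sine whose frequency sweeps downward, one simple way to hear the “time-varying pitch shifts” the abstract points to. All constants are invented for the example, not measured from the circuit.

    // Behavioural sketch only: decaying sine with an exponential downward
    // pitch sweep, loosely evoking the 808 kick's time-varying pitch.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    int main()
    {
        const double sampleRate = 48000.0;
        const double fStart   = 120.0;   // initial frequency (Hz), illustrative
        const double fEnd     = 50.0;    // settled frequency (Hz), illustrative
        const double pitchTau = 0.02;    // pitch-sweep time constant (s)
        const double ampTau   = 0.25;    // amplitude-decay time constant (s)
        const double pi       = 3.141592653589793;

        std::vector<float> kick(static_cast<size_t>(sampleRate));  // 1 second
        double phase = 0.0;
        for (size_t n = 0; n < kick.size(); ++n) {
            const double t = n / sampleRate;
            const double f = fEnd + (fStart - fEnd) * std::exp(-t / pitchTau);
            phase += 2.0 * pi * f / sampleRate;
            kick[n] = static_cast<float>(std::exp(-t / ampTau) * std::sin(phase));
        }
        std::printf("sample near the attack: %f\n", kick[10]);
        return 0;
    }

The real circuit achieves its sweep and decay with analog components whose interaction is the subject of the talk; the sketch only shows what the resulting behaviour sounds like in broad strokes.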

Speaker Bio

Dr. Kurt James Werner is a Lecturer in Audio at the Sonic Arts Research Centre (SARC) of Queen’s University Belfast, where he joined the faculty of Arts, English, and Languages in early 2017. As a researcher, he studies theoretical aspects of Wave Digital Filters and other virtual analog topics, computer modeling of circuit-bent instruments, and the history of music technology. As part of his Ph.D. in Computer-Based Music Theory and Acoustics from Stanford University’s Center for Computer Research in Music and Acoustics (CCRMA), he wrote a doctoral dissertation entitled “Virtual Analog Modeling of Audio Circuitry Using Wave Digital Filters.” This proposed a number of new techniques for modeling audio circuitry, greatly expanding the class of circuits that can be modeled using the Wave Digital Filter approach to include circuits with complicated topologies and multiple nonlinear electrical elements. As a composer of electro-acoustic/acousmatic music, his music references elements of chiptunes, musique concrète, circuit bending, algorithmic/generative composition, and breakbeat.

Augusto Sarti: Plenacoustic Capturing and Rendering – A New Paradigm for Immersive Audio

When: Thursday 29th November 2018 @ 5:10 PM

Where: Atrium, Alison House, 12 Nicolson Square

Title: Plenacoustic Capturing and Rendering – A New Paradigm for Immersive Audio

Speaker: Augusto Sarti (Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB), Politecnico di Milano, Italy)

Abstract

Acoustic signal processing traditionally relies on divide-and-conquer strategies (e.g. Plane-Wave Decomposition) applied to descriptors such as the acoustic pressure, which are scalar functions of space and time, hence the name “Space-Time Audio Processing”. A plethora of applications have been developed which use arrays or spatial distributions of microphones and/or loudspeakers, or clusters thereof. Typical applications include localizing/tracking, characterizing and extracting acoustic sources, as well as estimating, processing and rendering acoustic fields. Such applications, however, are often hindered by the inherent limits of geometrical acoustics, by the far-field hypothesis of Fourier acoustics, and by the adverse acoustics of everyday environments.

In this talk I will discuss viable alternatives to traditional approaches to space-time processing of acoustic fields, based on alternative formulations and representations of soundfields. I will first discuss how the geometric and acoustic properties of the environment’s reflectors can be estimated and even used for boosting space-time audio processing algorithms. I will then introduce a soundfield representation that uses descriptors defined in the so-called “ray space” and show how this can lead to applications such as interactive soundfield modeling, processing and rendering. I will finally discuss how to rethink our signal decomposition strategy by introducing a novel wave-field decomposition methodology based on Gabor frames, which is more suitable for local (in the space-time domain) representations. Based on this new framework for computational acoustics, I will introduce the ray-space transform and show how it can be used for efficiently and effectively approaching a far wider range of problems than source separation and extraction, pushing the boundaries of environment inference, object-based manipulation of acoustic wavefields, interactive acoustics, and more.
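
For readers unfamiliar with the term, the standard textbook Gabor-frame expansion, written here for a one-dimensional signal rather than the speaker’s specific ray-space construction, decomposes a signal over time-frequency shifted copies of a window g:

    \[
        f(t) = \sum_{m,n \in \mathbb{Z}} c_{m,n}\, g(t - n a)\, e^{\,i 2 \pi m b t},
    \]

where a and b are the time and frequency hop sizes; frame theory guarantees a stable, local (windowed) representation when g, a and b are suitably chosen, which is what makes such expansions attractive for local descriptions of wavefields.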

Speaker Bio

Augusto Sarti received his Ph.D. in information engineering (1993) from the University of Padua, Italy, through a joint program with UC Berkeley, with a focus on nonlinear system theory. He has been with the Politecnico di Milano since 1993, where he is now a full professor, and held a double affiliation with UC Davis as a full professor from 2013 to 2017. His research interests lie in digital signal processing, with a focus on space-time audio processing; sound analysis, synthesis and processing; image analysis; and 3D vision. His main contributions are in sound synthesis (nonlinear wave digital filters); space-time audio processing (plenacoustic processing, visibility-based interactive acoustic modeling, geometry-based acoustic scene reconstruction, soundfield rendering); nonlinear system theory (Volterra system inversion, Wave Digital circuit simulation); computer vision; and image analysis and processing. He is a Senior Member of the IEEE, Senior Area Editor of IEEE Signal Processing Letters, and Associate Editor of the IEEE Transactions on Audio, Speech and Language Processing. He is an elected member of the IEEE Audio and Acoustic Signal Processing Technical Committee, a founding member of the European Acoustics Association (EAA) Technical Committee on Audio Signal Processing, Chairman of the EURASIP Special Area Team on Acoustic, Sound and Music Signal Processing, and an elected member of the Board of Directors of EURASIP.

Stefania Serafin: Sonic interaction design for cultural heritage

When: Wednesday 4th April, 2018 @ 5:10 PM

Where: Lecture Theatre B, James Clerk Maxwell Building, King’s Buildings, University of Edinburgh

Seminar Title

Sonic interaction design for cultural heritage

Seminar Speaker

Stefania Serafin (Aalborg University, Copenhagen, Denmark)

Seminar Abstract

Sonic interaction design is a relatively novel discipline at the intersection of interaction design and sound and music computing. In this talk I will present several examples of the use of sonic interaction design for reconstructing unusual musical instruments: recreating their sonorities using physics-based sound models, as well as reconstructing their physical interfaces using modern fabrication techniques and sensor technologies.

Speaker Bio

Stefania Serafin is professor of sound for multimodal environments at Aalborg University Copenhagen. She has previously been assistant and associate professor at the same university. She received a PhD from Stanford University in 2004. She is the president of the Sound and Music Computing Association and the project leader of the newly established Nordic Sound and Music Computing network: https://nordicsmc.create.aau.dk/

Colin Gough: Violin acoustics – An introduction and recent developments

When: Wednesday 5th April, 2017 @ 5:10 PM

Where: Lecture Room A (A2.04), Alison House, University of Edinburgh

Seminar Title

Violin acoustics – An introduction and recent developments

Seminar Speaker

Prof Colin Gough (Professor Emeritus at the School of Physics and Astronomy, University of Birmingham)

Seminar Abstract

A brief introduction will first be given to the acoustics of the violin, including an overview of recent advances in the measurement and understanding of the acoustical characteristics of many fine Italian instruments from the times of Stradivari and Guarneri. This will be followed by an overview of a model for the vibro-acoustic modes of the violin and related instruments, treating the violin as a shallow, thin-walled, guitar-shaped, box-like shell structure, with arched orthotropic top and back plates supported around their edges by thin ribs. The modes of such a structure are computed using finite element software as a quasi-experimental tool, varying the physical properties of the component sub-structures over a much wider range than is physically possible for the violin maker. This elucidates their individual roles in determining the acoustic properties over the whole radiating frequency range up to 10 kHz.
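
As a point of reference (not the speaker’s finite-element model, which treats an arched shell), the textbook equation of motion for a flat orthotropic Kirchhoff plate of thickness h and density ρ reads:

    \[
        \rho h\, \frac{\partial^2 w}{\partial t^2}
        + D_1 \frac{\partial^4 w}{\partial x^4}
        + 2 D_3 \frac{\partial^4 w}{\partial x^2 \partial y^2}
        + D_2 \frac{\partial^4 w}{\partial y^4} = 0,
    \]

where w(x, y, t) is the transverse displacement and D_1, D_2, D_3 are the bending stiffnesses along and across the grain; this directional stiffness is what “orthotropic” refers to in the description of the top and back plates.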

Speaker Bio

Colin Gough is an Emeritus Professor of Physics at the University of Birmingham (UK), where he researched the quantum wave-mechanical, ultrasonic and microwave properties of both normal and high-temperature superconductors. As a “weekend” professional violinist, musical acoustics has always been an added interest: he has published papers on various aspects of violin acoustics, taught and supervised courses and projects for Physics students and, more recently, at the annual Oberlin Violin Acoustics Workshops. He contributed chapters on musical acoustics and on the electric guitar and violin to Springer’s Handbook of Acoustics and The Science of String Instruments.

Federico Fontana: What do we play in the piano? (and) What in the piano do we hear?

When: Wednesday 29th March, 2017 @ 5:10 PM

Where: Atrium, Alison House (Nicolson Square), University of Edinburgh

Seminar Title

What do we play in the piano? (and) What in the piano do we hear?

Seminar Speaker

Federico Fontana (Department of Mathematics, Computer Science, and Physics, University of Udine, Italy)

Seminar Abstract

Both the versatility of the piano’s keyboard-based interface and the peculiarity of its sound generation mechanism make it an extremely interesting musical instrument from a research viewpoint. A successful audio engine, capable of synthesising state-of-the-art piano sounds in real time, will be presented, and an overview of the development process that led to an industrial product will also be outlined. Research questions follow about what pianists perceive while playing. Partial answers to these questions have come from recent research, now in press, about the role of tactile feedback during piano playing, and about the (yet to be understood) role of kinesthetic and visual feedback in the auditory lateralisation of piano tones.

Speaker Bio

Federico Fontana received the Laurea degree in electronic engineering from the University of Padova, Italy, in 1996 and the Ph.D. degree in computer science from the University of Verona, Italy, in 2003. During his Ph.D. studies, he was a research consultant in the design and realisation of real-time audio DSP systems. He is currently an Associate Professor in the Department of Mathematics, Computer Science and Physics, University of Udine, Italy, teaching audio-tactile interactions, computer networks and object-oriented programming. In 2001 he was a Visiting Scholar at the Laboratory of Acoustics and Audio Signal Processing, Helsinki University of Technology, Espoo, Finland, and in summer 2017 he visited the ICST at the Zurich University of the Arts. His current interests are in interactive sound processing methods and in the design and evaluation of musical interfaces. Professor Fontana coordinated the EU project 222107 NIW under the FP7 ICT-2007.8.0 FET-Open call from 2008 to 2011. He is an Associate Editor of the IEEE Transactions on Audio, Speech, and Language Processing. He coordinates the PhD program in Computer Science, Mathematics and Physics at his university and is Secretary of the Italian Association of Musical Informatics.

Gadi Sassoon: Advanced Sound Synthesis and Professional Studio Practice: A Working Composer’s Perspective on Usability and Parameter Mapping

When: Wednesday 3rd February, 2016 @ 5:30 PM

Where: Atrium, Alison House (Reid School of Music), University of Edinburgh

Seminar Title

Advanced Sound Synthesis and Professional Studio Practice: A Working Composer’s Perspective on Usability and Parameter Mapping. 

Seminar Speaker(s)

Gadi Sassoon (Ninja Tune)

Seminar Abstract

Contemporary composers and sound artists enjoy unprecedented choice in tools. In the last 15 years fast CPUs and a fertile consumer market have made advanced DSP processes widely available out of the box and in real-time on generic hardware. Intuitive UIs make once esoteric operations commonplace. Specialised hardware still offers powerful synth engines and interesting tactile surfaces. Apps for smartphones and tablets provide fully capable engines for a few pounds. Countless bottom-up projects keep launching new instruments and interfaces. The burgeoning Eurorack modular scene is bringing fresh, weird and wonderful synthesis ideas to life.

But what makes a good instrument, particularly a synthesis-based one? Is it accessibility or power? Flexibility or immediacy? Are these aspects necessarily at odds? What matters most to the musicians who spend their lives mastering these tools? And are there gaps that still need to be filled? The propositions of this seminar will focus on the key mediating ingredients of any synthesis-based tool: interface and parameter mapping. It will aim to provide some perspective on the balance between usability and depth of operation from the point of view of the end user.

An overview of advanced tools will be presented, with specifics from the speaker’s own practice, together with a discussion of their pros and cons in the context of a professional composer’s routine. The aim will be to present what’s lost and what’s gained in different design approaches to synthesis-based instruments, and to suggest considerations that programmers, engineers and designers can make to empower professional users.

Speaker Bio

Gadi Sassoon is a composer working for Just Isn’t Music, Ninja Tune’s publishing arm, specialising in electronic music and orchestral writing. He holds a BA in Music Synthesis and Composition from the Berklee College of Music and an MA in Sonic Arts from Middlesex University. Sassoon has published several solo records as Memory9, and regularly collaborates with high-profile recording acts in the UK and the US. His work has been heard in games, TV shows, adverts and trailers worldwide. He has realised sound installations for museums and galleries in Italy, the US, the UK, Germany, Spain and China, and has lectured at Berklee, Milan’s European Design Institute and Shanghai Normal University.

Thomas Hélie: New tools for modelling musical systems and exploring musical sound

When: Wednesday 20th January, 2016 @ 5:10 PM

Where: TBC, Informatics Forum, University of Edinburgh

Seminar Title

New tools for modelling musical systems and exploring musical sound. 

Seminar Speaker(s)

Dr. Thomas Hélie (Project-Team S3: Sound Signals and Systems, Laboratory of Sciences and Technologies for Music and Sound, IRCAM-CNRS-UPMC, Paris, France).

Seminar Abstract

In this presentation, I will introduce a few of the scientific and technological tools that we develop to model physical systems and to process sound signals. The introduction will be devoted to a short description of some activities of the Project-Team S3 in physical modelling, nonlinear systems, robotics, sound synthesis and sound analysis. Then, selected works will be examined in more depth:

  1. Port-Hamiltonian systems: sound synthesis based on the guaranteed-passive simulation of physical models (see the sketch after this list).
  2. Fractional filters: modelling and simulation of a class of low-pass filters whose slope can be continuously tuned from 0 (unit gain) to -6 decibels per octave (one-pole filter).
  3. SnailAnalyser: a frequency-domain analyser that delivers in real-time an intuitive representation of sounds based on the chromatic alignment of spectral active zones.
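
The guaranteed-passivity claim in item 1 rests on an energy-based state-space structure. As a hedged reminder, a standard (textbook) input-state-output port-Hamiltonian system, not necessarily the exact formulation used by the S3 team, takes the form

    \[
        \dot{x} = \bigl(J(x) - R(x)\bigr)\,\nabla H(x) + G(x)\,u,
        \qquad
        y = G(x)^{\mathsf{T}}\,\nabla H(x),
    \]

with stored energy H, skew-symmetric interconnection J = -J^T and positive semi-definite dissipation R, so that the power balance dH/dt = -∇H^T R ∇H + y^T u ≤ y^T u holds; numerical schemes that preserve this balance in discrete time yield passive (and hence stable) simulations by construction.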

Sound examples and demonstrations will be given along this presentation.

Speaker Bio

Thomas Hélie received the Dipl. Ing. degree from the Ecole Nationale Supérieure des Télécommunications de Bretagne (1997), the M.S. degree in acoustics, signal processing and informatics applied to music from Université Pierre et Marie Curie (1999), the M.S. degree in control theory and signal processing from Université Paris-Sud (1999), and the Ph.D. degree in control theory and signal processing from Université Paris-Sud (2002). After postdoctoral research in the Laboratory of Nonlinear Systems at the Swiss Federal Institute of Technology in Lausanne, Switzerland (2003) and a lecturer position at Université Paris-Sud (2004), he has been, since 2004, a researcher at the French National Centre for Scientific Research (CNRS) in the Analysis/Synthesis Team of the Laboratory of Sciences and Technologies for Music and Sound, IRCAM-CNRS-UPMC, Paris. He obtained his Habilitation à Diriger des Recherches from Université Pierre et Marie Curie (2013) and founded the Project-Team S3 “Sound Signals and Systems” (2015). His research topics include audio processing, the physics of musical instruments, physical modeling, nonlinear dynamical systems, and inversion processes.