Category Archives: General Interest

Romain Michon: Embedded Real-Time Audio DSP With the Faust Programming Language

When: Wednesday 6th November 2019 @ 5:10 PM

Where: The Atrium (G.10), Alison House, 12 Nicolson Sq, University of Edinburgh

Title: Embedded Real-Time Audio DSP With the Faust Programming Language

Speaker: Dr Romain Michon (CCRMA, Stanford + GRAME-CNCM, Lyon, France)

Abstract

Faust is a domain-specific language (DSL) for real-time audio digital signal processing (DSP). The Faust compiler can generate code in various lower-level programming languages (e.g., C, C++, LLVM IR, Rust, WebAssembly) from high-level DSP specifications. The generated code can be embedded in wrappers that add specific features (e.g., MIDI, polyphony, OSC) and turn it into ready-to-use objects (e.g., audio plugins, standalones, mobile apps, web apps). More recently, Faust has been used extensively for low-level embedded audio programming on platforms such as microcontrollers, bare-metal Raspberry Pi, and FPGAs. Optimizations for specific processor architectures (e.g., use of intrinsics) are applied but hidden from the user to keep the programming experience as smooth and easy as possible. After a quick introduction to Faust, we'll present an overview of the work carried out by the Faust team around embedded systems for audio, and then present ongoing and future projects on this topic.
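To make this workflow concrete, here is a minimal, hedged sketch (TypeScript for Node.js) that writes a one-line Faust program to disk and shells out to the Faust command-line compiler to generate a C++ class from it. The file name and class name are illustrative, and the exact set of compiler options may vary between Faust versions.

```typescript
// Illustrative sketch only: compile a tiny Faust program to C++ via the faust CLI.
import { execFileSync } from "node:child_process";
import { writeFileSync } from "node:fs";

// A one-line Faust program: a sine oscillator whose frequency is a UI slider.
const dsp = `import("stdfaust.lib");
process = os.osc(hslider("freq", 440, 50, 2000, 1)) * 0.5;`;
writeFileSync("osc.dsp", dsp);

// Ask the compiler for C++; other back-ends (c, rust, llvm, wasm) are selected the same way.
// File/class names here are placeholders, not part of any real project.
execFileSync("faust", ["-lang", "cpp", "-cn", "Osc", "osc.dsp", "-o", "Osc.cpp"], {
  stdio: "inherit",
});
```

The emitted Osc.cpp would then typically be compiled into one of the wrappers mentioned above (a plugin, standalone, or app template) rather than used on its own.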

Speaker Bio

Romain Michon is a full-time researcher at GRAME-CNCM (Lyon, France) and a researcher and lecturer at the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University (USA). He has been involved in the development of the Faust programming language since 2008 and is now part of the core Faust development team at GRAME. Besides that, Romain's research interests include embedded systems for real-time audio processing, Human-Computer Interaction (HCI), New Interfaces for Musical Expression (NIME), and physical modeling of musical instruments.

Li Su: AI and recent developments in music information retrieval

When: Wednesday 9th October 2019 @ 5:10 PM

Where: The Atrium (G.10), Alison House, 12 Nicolson Sq, University of Edinburgh

Title: AI and recent developments in music information retrieval

Speakers: Dr Li Su (Music and Culture Technology Laboratory, Institute of Information Science, Academia Sinica, Taiwan), Dr Yu-Fen Huang (University of Edinburgh), and Tsung-Ping Chen (Music and Culture Technology Laboratory, Institute of Information Science, Academia Sinica, Taiwan)

Abstract

In this talk, we will discuss how to apply deep learning to several challenging tasks in music information retrieval (MIR), including automatic music transcription, musical body-movement analysis, and automatic chord recognition. Automatic music transcription (AMT) refers to the process of converting music recordings into symbolic representations. Since music transcription is by no means easy even for humans, AMT has been one of the core challenges in MIR. Thanks to recent advances in computing power and deep learning, more and more AMT solutions are becoming applicable in real-world settings. We will first discuss the issues involved in solving the AMT problem from both signal processing and machine learning perspectives. We will then introduce proposed solutions for transcribing piano music, singing voice, and non-Western music. Possible applications, challenges, and future research directions will also be discussed.

Computational musicology is an appealing scenario in which to apply MIR techniques. In this talk, the potential to perform music analysis using computational and deep learning approaches is discussed. Our recent work analyses musical movement and identifies salient features in orchestral conducting movement using recurrent neural networks (RNNs). Our work applying deep learning to model chords and harmony in tonal music will also be introduced.

Speaker Bio

Dr Li Su received a B.S. degree in electronic engineering and mathematics from National Taiwan University in 2008, and a Ph.D. in communication engineering from National Taiwan University in 2012. He served as a postdoctoral research fellow in the Center of Information and Technology Innovation, Academia Sinica, from 2012 to 2016. Since 2017, he has been an Assistant Research Fellow in the Institute of Information Science, Academia Sinica. His research focuses on signal processing and machine learning for music information retrieval. His ongoing projects include automatic music transcription, style transfer, and AI-based music-visual animation. He has been a technical committee member of the International Society for Music Information Retrieval (ISMIR) since 2014.

Dr Yu-Fen Huang is a postdoctoral research fellow at the Music and Culture Technology Laboratory, Institute of Information Science, Academia Sinica, Taiwan. Her Ph.D. research at the University of Edinburgh (UK), supervised jointly in Music and in Sport Science, applied biomechanics and motion-capture technology to the analysis of orchestral conducting movement. She holds an M.A. in Musicology (National Taiwan University) and a B.A. in Music (National Taiwan Normal University). Her current research applies recurrent neural networks to explore stylistic features in different orchestral conductors' conducting.

Tsung-Ping Chen is a research assistant at the Music and Culture Technology Laboratory, Institute of Information Science, Academia Sinica, Taiwan. His research applies deep learning to model chords and harmony in tonal music, e.g., automatic chord recognition and chord generation. He holds an M.A. in Musicology (National Taiwan University), where his project studied the correlation between music and human physiology, and a B.S. in Engineering Science.

Chris Buchanan: Singing Synthesis

When: Wednesday 2nd October 2019 @ 5:10 PM

Where: The Atrium (G.10), Alison House, 12 Nicolson Sq, University of Edinburgh

Title: Singing Synthesis

Speaker: Chris Buchanan (Cereproc, Edinburgh, UK)

Abstract

In the last two years speech synthesis technology has changed beyond recognition. Creating seamless copies of voices is now a reality, and the manipulation of voice quality using synthesis techniques can produce dynamic audio content that is impossible to differentiate from natural spoken output. Speech synthesis is also vocal editing software: it will allow us to create artificial singing that is better than many human singers, graft expressive techniques from one singer to another, and, using analysis-by-synthesis, categorise and evaluate singing far beyond simple pitch estimation. How we apply and interface with this technology in the musical domain is at a cusp, and it is the audio engineering community that will, in the end, dictate how these new techniques are incorporated into the music technology of the future. In this talk we approach this expanding field in a modern context, give some examples, delve into the multi-faceted nature of singing user interfaces and the obstacles still to overcome, illustrate a novel avenue of singing modification, and discuss the future trajectory of this powerful technology from Text-to-Speech to music and audio engineering platforms.

Speaker Bio

Chris Buchanan is an Audio Development Engineer at CereProc. He graduated from the Acoustics & Music Technology MSc here at the University of Edinburgh in 2016, after 3 years as a signal processing geophysicist at the French seismic imaging company CGG, and also holds a BSc in Mathematics from the same university. As a freelancer, Chris has been involved with core DSP driving technology for the likes of Goodhertz and Signum Audio (in their professional dynamics monitoring suite), and has published research on 3D audio in collaboration with the Acoustics & Audio Group here. His research interests focus mainly on structural modelling/synthesis of the human voice and real-time 3D audio synthesis via structural modelling of the Head-Related Transfer Function. More recently he has taken on the challenge of singing synthesis, helping produce one of the world's first Text-to-Singing (TTS) systems and thus enabling any voice to sing.

Nick Collins: Musical Machine Listening in the Web Browser


When: Wednesday 14th November 2018 @ 5:10 PM

Where: Atrium, Alison House, 12 Nicolson Square

Title: Musical Machine Listening in the Web Browser


Speaker: Prof Nick Collins (Durham University)

Abstract

In this seminar, experiments so far with Web Audio API-based live audio analysis will be demoed and discussed. The new Musical Machine Listening Library (MMLL) for JavaScript will be introduced, as well as the current MIMIC (Musically Intelligent Machines Interacting Creatively) project, an AHRC-funded collaboration between Goldsmiths, Sussex, and Durham universities. A minimal code starting point for machine listening work in the browser will be explained, and I will demonstrate some more involved experiments in browser-based auditory modelling, onset detection, beat-tracking-based audio cutting, and the like.
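As a rough illustration of what such a minimal starting point can look like, the sketch below uses only the standard Web Audio API in TypeScript; it does not reproduce MMLL's own API. Live microphone input is analysed frame by frame and a crude spectral-flux measure flags possible onsets. The FFT size and threshold are arbitrary illustrative choices.

```typescript
// Generic browser machine-listening sketch (not MMLL): microphone -> AnalyserNode -> spectral flux.
async function startListening(): Promise<void> {
  const ctx = new AudioContext(); // in most browsers this must be created/resumed after a user gesture
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const source = ctx.createMediaStreamSource(stream);

  const analyser = ctx.createAnalyser();
  analyser.fftSize = 2048;
  source.connect(analyser); // analysis only; no need to connect to the output

  const spectrum = new Float32Array(analyser.frequencyBinCount);
  const previous = new Float32Array(analyser.frequencyBinCount);

  const tick = () => {
    analyser.getFloatFrequencyData(spectrum); // dB magnitude per bin
    // Spectral flux: summed positive change between successive frames.
    let flux = 0;
    for (let i = 0; i < spectrum.length; i++) {
      const diff = spectrum[i] - previous[i];
      if (diff > 0) flux += diff;
    }
    if (flux > 300) console.log("possible onset, flux =", flux.toFixed(1)); // crude fixed threshold
    previous.set(spectrum);
    requestAnimationFrame(tick);
  };
  tick();
}
```

A real system would replace the fixed threshold with an adaptive one and feed the detected onsets into beat tracking or audio cutting, as in the experiments described above.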

Speaker Bio

Nick Collins is a Professor in the Durham University Music Department with strong interests in artificial intelligence techniques applied within music, the computer and programming languages as musical instruments, and the history and practice of electronic music. He is a frequent international performer as composer-programmer-pianist or codiscian, from algoraves to electronic chamber music. Many research papers and much code and music are available from www.composerprogrammer.com.

Chris Pike: The Next Generation of Broadcast Audio

When: Wednesday 1st November, 2017 @ 6:00 PM

Where: Atrium (G.10), Alison House, Nicolson Square, University of Edinburgh

Seminar Title

The Next Generation of Broadcast Audio

Seminar Speaker

Chris Pike (BBC Research and Development, Salford)

Seminar Abstract

This talk will cover the work of the audio team in BBC Research & Development and the activity of the BBC Audio Research Partnership. Our aim is to keep the BBC at the cutting edge of audio technology, to enable new experiences for audiences and ensure continued high-quality services. We focus on new immersive, personalised and accessible content, enabled by so-called next-generation audio systems, including object-based audio. The talk will cover our recent efforts in academic research collaboration, standardisation, and innovative production trials for programmes such as Doctor Who and Planet Earth II.

Speaker Bio

Chris leads the audio research team at BBC R&D, which focusses on the technology for immersive and personalised listening experiences. He leads the BBC Audio Research Partnership, through which the BBC collaborates with world-class universities on audio research, and is active in industry bodies such as the ITU and the EBU. His work has seen the BBC using spatial audio with some of its biggest programme brands, such as the Proms, Planet Earth and Doctor Who. He is also a PhD candidate in the Audio Lab at the University of York, investigating the quality of binaural audio systems.

Colin Gough: Violin acoustics – An introduction and recent developments

When: Wednesday 5th April, 2017 @ 5:10 PM

Where: Lecture Room A (A2.04), Alison House, University of Edinburgh

Seminar Title

Violin acoustics – An introduction and recent developments

Seminar Speaker

Prof Colin Gough (Professor Emeritus at the School of Physics and Astronomy, University of Birmingham)

Seminar Abstract

A brief introduction will first be given to the acoustics of the violin, including an overview of recent advances in measurements and understanding of the acoustical characteristics of many fine Italian instruments from the times of Stradivari and Guarneri. This will be followed by an overview of a model for the vibro-acoustic modes of the violin and related instruments, treating the violin as a shallow, thin-walled, guitar-shaped, box-like shell structure, with arched orthotropic top and back plates supported around their edges by thin ribs. The modes of such a structure are computed using finite element software as a quasi-experimental tool, varying the physical properties of the component sub-structures over a much wider range than is physically possible for the violin maker. This elucidates their individual roles in determining the acoustic properties over the whole radiating frequency range up to 10 kHz.

Speaker Bio

Colin Gough is an Emeritus Professor of Physics at the University of Birmingham (UK), where he researched the quantum wave-mechanical, ultrasonic and microwave properties of both normal and high-temperature superconductors. As a "weekend" professional violinist, he has always had an added interest in musical acoustics, publishing papers on various aspects of violin acoustics, teaching and supervising courses and projects for Physics students, and more recently teaching at the annual Oberlin Violin Acoustics Workshops. He contributed chapters on musical acoustics and on the electric guitar and violin to Springer's Handbook of Acoustics and The Science of String Instruments.

David Antony Reid: The Luthier's Approach to Guitar Making

When: Wednesday 15th March, 2017 @ 5:10 PM

Where: Room 4.31/4.33, Informatics Forum, University of Edinburgh

Seminar Title

The Luthier's Approach to Guitar Making

Seminar Speaker

David Antony Reid (David Antony Reid Guitars, Perth, Scotland)

Seminar Abstract

At a seminar for the students and researchers of the University of Edinburgh, David hopes, through demonstrations, examples from nature, and the people who inspired him, to appeal to the scientific community to consider in detail some of the handmade musical instruments he shall display. He will explain why a scientific approach has been so difficult to pursue in lutherie, and why the guitar as an instrument has barely changed since its conception, in what is an extremely conservative market. David will discuss some of his own innovations, and will encourage audience participation and debate as to their scientific validity. His perspective draws on the famous words of Richard Feynman: "The test of all knowledge is experiment. Experiment is the sole judge of scientific 'truth'." A demonstration of his diffraction slits shall be given, as will a demonstration of tuning wood itself and of the different sounds different materials produce. He will outline why he feels that a blend of woods needs to be used in current guitars, and also why we may need to change the materials guitar makers use in the future. The talk shall be coupled with a slide show and a full explanation of the mechanical function of, and inspiration behind, the integral components of his instruments. A Q&A shall follow, with the opportunity for any guitarists to try out one of David's multi-award-winning instruments.

Speaker Bio

David Antony Reid is a true "hand-maker" of bespoke, contemporary and innovative stringed, fretted instruments. He uses a mixture of traditional, self-developed, and scientifically based construction methods. As a multi-award-winning luthier, David takes up to 400 hours to craft and sculpt each of his guitars, an approach born of his strong feelings about maintaining a constant touch-and-feel understanding of his materials and overall construction method. David has, in past times, developed what he feels to be a more energy-efficient, multi-reflection guitar back-and-sides design, increasing reverberation times; a multi-modal guitar top, separating most frequencies delivered from the fretboard/strings in a more balanced manner; and diffraction slits around the periphery of his vaulted-back guitars, allowing complete control over bass, mids, and treble (within the limits of the given size of the instrument) by opening or closing different configurations of said slits. As well as these progressions, David has also developed a few other ergonomic and energy-efficient innovations for the steel-strung acoustic guitar, a musical instrument that has barely changed in a commercial sense for over 180 years. Forced into retirement after 19 years of lutherie by severe work-related respiratory issues, instead of selling and exhibiting his wares worldwide David now wishes to venture into the scientific acoustic research of what is the world's second most popular musical instrument (after the human voice!).

Claudia Fritz: Stradivarius – Myth or Reality?

When: Wednesday 22nd February, 2017 @ 5:10 PM

Where: Room 4.31/4.33, Informatics Forum, University of Edinburgh

Seminar Title

Stradivarius – Myth or Reality?

Seminar Speaker

Claudia Fritz (Sorbonne Universités, UPMC Univ Paris 06, CNRS)

Seminar Abstract

Old Italian violins from the 18th century are so famous that Stradivarius has entered common language. Who has never heard of this Cremonese violin maker, and the astronomical prices reached by his instruments at widely broadcast auctions? Players are loquacious about their amazing qualities and consider that the superiority of many of these instruments remains unrivalled. However, numerous blind listening tests since the beginning of the 20th century have shown a preference for new violins. These tests have been criticised for being too informal, not rigorous enough, and in listening rather than playing conditions. After all, who can judge the instruments better than the players themselves? Scientific studies involving blind playing tests were thus conducted. The aim was to explore whether, when the identity of the violins was not revealed, old violins were still preferred and could be distinguished from their new counterparts. The results speak for themselves …

In addition, some correlations between perceptual evaluations and acoustical measurements as well as future developments using motion capture and bridges instrumented with piezoelectric sensors will be briefly presented.

Speaker Bio

After a PhD in cotutelle between France and Australia and a post-doc at the University of Cambridge (UK), Claudia Fritz has been a CNRS researcher at the Institut Jean le Rond d'Alembert, at University Pierre and Marie Curie in Paris, since 2009. Her main research interest is correlating the mechanical properties of musical (bowed-string) instruments with their perceptual properties, as evaluated by players and listeners. This involves all kinds of perceptual tests, as well as physical measurements to characterise the interaction of players with their instruments (for instance the speed, movement and force of the bow relative to the instrument) and vibro-acoustical measurements to characterise the response of the instruments under study. She was awarded the prestigious Bronze medal of her employer (CNRS) in 2016 for her recent work on double-blind studies involving new and old Italian violins, which has gained widespread international attention.

Annalisa Bonfiglio: Large-area, multi-modal sensing platforms based on organic transistors

When: Tuesday 7th February, 2017 @ 4:30 PM

Where: Lecture Theatre A, James Clerk Maxwell Building, University of Edinburgh

Seminar Title

Large-area, multi-modal sensing platforms based on organic transistors

Seminar Speaker

Prof. Annalisa Bonfiglio (Department of Electrical and Electronic Engineering, University of Cagliari)

Seminar Abstract

The ability of organic semiconductors to be electrically conductive while maintaining the structural and mechanical properties of organic materials may be successfully exploited for realising electronic devices and circuits that are flexible, transparent, and lightweight. In addition, they do not need expensive equipment for their fabrication.

All these features make organic devices ideal candidates for sensing systems for various physical-chemical parameters, in applications ranging from wearable electronics to robotic interfaces. The seminar will focus in particular on the main challenges, in terms of materials and device architectures, faced by these kinds of applications. A novel device for multi-modal sensing based on the charge-sensing ability of field-effect transistors will also be shown.

Speaker Bio

Annalisa Bonfiglio received the Laurea degree in Physics from the University of Genoa in 1991 and a Ph.D. in Bioengineering from the Politecnico di Milano in 1995. She is currently a Full Professor of Electronic Bioengineering at the University of Cagliari, Italy, where she also serves as Vice-Rector for Innovation and Territorial Strategies, and she is a member of the Institute of Nanoscience of the CNR (National Research Council). She is the author of more than 130 papers in international journals, conference proceedings, and book chapters, and holds 8 patents. Her main research interests are innovative materials and electronic devices for wearable electronics and bioengineering. She is involved in several international and national research projects focussed on applications of sensors and biosensors based on organic electronics technology.

Jonathan Hargreaves: Auralising rooms from computer models

When: Tuesday 4th October, 2016 @ 5:10 PM

Where: Room 4.31/4.33, Informatics Forum, University of Edinburgh

Seminar Title

Auralising rooms from computer models: Seeking synergies between geometric and wave-based techniques

Seminar Speaker

Dr. Jonathan Hargreaves (Acoustics Research Centre, University of Salford).

Seminar Abstract

Auralisation of a space requires measured or simulated data covering the full audible frequency spectrum. For numerical simulation this is extremely challenging, since that bandwidth covers many octaves over which the wavelength changes from being large with respect to features of the space to being comparatively much smaller. Hence the most efficient way of describing acoustic propagation changes from wave descriptions at low frequencies to geometric ray and sound-beam energy descriptions at high frequencies, and these differences are reflected in the disparate classes of algorithms that are applied. Geometric propagation assumptions yield efficient algorithms, but the maximum accuracy they can achieve is limited by how well the geometric assumption represents sound propagation in a given scenario; this compromises their accuracy at low frequencies in particular. Methods that directly model wave effects are more accurate but have a computational cost that scales with problem size and frequency, thereby limiting them to small or low-frequency scenarios. Hence it is often necessary to operate two algorithms in parallel, handling the different bandwidths. Due to their differing formulations, however, combining the output data can be a rather arbitrary process.

This talk will attempt to address this disparity by examining synergies between the Boundary Element Method (BEM) and geometric approaches. Specifically, it will focus on how the use of appropriately chosen oscillatory basis functions in BEM can produce leading-order geometric behaviour at high frequencies, and will discuss how these synergies might ultimately lead to a single unified full-bandwidth algorithm for the early-time response.
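As a rough illustration of the kind of oscillatory enrichment referred to here (a general sketch of hybrid numerical-asymptotic boundary element bases, not necessarily the specific formulation used in the talk), the boundary unknown can be written as
\[
v(\mathbf{x}) \;\approx\; \sum_{m=1}^{M} V_m(\mathbf{x})\, e^{\mathrm{i} k \psi_m(\mathbf{x})},
\]
where the phases \(\psi_m(\mathbf{x})\) are supplied by geometric acoustics (e.g. incident and reflected ray directions) and the amplitudes \(V_m\) are slowly varying functions represented by low-order piecewise polynomials. The mesh then only has to resolve the amplitudes rather than the oscillation at wavenumber \(k\), which is how leading-order geometric behaviour can emerge from a BEM basis at high frequencies.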

Speaker Bio

Jonathan was awarded an MEng in Engineering & Computing Science from the University of Oxford in 2000 and a PhD in Acoustic Engineering from the University of Salford in 2007, where he remains as a Lecturer in Acoustics, Audio and Broadcast Engineering. The title of his PhD thesis was “Time Domain Boundary Element Methods for Room Acoustics” and this remains influential in his current research areas. Jonathan has had the pleasure of being involved in a wide variety of public engagement activities, including a number of TV appearances, and is passionate about performing, engineering and enjoying live music. He was awarded the UK Institute of Acoustics’ Tyndall Medal in September 2016.