Li Su: AI and recent developments in music information retrieval
When: Wednesday 9th October 2019 @ 5:10 PM
Where: The Atrium (G.10), Alison House, 12 Nicholson Sq, University of Edinburgh
Title: AI and recent developments in music information retrieval
Speakers: Dr Li Su (Music and Culture Technology Laboratory, Institute of Information Science, Academia Sinica, Taiwan), Dr Yu-Fen Huang (University of Edinburgh), and Tsung-Ping Chen (Music and Culture Technology Laboratory, Institute of Information Science, Academia Sinica, Taiwan)
Abstract
In this talk, we will discuss how to apply deep learning approaches to several challenging tasks in music information retrieval (MIR), including automatic music transcription, musical body movement analysis, and automatic chord recognition. Automatic music transcription (AMT) refers to the process of converting music recordings into symbolic representations. Since music transcription is by no means easy even for humans, AMT has been one of the core challenges in MIR. Thanks to recent advances in computing power and deep learning, more and more AMT solutions are becoming applicable in real-world cases. In this talk, we will first discuss the issues of solving the AMT problem from both signal processing and machine learning perspectives. Then, we will introduce proposed solutions for transcribing piano music, singing voice, and non-Western music. Possible applications, challenges, and future research directions will also be discussed.
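To give a loose sense of the "audio to symbolic" mapping at the heart of AMT, the toy sketch below (not the speakers' method; real AMT systems use far more sophisticated signal processing and deep learning) estimates the pitch of a synthetic tone by autocorrelation and converts it to a MIDI note number, i.e. one elementary step from a recording toward a symbolic representation:

```python
import numpy as np

SR = 16000  # sample rate in Hz, chosen arbitrarily for this sketch

def estimate_f0(frame, sr):
    """Estimate the fundamental frequency of a short audio frame
    by locating the peak of its autocorrelation function."""
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    # Search only lags corresponding to plausible pitches (50-2000 Hz).
    min_lag, max_lag = sr // 2000, sr // 50
    lag = min_lag + np.argmax(corr[min_lag:max_lag])
    return sr / lag

def f0_to_midi(f0):
    """Convert a frequency in Hz to the nearest MIDI note number."""
    return int(round(69 + 12 * np.log2(f0 / 440.0)))

# A short frame of a 440 Hz sine wave (the note A4).
t = np.arange(2048) / SR
tone = np.sin(2 * np.pi * 440.0 * t)
print(f0_to_midi(estimate_f0(tone, SR)))  # A4 -> MIDI note 69
```

This handles only a clean monophonic tone; the talk's point is precisely that real music (polyphony, expressive timing, diverse timbres) makes the full transcription problem hard.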
Computational musicology is an appealing area in which to apply MIR techniques. In this talk, the potential to perform music analysis using computational and deep learning approaches is discussed. Our recent work analyses musical movement and identifies salient features in orchestral conducting movement using recurrent neural networks (RNNs). Our work applying deep learning approaches to model chords and harmony in tonal music will also be introduced.
Speaker Bios
Dr Li Su received the B.S. degree in electronic engineering and mathematics from National Taiwan University in 2008, and the Ph.D. degree in communication engineering from National Taiwan University in 2012. He served as a postdoctoral research fellow in the Research Center for Information Technology Innovation, Academia Sinica, from 2012 to 2016. Since 2017, he has served as an Assistant Research Fellow in the Institute of Information Science, Academia Sinica. His research focuses on signal processing and machine learning for music information retrieval. His ongoing projects include automatic music transcription, style transfer, and AI-based music-visual animation. He has been a technical committee member of the International Society for Music Information Retrieval (ISMIR) since 2014.
Dr Yu-Fen Huang is a post-doctoral research fellow at the Music and Culture Technology Laboratory, Institute of Information Science, Academia Sinica, Taiwan. Her Ph.D. research at the University of Edinburgh (UK), supervised jointly in Music and in Sport Science, applied biomechanics and motion capture technology to the analysis of orchestral conducting movement. She holds an M.A. in Musicology (National Taiwan University) and a B.A. in Music (National Taiwan Normal University). Her current research applies recurrent neural networks to explore stylistic features in different orchestral conductors’ conducting.
Tsung-Ping Chen is a research assistant at the Music and Culture Technology Laboratory, Institute of Information Science, Academia Sinica, Taiwan. His research applies deep learning approaches to model chords and harmony in tonal music, e.g. automatic chord recognition and chord generation. He holds an M.A. in Musicology (National Taiwan University), where his project studied the correlation between music and human physiology. He also has a B.S. in Engineering Science.