
About

I am a Senior Researcher in CTM, where I coordinate the Sound and Music Computing research group. My main area of research is the application of digital signal processing and machine learning techniques to music information retrieval (MIR). My primary research interest has been the automatic extraction of rhythmic structure from music signals; however, I have also undertaken research on evaluation methodologies, music therapy, sparse signal processing methods, object-based coding of music, and the analysis of groove.

My current activity, as an FCT Investigator, focuses on the emerging field of creative-MIR, where I am exploring techniques for the perception and measurement of music compatibility for automatic music remixing and recombination. I am also an Associate Editor for the IEEE/ACM Transactions on Audio, Speech, and Language Processing.


Details

  • Name

    Matthew Davies
  • Role

    Senior Researcher
  • Since

    18th April 2011
Publications

2021

On Filter Generalization for Music Bandwidth Extension Using Deep Neural Networks

Authors
Sulun, S; Davies, MEP;

Publication
IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING

Abstract
In this paper, we address a subtopic of the broad domain of audio enhancement, namely musical audio bandwidth extension. We formulate the bandwidth extension problem using deep neural networks, where a band-limited signal is provided as input to the network, with the goal of reconstructing a full-bandwidth output. Our main contribution centers on the impact of the choice of low-pass filter when training and subsequently testing the network. For two different state-of-the-art deep architectures, ResNet and U-Net, we demonstrate that when the training and testing filters are matched, improvements in signal-to-noise ratio (SNR) of up to 7 dB can be obtained. However, when these filters differ, the improvement falls considerably and under some training conditions results in a lower SNR than the band-limited input. To circumvent this apparent overfitting to filter shape, we propose a data augmentation strategy which utilizes multiple low-pass filters during training and leads to improved generalization to unseen filtering conditions at test time.
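
A minimal sketch of the data augmentation idea described in this abstract, assuming a Python/SciPy setting: each full-bandwidth training example is band-limited on the fly with a randomly parameterised low-pass filter, so the network cannot overfit to a single filter shape. The filter families, cutoff range, and orders below are illustrative assumptions, not the paper's exact configuration.

    import numpy as np
    from scipy.signal import butter, cheby1, sosfiltfilt

    FILTER_FAMILIES = ["butter", "cheby1"]  # assumed set of filter families

    def random_lowpass(audio, sr=16000):
        """Band-limit a 1-D signal with a randomly parameterised low-pass filter."""
        cutoff = np.random.uniform(2000, 4000)   # cutoff frequency in Hz (assumed range)
        order = np.random.choice([2, 4, 6, 8])   # filter order
        family = np.random.choice(FILTER_FAMILIES)
        if family == "butter":
            sos = butter(order, cutoff, btype="low", fs=sr, output="sos")
        else:
            sos = cheby1(order, rp=1, Wn=cutoff, btype="low", fs=sr, output="sos")
        return sosfiltfilt(sos, audio)

    # During training, each full-bandwidth target is band-limited on the fly:
    # x_input = random_lowpass(x_target); the network learns x_input -> x_target.

At test time only a band-limited input is available, produced by a filter the network may never have seen during training, which is exactly the generalisation gap the augmentation aims to close.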

2020

TIV.lib: an open-source library for the tonal description of musical audio

Authors
Ramires, A; Bernardes, G; Davies, MEP; Serra, X;

Publication
CoRR

Abstract
In this paper, we present TIV.lib, an open-source library for the content-based tonal description of musical audio signals. Its main novelty relies on the perceptually-inspired Tonal Interval Vector space based on the Discrete Fourier Transform, from which multiple instantaneous and global representations, descriptors and metrics are computed, e.g., harmonic change, dissonance, diatonicity, and musical key. The library is cross-platform, implemented in Python and the graphical programming language Pure Data, and can be used in both online and offline scenarios. Of note is its potential for enhanced Music Information Retrieval, where tonal descriptors sit at the core of numerous methods and applications.
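
To illustrate the underlying representation (a sketch, not TIV.lib's actual API), the fragment below computes a Tonal Interval Vector as the weighted DFT of a normalised 12-bin chroma vector. The per-coefficient weights and the example chords are assumptions for demonstration only.

    import numpy as np

    # Illustrative weights for DFT coefficients k = 1..6 (assumed values).
    WEIGHTS = np.array([2, 11, 17, 16, 19, 7], dtype=float)

    def tonal_interval_vector(chroma):
        """Map a 12-bin chroma vector to a 6-D complex Tonal Interval Vector."""
        chroma = np.asarray(chroma, dtype=float)
        chroma = chroma / chroma.sum()        # energy-normalise the pitch classes
        spectrum = np.fft.fft(chroma)         # 12-point DFT of the chroma vector
        return WEIGHTS * spectrum[1:7]        # keep weighted coefficients k = 1..6

    # Distance in the Tonal Interval Vector space as a simple proxy for
    # harmonic change between a C major and an A minor triad (toy chroma vectors).
    c_major = [1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0]
    a_minor = [1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0]
    print(np.linalg.norm(tonal_interval_vector(c_major) - tonal_interval_vector(a_minor)))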

2019

Temporal convolutional networks for musical audio beat tracking

Authors
Davies, MEP; Böck, S;

Publication
European Signal Processing Conference

Abstract
We propose the use of Temporal Convolutional Networks (TCNs) for audio-based beat tracking. By contrasting our convolutional approach with the current state-of-the-art recurrent approach using Bidirectional Long Short-Term Memory, we demonstrate three highly promising attributes of TCNs for music analysis, namely: i) they achieve state-of-the-art performance on a wide range of existing beat tracking datasets; ii) they are well suited to parallelisation and thus can be trained efficiently even on very large training data; and iii) they require a small number of weights.
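
As a rough illustration of the idea (a sketch, not the paper's published architecture), the following PyTorch fragment stacks dilated 1-D convolutional blocks with residual connections, so a small number of weights covers a long temporal context, and ends in a frame-wise beat activation; all layer sizes are assumptions.

    import torch
    import torch.nn as nn

    class TCNBlock(nn.Module):
        """One dilated convolutional block of a Temporal Convolutional Network."""
        def __init__(self, channels, kernel_size=5, dilation=1):
            super().__init__()
            pad = (kernel_size // 2) * dilation        # 'same' padding for odd kernels
            self.conv = nn.Conv1d(channels, channels, kernel_size,
                                  padding=pad, dilation=dilation)
            self.act = nn.ELU()
            self.drop = nn.Dropout(0.1)

        def forward(self, x):                          # x: (batch, channels, frames)
            return x + self.drop(self.act(self.conv(x)))   # residual connection

    # Exponentially growing dilations widen the receptive field cheaply; a final
    # 1x1 convolution maps the features to a per-frame beat activation.
    tcn = nn.Sequential(
        *[TCNBlock(16, dilation=2 ** d) for d in range(8)],
        nn.Conv1d(16, 1, kernel_size=1),
        nn.Sigmoid(),
    )
    beat_activation = tcn(torch.randn(1, 16, 3000))    # output shape (1, 1, 3000)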

2019

Seed: Resynthesizing environmental sounds from examples

Authors
Bernardes, G; Aly, L; Davies, MEP;

Publication
SMC 2016 - 13th Sound and Music Computing Conference, Proceedings

Abstract
In this paper we present SEED, a generative system capable of arbitrarily extending recorded environmental sounds while preserving their inherent structure. The system architecture is grounded in concepts from concatenative sound synthesis and includes three top-level modules for segmentation, analysis, and generation. An input audio signal is first temporally segmented into a collection of audio segments, which are then reduced into a dictionary of audio classes by means of an agglomerative clustering algorithm. This representation, together with a concatenation cost between audio segment boundaries, is finally used to generate sequences of audio segments of arbitrarily long duration. The system output can be varied in the generation process through simple yet effective parametric control, yielding natural, temporally coherent, and varied audio renderings of environmental sounds.
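
A highly simplified sketch of that pipeline, assuming per-segment feature vectors have already been extracted: segments are grouped into a small dictionary of classes with agglomerative clustering, and a new sequence is generated by repeatedly choosing a segment with a low concatenation cost relative to the previous one. All features, parameters, and the candidate-selection rule are placeholders, not the system's actual implementation.

    import numpy as np
    from sklearn.cluster import AgglomerativeClustering

    rng = np.random.default_rng(0)
    features = rng.normal(size=(50, 13))   # toy descriptor per audio segment

    # 1) Reduce the segments to a small dictionary of audio classes.
    labels = AgglomerativeClustering(n_clusters=8).fit_predict(features)

    # 2) Generate a long sequence by following low concatenation costs.
    def concatenation_cost(prev_idx, cand_idx):
        return np.linalg.norm(features[prev_idx] - features[cand_idx])

    sequence = [0]
    for _ in range(200):
        candidates = np.flatnonzero(labels == labels[sequence[-1]])   # same class
        costs = np.array([concatenation_cost(sequence[-1], c) for c in candidates])
        nxt = candidates[rng.choice(np.argsort(costs)[:3])]           # one of the 3 cheapest
        sequence.append(int(nxt))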

2019

Tapping Along to the Difficult Ones: Leveraging User-Input for Beat Tracking in Highly Expressive Musical Content

Authors
Pinto, AS; Davies, MEP;

Publication
Perception, Representations, Image, Sound, Music - 14th International Symposium, CMMR 2019, Marseille, France, October 14-18, 2019, Revised Selected Papers

Abstract
We explore the task of computational beat tracking for musical audio signals from the perspective of putting an end-user directly in the processing loop. Unlike existing “semi-automatic” approaches for beat tracking, where users may select from among several possible outputs to determine the one that best suits their aims, in our approach we examine how high-level user input could guide the manner in which the analysis is performed. More specifically, we focus on the perceptual difficulty of tapping the beat, which has previously been associated with the musical properties of expressive timing and slow tempo. Since musical examples with these properties have been shown to be poorly addressed even by state-of-the-art approaches to beat tracking, we re-parameterise an existing deep learning based approach to enable it to more reliably track highly expressive music. In a small-scale listening experiment we highlight two principal trends: i) that users are able to consistently disambiguate musical examples which are easy to tap to and those which are not; and in turn ii) that users preferred the beat tracking output of an expressive-parameterised system to the default parameterisation for highly expressive musical excerpts.
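
One plausible way to realise such a re-parameterisation, sketched here with the madmom beat tracking library: its dynamic Bayesian network post-processor exposes a tempo range and a transition parameter, and relaxing them gives the tracker more freedom to follow slow, expressively timed music. The specific values below are illustrative and not necessarily those used in the paper.

    from madmom.features.beats import RNNBeatProcessor, DBNBeatTrackingProcessor

    # Frame-wise beat activation from a pre-trained network.
    activation = RNNBeatProcessor()("expressive_piece.wav")

    # Default-style parameterisation: moderate tempo range, fairly steady timing.
    default_tracker = DBNBeatTrackingProcessor(min_bpm=55, max_bpm=215,
                                               transition_lambda=100, fps=100)

    # Expressive parameterisation (illustrative values): allow slower tempi and a
    # lower transition_lambda, i.e. more tolerance for tempo change between beats.
    expressive_tracker = DBNBeatTrackingProcessor(min_bpm=30, max_bpm=160,
                                                  transition_lambda=20, fps=100)

    beats_default = default_tracker(activation)
    beats_expressive = expressive_tracker(activation)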

Supervised Theses

2022

High-Assurance, High-Speed Post-Quantum Cryptography in Safe Rust

Author
Leonardo Fernandes Moura

Institution
UP-FEUP

2022

CNN-LSTM-based models to predict the heart rate using PPG signal from wearables during physical exercise

Author
Lucas Tomás Martins Ribeiro

Institution
UP-FEUP

2021

An optimization framework to estimate the active and reactive power flexibility in the TSO-DSO interface

Author
João Pedro Vasques Vieira da Silva

Institution
UP-FEUP

2021

Exploring Azure: Internet of Things and Edge

Author
Rui Alexandre Farinha Fernandes Balau

Institution
UP-FCUP

2017

Content-Based Creative Manipulation of Music Signals

Author
António Humberto Sá Pinto

Institution
UP-FEUP