
Publications by Matthew Davies

2014

Syncopation creates the sensation of groove in synthesized music examples

Authors
Sioros, G; Miron, M; Davies, M; Gouyon, F; Madison, G;

Publication
FRONTIERS IN PSYCHOLOGY

Abstract
In order to better understand the musical properties which elicit an increased sensation of wanting to move when listening to music (groove), we investigate the effect of adding syncopation to simple piano melodies, under the hypothesis that syncopation is correlated with groove. Across two experiments we examine listeners' experience of groove in response to synthesized musical stimuli covering a range of syncopation levels and densities of musical events, generated according to formal rules implemented by a computer algorithm that shifts musical events from strong to weak metrical positions. Results indicate that moderate levels of syncopation lead to significantly higher groove ratings than melodies without any syncopation or with the maximum possible syncopation. A comparison between the various transformations and the way they were rated shows that there is no simple relation between syncopation magnitude and groove.
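The transformation described in the abstract can be illustrated with a minimal sketch: shifting note onsets from strong to weak metrical positions on a 16th-note grid in 4/4. The grid size, the metrical weights, and the anticipation-style shift are illustrative assumptions, not the authors' exact rules.

```python
# Metrical weight per 16th-note position in one 4/4 bar
# (higher = metrically stronger). These weights are illustrative.
WEIGHTS = [4, 1, 2, 1, 3, 1, 2, 1, 3, 1, 2, 1, 3, 1, 2, 1]

def syncopate(onsets):
    """Shift each onset on a strong position one step earlier, onto a
    weaker position, if that slot is free (an anticipation). Position 0
    wraps to the end of the previous bar for simplicity."""
    result = set(onsets)
    for pos in sorted(onsets):
        target = (pos - 1) % 16
        if WEIGHTS[target] < WEIGHTS[pos] and target not in result:
            result.discard(pos)
            result.add(target)
    return sorted(result)
```

Applied to four quarter notes `[0, 4, 8, 12]`, every onset moves onto the preceding weak 16th, yielding a fully anticipated pattern.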

2016

Psychoacoustic Approaches for Harmonic Music Mixing

Authors
Gebhardt, RB; Davies, MEP; Seeber, BU;

Publication
APPLIED SCIENCES-BASEL

Abstract
The practice of harmonic mixing is a technique used by DJs for the beat-synchronous and harmonic alignment of two or more pieces of music. In this paper, we present a new harmonic mixing method based on psychoacoustic principles. Unlike existing commercial DJ-mixing software, which determines compatible matches between songs via key estimation and harmonic relationships in the circle of fifths, our approach is built around the measurement of musical consonance. Given two tracks, we first extract a set of partials using a sinusoidal model and average this information over sixteenth note temporal frames. By scaling the partials of one track over +/- 6 semitones (in 1/8th semitone steps), we determine the pitch-shift that maximizes the consonance of the resulting mix. For this, we measure the consonance between all combinations of dyads within each frame according to psychoacoustic models of roughness and pitch commonality. To evaluate our method, we conducted a listening test where short musical excerpts were mixed together under different pitch shifts and rated according to consonance and pleasantness. Results demonstrate that sensory roughness computed from a small number of partials in each of the musical audio signals constitutes a reliable indicator to yield maximum perceptual consonance and pleasantness ratings by musically-trained listeners.
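The pitch-shift search described above can be sketched in a few lines. The roughness model here is Sethares' approximation of the Plomp-Levelt curves, used as a stand-in for the paper's psychoacoustic models; the partial lists and the exhaustive scan over +/- 6 semitones in 1/8-semitone steps follow the abstract, but the exact model and weighting are assumptions.

```python
import math
from itertools import product

def dyad_roughness(f1, a1, f2, a2):
    """Sensory roughness of one pair of partials (Sethares'
    parameterisation of the Plomp-Levelt dissonance curves)."""
    fmin = min(f1, f2)
    s = 0.24 / (0.021 * fmin + 19.0)
    d = abs(f2 - f1)
    return a1 * a2 * (math.exp(-3.5 * s * d) - math.exp(-5.75 * s * d))

def mix_roughness(partials_a, partials_b, semitone_shift):
    """Total roughness over all cross-track dyads after pitch-shifting
    track B by `semitone_shift` semitones."""
    ratio = 2.0 ** (semitone_shift / 12.0)
    shifted_b = [(f * ratio, a) for f, a in partials_b]
    return sum(dyad_roughness(f1, a1, f2, a2)
               for (f1, a1), (f2, a2) in product(partials_a, shifted_b))

def best_shift(partials_a, partials_b):
    """Scan +/-6 semitones in 1/8-semitone steps and return the shift
    that minimises roughness (i.e. maximises sensory consonance)."""
    shifts = [i / 8.0 for i in range(-48, 49)]
    return min(shifts, key=lambda s: mix_roughness(partials_a, partials_b, s))
```

For two tracks whose partials already coincide, the search correctly prefers a zero shift, since any displacement introduces beating between the nearby partials.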

2013

AutoMashUpper: An Automatic Multi-Song Mashup System

Authors
Davies, MEP; Hamel, P; Yoshii, K; Goto, M;

Publication
Proceedings of the 14th International Society for Music Information Retrieval Conference, ISMIR 2013, Curitiba, Brazil, November 4-8, 2013

Abstract

2014

Real-time percussive beat tracking

Authors
Robertson, A; Davies, M; Stark, A;

Publication
Proceedings of the AES International Conference

Abstract
We present a real-time percussive beat tracking algorithm for synchronisation within live music. A percussive detection function representing the percussive component of the audio input is created using an efficient median-filtering method. Dynamic programming techniques are used to predict the beat locations and update a cumulative beat function. The percussive component of the spectrogram can also be used to create functions which correlate with kick and snare events, thereby generating a prediction of drum pattern events.
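The median-filtering step can be sketched as follows, along the lines of the well-known harmonic/percussive separation idea (Fitzgerald, 2010) rather than the authors' exact implementation: percussive events appear as vertical lines in the spectrogram, so a median filter across frequency preserves them while a filter across time preserves harmonic content; the filter lengths and soft mask here are assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def percussive_detection_function(spec):
    """Given a magnitude spectrogram `spec` of shape (n_bins, n_frames),
    enhance percussive energy by median filtering, then sum the masked
    magnitudes in each frame to obtain a detection function."""
    harm = median_filter(spec, size=(1, 17))   # smooth across time
    perc = median_filter(spec, size=(17, 1))   # smooth across frequency
    # Soft Wiener-style mask favouring the percussive component.
    mask = perc**2 / (harm**2 + perc**2 + 1e-12)
    return (spec * mask).sum(axis=0)
```

A broadband impulse (a drum hit) fills one column of the spectrogram, so the detection function peaks sharply at that frame while sustained tones are suppressed.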

2014

AutoMashUpper: Automatic Creation of Multi-Song Music Mashups

Authors
Davies, MEP; Hamel, P; Yoshii, K; Goto, M;

Publication
IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING

Abstract
In this paper we present a system, AutoMashUpper, for making multi-song music mashups. Central to our system is a measure of "mashability" calculated between phrase sections of an input song and songs in a music collection. We define mashability in terms of harmonic and rhythmic similarity and a measure of spectral balance. The principal novelty in our approach centres on the determination of how elements of songs can be made to fit together using key transposition and tempo modification, rather than based on their unaltered properties. In this way, the properties of two songs used to model their mashability can be altered with respect to transformations performed to maximize their perceptual compatibility. AutoMashUpper has a user interface to allow users to control the parameterization of the mashability estimation. It allows users to define ranges for key shifts and tempo, and to add, change or remove elements from the created mashups. We evaluate AutoMashUpper by its ability to reliably segment music signals into phrase sections, and also via a listening test to examine the relationship between estimated mashability and user enjoyment.
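A much-simplified sketch of the mashability idea: search over the 12 key transpositions (rotations of a beat-synchronous chromagram) for the one maximising harmonic similarity, and penalise the tempo change needed to beat-match the two sections. The cosine similarity, the tempo-ratio penalty, and the omission of the spectral-balance term are all simplifying assumptions, not the paper's formulation.

```python
import numpy as np

def mashability(chroma_a, chroma_b, tempo_a, tempo_b):
    """chroma_a, chroma_b: (12, n_beats) beat-synchronous chromagrams.
    Returns (score, best key shift in semitones)."""
    def cos_sim(x, y):
        return float(np.dot(x.ravel(), y.ravel()) /
                     (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12))

    # Tempo compatibility: 1.0 when tempi match, falling with the ratio.
    tempo_score = min(tempo_a, tempo_b) / max(tempo_a, tempo_b)

    best_score, best_shift = -1.0, 0
    for shift in range(12):  # try every key transposition of track B
        score = cos_sim(chroma_a, np.roll(chroma_b, shift, axis=0))
        if score > best_score:
            best_score, best_shift = score, shift
    return best_score * tempo_score, best_shift
```

For two single-pitch-class chromagrams a fifth apart, the search recovers the transposition that brings them into unison.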

2014

Multi-Feature Beat Tracking

Authors
Zapata, JR; Davies, MEP; Gomez, E;

Publication
IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING

Abstract
A recent trend in the field of beat tracking for musical audio signals has been to explore techniques for measuring the level of agreement and disagreement between a committee of beat tracking algorithms. By using beat tracking evaluation methods to compare all pairwise combinations of beat tracker outputs, it has been shown that selecting the beat tracker which most agrees with the remainder of the committee, on a song-by-song basis, leads to improved performance which surpasses the accuracy of any individual beat tracker used on its own. In this paper we extend this idea towards presenting a single, standalone beat tracking solution which can exploit the benefit of mutual agreement without the need to run multiple separate beat tracking algorithms. In contrast to existing work, we re-cast the problem as one of selecting between the beat outputs resulting from a single beat tracking model with multiple, diverse input features. Through extended evaluation on a large annotated database, we show that our multi-feature beat tracker can outperform the state of the art, and thereby demonstrate that there is sufficient diversity in input features for beat tracking, without the need for multiple tracking models.
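The committee-selection idea above can be sketched as follows. The agreement measure here is a crude tolerance-window match rather than the continuity-based evaluation measures used in the beat tracking literature, and the selection rule (highest mean agreement with the rest of the committee) follows the abstract's description.

```python
def beat_agreement(beats_a, beats_b, tol=0.07):
    """Fraction of beats in `beats_a` matched by a beat in `beats_b`
    within `tol` seconds (a simplified stand-in for standard beat
    tracking evaluation measures)."""
    if not beats_a:
        return 0.0
    matched = sum(1 for t in beats_a
                  if any(abs(t - u) <= tol for u in beats_b))
    return matched / len(beats_a)

def select_by_mutual_agreement(candidates):
    """Given beat sequences from one tracking model run on different
    input features, return the index of the sequence that agrees most
    with the remainder of the committee."""
    def mean_agreement(i):
        others = [c for j, c in enumerate(candidates) if j != i]
        return sum(beat_agreement(candidates[i], o) for o in others) / len(others)
    return max(range(len(candidates)), key=mean_agreement)
```

With two near-identical candidate sequences and one shifted off the beat, the selection picks one of the agreeing pair, mirroring how mutual agreement filters out the outlier feature.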
