Publications

2016

A multi-level tonal interval space for modelling pitch relatedness and musical consonance

Authors
Bernardes, G; Cocharro, D; Caetano, M; Guedes, C; Davies, MEP;

Publication
JOURNAL OF NEW MUSIC RESEARCH

Abstract
In this paper we present a 12-dimensional tonal space in the context of the Tonnetz, Chew's Spiral Array, and Harte's 6-dimensional Tonal Centroid Space. The proposed Tonal Interval Space is calculated as the weighted Discrete Fourier Transform of normalized 12-element chroma vectors, which we represent as six circles covering the set of all possible pitch intervals in the chroma space. By weighting the contribution of each circle (and hence pitch interval) independently, we can create a space in which angular and Euclidean distances among pitches, chords, and regions concur with music theory principles. Furthermore, the Euclidean distance of pitch configurations from the centre of the space acts as an indicator of consonance.
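The construction described above can be sketched in a few lines of Python: take the DFT of a normalized 12-bin chroma vector, keep coefficients k = 1..6 (the six interval circles), weight each one, and read the distance from the centre of the space as a consonance indicator. The per-circle weights below are illustrative placeholders, not the values published in the paper.

```python
import numpy as np

def tonal_interval_vector(chroma, weights=(3.0, 8.0, 11.5, 15.0, 14.5, 7.5)):
    """Weighted DFT of a normalized 12-bin chroma vector.

    Keeps DFT coefficients k = 1..6, one per interval circle. The six
    complex values span a 12-dimensional real space. The weights are
    illustrative, not the published ones.
    """
    chroma = np.asarray(chroma, dtype=float)
    chroma = chroma / chroma.sum()             # normalize to unit mass
    spectrum = np.fft.fft(chroma)              # DFT over the 12 pitch classes
    return np.array([w * spectrum[k] for k, w in enumerate(weights, start=1)])

def consonance(chroma):
    """Distance from the centre of the space: larger means more consonant."""
    return np.linalg.norm(tonal_interval_vector(chroma))

# A C-major triad lies farther from the centre than a chromatic cluster,
# so it is ranked as more consonant.
c_major = np.zeros(12); c_major[[0, 4, 7]] = 1
cluster = np.zeros(12); cluster[[0, 1, 2]] = 1
```

Note that without the weights the triad and the cluster would sit at the same distance from the centre (both chroma vectors carry the same energy); the per-interval weighting is what aligns the geometry with music-theoretic consonance.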

2016

Computer-aided musical orchestration using an artificial immune system

Authors
Abreu, J; Caetano, M; Penha, R;

Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Abstract
The aim of computer-aided musical orchestration is to find a combination of musical instrument sounds that approximates a target sound. The difficulty arises from the complexity of timbre perception and the combinatorial explosion of possible instrument mixtures. Estimating perceptual similarity between sounds requires a model capable of capturing the multidimensional perception of timbre, among other perceptual qualities of sound. In this work, we use an artificial immune system (AIS) called opt-aiNet to search for combinations of musical instrument sounds that minimize the distance to a target sound encoded in a fitness function. Opt-aiNet is capable of finding multiple solutions in parallel while preserving diversity, proposing alternative orchestrations for the same target sound that differ among themselves. We performed a listening test to evaluate the subjective similarity and diversity of the orchestrations.
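The core opt-aiNet loop (clone, mutate in proportion to affinity, select, suppress near-duplicates) can be sketched on a toy version of the problem. Here candidates are gain vectors over a bank of made-up instrument "spectra", and the fitness function is the distance between the candidate mixture and a target spectrum. Everything below (the data, the mutation schedule, the suppression threshold) is a simplified illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical instrument "spectra" and a target built from two of them.
instruments = rng.random((5, 32))
target = 0.6 * instruments[1] + 0.4 * instruments[3]

def fitness(weights):
    """Distance between the candidate mixture and the target (lower = better)."""
    return np.linalg.norm(weights @ instruments - target)

def clone_and_mutate(cell, n_clones=8):
    """opt-aiNet idea: mutation strength shrinks as the parent improves."""
    scale = 0.3 * fitness(cell) + 1e-3
    clones = cell + rng.normal(0.0, scale, size=(n_clones, cell.size))
    return np.clip(clones, 0.0, 1.0)        # keep gains in a valid range

population = rng.random((10, 5))
for _ in range(200):
    offspring = np.vstack([clone_and_mutate(c) for c in population])
    pool = np.vstack([population, offspring])
    pool = pool[np.argsort([fitness(c) for c in pool])]
    # Network suppression: keep the best, drop cells too close to a kept one,
    # which is what preserves several distinct orchestrations in parallel.
    kept = [pool[0]]
    for c in pool[1:]:
        if all(np.linalg.norm(c - k) > 0.2 for k in kept):
            kept.append(c)
        if len(kept) == 10:
            break
    population = np.array(kept)

best = population[0]
```

The suppression step is the part that distinguishes this family of algorithms from a plain evolutionary search: instead of collapsing onto one optimum, the surviving cells remain mutually distant, so the final population proposes alternative mixtures rather than ten copies of the same one.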

2016

Full-Band Quasi-Harmonic Analysis and Synthesis of Musical Instrument Sounds with Adaptive Sinusoids

Authors
Caetano, M; Kafentzis, GP; Mouchtaris, A; Stylianou, Y;

Publication
APPLIED SCIENCES-BASEL

Abstract
Sinusoids are widely used to represent the oscillatory modes of musical instrument sounds in both analysis and synthesis. However, musical instrument sounds feature transients and instrumental noise that are poorly modeled with quasi-stationary sinusoids, requiring spectral decomposition and further dedicated modeling. In this work, we propose a full-band representation that fits sinusoids across the entire spectrum. We use the extended adaptive Quasi-Harmonic Model (eaQHM) to iteratively estimate amplitude- and frequency-modulated (AM-FM) sinusoids able to capture challenging features such as sharp attacks, transients, and instrumental noise. We use the signal-to-reconstruction-error ratio (SRER) as the objective measure for the analysis and synthesis of 89 musical instrument sounds from different instrumental families. We compare against quasi-stationary sinusoids and exponentially damped sinusoids. First, we show that the SRER increases with adaptation in eaQHM. Then, we show that full-band modeling with eaQHM captures partials at the higher frequency end of the spectrum that are neglected by spectral decomposition. Finally, we demonstrate that a frame size equal to three periods of the fundamental frequency results in the highest SRER with AM-FM sinusoids from eaQHM. A listening test confirmed that the musical instrument sounds resynthesized from full-band analysis with eaQHM are virtually perceptually indistinguishable from the original recordings.
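The objective measure used above is easy to state concretely: the SRER is the ratio, in dB, between the energy of the original signal and the energy of the reconstruction error. The sketch below computes it for a toy AM-FM partial, comparing a quasi-stationary sinusoid against a hand-made adaptive estimate. This is not eaQHM itself; the signal and all parameters are invented for illustration.

```python
import numpy as np

def srer(original, resynthesis):
    """Signal-to-reconstruction-error ratio in dB."""
    err = original - resynthesis
    return 20.0 * np.log10(np.std(original) / np.std(err))

# Toy AM-FM partial: an exponential attack and a slow upward frequency glide.
fs = 16000
t = np.arange(fs) / fs
amp = 1.0 - np.exp(-t * 30.0)
freq = 440.0 + 40.0 * t
phase = 2.0 * np.pi * np.cumsum(freq) / fs
x = amp * np.sin(phase)

# A quasi-stationary sinusoid (fixed amplitude and frequency) misses the
# attack and drifts out of phase as the frequency glides...
x_qs = np.sin(2.0 * np.pi * 440.0 * t)
# ...whereas an AM-FM sinusoid can track both (here: the true parameters,
# slightly mis-scaled so the error is nonzero).
x_amfm = 0.99 * amp * np.sin(phase)
```

Because the quasi-stationary model decorrelates from the chirping partial, its SRER is low (near or below 0 dB on this toy signal), while the adaptive AM-FM estimate scores far higher, which is the effect the adaptation in eaQHM exploits.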

2015

Automatic Generation of Chord Progressions with an Artificial Immune System

Authors
Navarro, M; Caetano, M; Bernardes, G; de Castro, LN; Corchado, JM;

Publication
EVOLUTIONARY AND BIOLOGICALLY INSPIRED MUSIC, SOUND, ART AND DESIGN (EVOMUSART 2015)

Abstract
Chord progressions are widely used in music. The automatic generation of chord progressions can be challenging because it depends on many factors, such as the musical context, personal preference, and aesthetic choices. In this work, we propose a penalty function that encodes musical rules to automatically generate chord progressions. Then we use an artificial immune system (AIS) to minimize the penalty function when proposing candidates for the next chord in a sequence. The AIS is capable of finding multiple optima in parallel, resulting in several different chords as appropriate candidates. We performed a listening test to evaluate the chords subjectively and validate the penalty function. We found that chords with a low penalty value were considered better candidates than chords with higher penalty values.
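A penalty function of the kind described above can be sketched as a weighted sum of rule violations: each musical rule contributes a term, and the next-chord candidates are ranked by their total penalty. The rules and weights below are illustrative placeholders, not the ones encoded in the paper.

```python
# Minimal sketch: rank candidate next chords in a C-major progression.
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}           # pitch classes of the C-major scale

def penalty(chord, previous, weights=(1.0, 0.5, 2.0)):
    w_key, w_leading, w_repeat = weights
    # Rule 1: penalize pitch classes outside the key.
    out_of_key = sum(1 for p in chord if p % 12 not in C_MAJOR)
    # Rule 2: penalize large voice-leading motion from the previous chord
    # (circular pitch-class distance, summed voice by voice).
    motion = sum(min(abs(a - b), 12 - abs(a - b))
                 for a, b in zip(sorted(chord), sorted(previous)))
    # Rule 3: penalize repeating the previous chord verbatim.
    repeat = 1.0 if set(chord) == set(previous) else 0.0
    return w_key * out_of_key + w_leading * motion + w_repeat * repeat

prev = (0, 4, 7)                           # C-major triad
candidates = {"F": (5, 9, 0), "G": (7, 11, 2),
              "C#dim": (1, 4, 7), "C": (0, 4, 7)}
ranked = sorted(candidates, key=lambda name: penalty(candidates[name], prev))
```

In the paper this kind of function is the fitness landscape the AIS minimizes; because the AIS finds multiple optima in parallel, several low-penalty candidates (here, for instance, both F and a chromatic neighbour) can be offered as alternative continuations rather than a single winner.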

2015

earGram Actors: An Interactive Audiovisual System Based on Social Behavior

Authors
Beyls, P; Bernardes, G; Caetano, M;

Publication
JOURNAL OF SCIENCE AND TECHNOLOGY OF THE ARTS

Abstract
In multi-agent systems, local interactions among system components following relatively simple rules often result in complex overall systemic behavior. Complex behavioral and morphological patterns have been used to generate and organize audiovisual systems with artistic purposes. In this work, we propose to use the Actor model of social interactions to drive a concatenative synthesis engine called earGram in real time. The Actor model was originally developed to explore the emergence of complex visual patterns. In turn, earGram was originally developed to facilitate the creative exploration of concatenative sound synthesis. The integrated audiovisual system allows a human performer to interact with the system dynamics while receiving visual and auditory feedback. The interaction happens indirectly by disturbing the rules governing the social relationships amongst the actors, which results in a wide range of dynamic spatiotemporal patterns. A user-performer thus improvises within the behavioral scope of the system while evaluating the apparent connections between parameter values and actual complexity of the system output.

Supervised Thesis

2015

Orquestração Musical Usando um Sistema Imunitário Artificial (Musical Orchestration Using an Artificial Immune System)

Author
José Miguel Lima de Abreu

Institution
UP-FEUP