About

Gilberto Bernardes holds a Ph.D. in Digital Media (2014) from the Universidade do Porto, completed under the auspices of the University of Texas at Austin, and a Master of Music cum laude (2008) from the Amsterdamse Hogeschool voor de Kunsten. Bernardes is currently an Assistant Professor at the Universidade do Porto and a Senior Researcher at INESC TEC, where he leads the Sound and Music Computing Lab. He has more than 90 publications, including 14 articles in peer-reviewed journals with a high impact factor (mostly Q1 and Q2 in Scimago) and 14 book chapters, co-authored with 152 international collaborators. Bernardes contributes continuously to the training of junior scientists: he is currently supervising six Ph.D. theses and has supervised more than 40 completed Master's dissertations.


He has received nine awards, including the Fraunhofer Portugal Prize for the best Ph.D. thesis and several best paper awards at conferences (e.g., DCE and CMMR). He has participated in 12 R&D projects as a senior and junior researcher. In the eight years since his Ph.D. defense, Bernardes has attracted competitive funding for a post-doctoral project funded by FCT and an exploratory grant for a market-based R&D prototype. Currently, he leads the Portuguese team (as Work Package leader) at INESC TEC on the Horizon Europe project EU-DIGIFOLK and the Erasmus+ project Open Minds. His latest contributions focus on cognition-inspired tonal music representations and sound synthesis. In his artistic activities, Bernardes has performed at distinguished music venues such as the Bimhuis, the Concertgebouw, Casa da Música, Berklee College of Music, New York University, and the Seoul Computer Music Festival.

Details

  • Name

    Gilberto Bernardes Almeida
  • Role

    Senior Researcher
  • Since

    14th July 2014
Publications

2025

Qualia Motion in Fourier Space: Formalizing Linear, Nondirected and Contrapuntal Ambiguity in Schoenberg's Op. 19, No. 1

Authors
Pereira, S; Bernardes, G; Martins, JO;

Publication
Music Theory Spectrum

Abstract
In this article, we formalize and analyze qualia motion, i.e., the process by which a composition transitions across distinct harmonic qualities through the Fourier qualia space (FQS)—a multidimensional and transposition-independent space based on the discrete Fourier transform (DFT) coefficients’ magnitude. In the FQS, the plot of set classes relies on their harmonic qualities—such as diatonicity and octatonicity—enabling us to (1) identify the pitch-class set in a musical phrase that best represents its qualia—a reference sonority; (2) define a harmonic progression using all sequential reference sonorities in a piece; (3) visualize trajectory in space; and (4) establish a statistical metric for the ambiguity of harmonic qualia. Finally, we discuss Schoenberg's Op. 19, No. 1, analyzing the sense of its harmonic path. The proposed space leverages a bipartite, symmetrical, and consequential structure and unveils ambiguity as an element of nondirected linearity and counterpoint.
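The transposition-independence of the FQS follows from a basic property of the DFT: transposing a pitch-class set circularly shifts its 12-dimensional indicator vector, which changes only the phases of the DFT coefficients, not their magnitudes. A minimal sketch of this property (the function name and the restriction to coefficients 1–6 are illustrative choices, not the authors' implementation):

```python
import numpy as np

def dft_magnitudes(pc_set):
    # 12-dimensional indicator (characteristic) vector of the pitch-class set
    v = np.zeros(12)
    v[list(pc_set)] = 1.0
    # Magnitudes of DFT coefficients 1..6; a transposition circularly shifts v,
    # which alters only the phases, so the magnitude profile is invariant
    return np.abs(np.fft.fft(v))[1:7]

# A C major triad and its transposition up a whole tone share one profile
c_major = dft_magnitudes({0, 4, 7})
d_major = dft_magnitudes({2, 6, 9})
```

Each of the six magnitudes indexes a harmonic quality (e.g., coefficient 5 tracks diatonicity), which is what allows set classes to be plotted by quality rather than by pitch content.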

2025

Motiv: A Dataset of Latent Space Representations of Musical Phrase Motions

Authors
Carvalho, N; Sousa, J; Bernardes, G; Portovedo, H;

Publication
Proceedings of the 20th International Audio Mostly Conference

Abstract
This paper introduces Motiv, a dataset of expert saxophonist recordings illustrating parallel, similar, oblique, and contrary motions. These motions are variations of three phrases from Jesús Villa-Rojo's "Lamento," with controlled similarities. The dataset includes 116 audio samples recorded by four tenor saxophonists, each annotated with descriptions of motions, musical scores, and latent space vectors generated using the VocalSet RAVE model. Motiv enables the analysis of motion types and their geometric relationships in latent spaces. Our preliminary dataset analysis shows that parallel motions align closely with original phrases, while contrary motions exhibit the largest deviations, and oblique motions show mixed patterns. The dataset also highlights the impact of individual performer nuances. Motiv supports a variety of music information retrieval (MIR) tasks, including gesture-based recognition, performance analysis, and motion-driven retrieval. It also provides insights into the relationship between human motion and music, contributing to real-time music interaction and automated performance systems.

2025

Explicit Tonal Tension Conditioning via Dual-Level Beam Search for Symbolic Music Generation

Authors
Ebrahimzadeh, Maral; Bernardes, Gilberto; Stober, Sebastian;

Publication

Abstract
State-of-the-art symbolic music generation models have recently achieved remarkable output quality, yet explicit control over compositional features, such as tonal tension, remains challenging. We propose a novel approach that integrates a computational tonal tension model, based on tonal interval vector analysis, into a Transformer framework. Our method employs a two-level beam search strategy during inference. At the token level, generated candidates are re-ranked using model probability and diversity metrics to maintain overall quality. At the bar level, a tension-based re-ranking is applied to ensure that the generated music aligns with a desired tension curve. Objective evaluations indicate that our approach effectively modulates tonal tension, and subjective listening tests confirm that the system produces outputs that align with the target tension. These results demonstrate that explicit tension conditioning through a dual-level beam search provides a powerful and intuitive tool to guide AI-generated music. Furthermore, our experiments demonstrate that our method can generate multiple distinct musical interpretations under the same tension condition.
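The two-level re-ranking described in this abstract can be illustrated with a toy sketch. The candidate generator, the tension proxy, and all names below are placeholders for illustration only, not the paper's model or tension formulation:

```python
def dual_level_select(bar_candidates, target_tension, tension_fn, top_k=3):
    """Toy two-level re-ranking: keep the top-k candidates by model
    log-probability (token level), then choose the one whose estimated
    tension is closest to the desired value (bar level)."""
    # Token level: each candidate is a (token_sequence, log_probability) pair
    survivors = sorted(bar_candidates, key=lambda c: c[1], reverse=True)[:top_k]
    # Bar level: re-rank survivors by distance to the target tension curve point
    return min(survivors, key=lambda c: abs(tension_fn(c[0]) - target_tension))

def toy_tension(seq):
    # Placeholder tension proxy: mean absolute pitch interval within the bar
    return sum(abs(b - a) for a, b in zip(seq, seq[1:])) / max(len(seq) - 1, 1)
```

The split mirrors the abstract's design: quality is protected at the token level (only probable, diverse continuations survive), while the musical constraint is enforced once per bar, where tension is actually defined.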

2025

Toward Musicologically-Informed Retrieval: Enhancing MEI with Computational Metadata

Authors
Carvalho, Nádia; Bernardes, Gilberto;

Publication

Abstract
We present a metadata enrichment framework for Music Encoding Initiative (MEI) files, featuring mid- to higher-level multimodal features to support content-driven (similarity) retrieval with semantic awareness across large collections. While traditional metadata captures basic bibliographic and structural elements, it often lacks the depth required for advanced retrieval tasks that rely on musical phrases, form, key or mode, idiosyncratic patterns, and textual topics. To address this, we propose a system that fosters the computational analysis and edition of MEI encodings at scale. Inserting extended metadata derived from computational analysis and heuristic rules lays the groundwork for more nuanced retrieval tools. A batch environment and a lightweight JavaScript web-based application propose a complementary workflow by offering large-scale annotations and an interactive environment for reviewing, validating, and refining MEI files' metadata. Development is informed by user-centered methodologies, including consultations with music editors and digital musicologists, and has been co-designed in the context of orally transmitted folk music traditions, ensuring that both the batch processes and interactive tools align with scholarly and domain-specific needs.

2025

Computational Phrase Segmentation of Iberian Folk Traditions: An Optimized LBDM Model

Authors
Orouji, Amir Abbas; Carvalho, Nadia; Sá Pinto, António; Bernardes, Gilberto;

Publication

Abstract
Phrase segmentation is a fundamental preprocessing step for computational folk music similarity, specifically in identifying tune families within digital corpora. Furthermore, recent literature increasingly recognizes the need for tradition-specific frameworks that accommodate the structural idiosyncrasies of each tradition. In this context, this study presents a culturally informed adaptation of the established rule-based Local Boundary Detection Model (LBDM) algorithm to underrepresented Iberian folk repertoires. Our methodological enhancement expands the LBDM baseline, which traditionally analyzes rests, pitch intervals, and inter-onset duration functions to identify potential segmentation boundaries, by integrating a sub-structure surface repetition function coupled with an optimized peak-selection algorithm. Furthermore, we implement a genetic algorithm to maximize segmentation accuracy by weighting coefficients for each function while calibrating the meta-parameters of the peak-selection process. Empirical evaluation on the I-Folk digital corpus, comprising 802 symbolically encoded folk melodies from Portuguese and Spanish traditions, demonstrates improvements in segmentation F-measure of six and sixteen percentage points (p.p.) relative to established baseline methodologies for Portuguese and Spanish repertoires, respectively.
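The LBDM baseline that this study extends can be sketched in a few lines. The simplified version below handles a single non-negative parameter sequence (e.g., pitch-interval magnitudes) and deliberately omits the repetition function, peak selection, and genetic-algorithm weighting that the paper adds; function names are illustrative:

```python
def degree_of_change(xs):
    # Relative change between consecutive interval values; r[i] compares
    # xs[i-1] and xs[i], with zero padding at both ends of the sequence.
    # Assumes non-negative values (interval magnitudes, durations, rests).
    r = [0.0]
    for a, b in zip(xs, xs[1:]):
        r.append(abs(b - a) / (a + b) if (a + b) else 0.0)
    r.append(0.0)
    return r

def lbdm_boundary_strength(xs):
    # Each interval's strength is its value scaled by the change degrees
    # on either side; local peaks mark phrase-boundary candidates
    r = degree_of_change(xs)
    return [x * (r[i] + r[i + 1]) for i, x in enumerate(xs)]
```

For example, in the pitch-interval sequence `[2, 2, 2, 7, 2, 2]` the leap of 7 dominates the strength profile, so a boundary would be proposed there; the full model combines several such profiles with learned weights.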