2025
Authors
Orouji, Amir Abbas; Carvalho, Nadia; Sá Pinto, António; Bernardes, Gilberto;
Publication
Abstract
Phrase segmentation is a fundamental preprocessing step for computational folk music similarity, specifically in identifying tune families within digital corpora. Moreover, recent literature increasingly recognizes the need for tradition-specific frameworks that accommodate the structural idiosyncrasies of each tradition. In this context, this study presents a culturally informed adaptation of the established rule-based Local Boundary Detection Model (LBDM) algorithm to underrepresented Iberian folk repertoires. Our methodological enhancement expands the LBDM baseline, which traditionally analyzes rests, pitch intervals, and inter-onset duration functions to identify potential segmentation boundaries, by integrating a sub-structure surface repetition function coupled with an optimized peak-selection algorithm. Additionally, we implement a genetic algorithm to maximize segmentation accuracy by tuning the weighting coefficients of each function and calibrating the meta-parameters of the peak-selection process. Empirical evaluation on the I-Folk digital corpus, comprising 802 symbolically encoded folk melodies from Portuguese and Spanish traditions, demonstrates improvements in segmentation F-measure of six and sixteen percentage points (p.p.) relative to established baseline methodologies for Portuguese and Spanish repertoires, respectively.
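For readers unfamiliar with the LBDM baseline referenced above, the sketch below illustrates the standard degree-of-change formulation and a weighted combination of parametric profiles with naive peak picking. This is a minimal illustration, not the authors' extended implementation: the repetition function, optimized peak selection, and genetic-algorithm-tuned coefficients described in the abstract are not reproduced, and the weights and threshold shown are placeholder values.

```python
import numpy as np

def change(x, y):
    """Degree-of-change between two successive values (Cambouropoulos' LBDM formulation)."""
    return abs(x - y) / (x + y) if (x + y) != 0 else 0.0

def lbdm_profile(values):
    """Normalized boundary-strength profile for one parametric sequence
    (non-negative magnitudes, e.g., absolute pitch intervals, IOIs, or rest durations)."""
    n = len(values)
    strength = np.zeros(n)
    for i in range(n):
        left = change(values[i - 1], values[i]) if i > 0 else 0.0
        right = change(values[i], values[i + 1]) if i < n - 1 else 0.0
        strength[i] = values[i] * (left + right)
    peak = strength.max()
    return strength / peak if peak > 0 else strength

def combined_boundaries(pitch_int, ioi, rests, weights=(0.25, 0.5, 0.25), threshold=0.5):
    """Weighted sum of the three normalized profiles followed by naive peak picking.
    The three sequences are assumed to be aligned per note; the weights and threshold
    here are illustrative, whereas the paper tunes them with a genetic algorithm and
    adds a repetition-based profile on top of these three."""
    profile = (weights[0] * lbdm_profile(pitch_int)
               + weights[1] * lbdm_profile(ioi)
               + weights[2] * lbdm_profile(rests))
    # Flag a boundary at local maxima exceeding the threshold.
    return [i for i in range(1, len(profile) - 1)
            if profile[i] > threshold
            and profile[i] >= profile[i - 1]
            and profile[i] >= profile[i + 1]]
```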
2025
Authors
Barboza, JR; Bernardes, G; Magalhães, E;
Publication
2025 Immersive and 3D Audio: from Architecture to Automotive (I3DA)
Abstract
Music production has long been characterized by well-defined concepts and techniques. However, a notable gap exists in applying these established principles to music production within immersive media. This paper addresses this gap by examining post-production processes applied to three case studies, i.e., three songs with unique instrumental features and narratives. The primary objective is to facilitate an in-depth analysis of technical and artistic challenges in musical production for immersive media. From a detailed analysis of technical and artistic post-production decisions in the three case studies and a critical examination of theories and techniques from sound design and music production, we propose a framework with a tripartite mixing categorization for immersive media: Traditional Production, Expanded Traditional Production, and Nontraditional Production. These concepts expand music production methodologies in the context of immersive media, offering a framework for understanding the complexities of spatial audio. By exploring these interdisciplinary connections, we aim to enrich the discourse surrounding music production, rethinking its conceptual plane into more integrative media practices outside the core music production paradigm, thus contributing to developing innovative production methodologies. © 2025 IEEE.
2025
Authors
Gea, Daniel; Bernardes, Gilberto;
Publication
Abstract
Building on theories of human sound perception and spatial cognition, this paper introduces a sonification method that facilitates navigation through auditory cues. These cues help users recognize objects and key urban architectural elements, encoding their semantic and spatial properties using non-speech audio signals. The study reviews advances in object detection and sonification methodologies, proposing a novel approach that maps semantic properties (i.e., material, width, interaction level) to timbre, pitch, and gain modulation, and spatial properties (i.e., distance, position, elevation) to gain, panning, and melodic sequences. We adopted a three-phase methodology to validate our method. First, we selected sounds to represent the object’s materials based on the acoustic properties of crowdsourced annotated samples. Second, we conducted an online perceptual experiment to evaluate intuitive mappings between sounds and object semantic attributes. Finally, in-person navigation experiments were conducted in virtual reality to assess semantic and spatial recognition. The results demonstrate a notable perceptual differentiation between materials, with a global accuracy of .69 ± .13 and a mean navigation accuracy of .73 ± .16, highlighting the method’s effectiveness. Furthermore, the results suggest a need for improved sound-object associations and reveal demographic factors that influence sound perception.
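As an illustration of the spatial branch of the mapping described above, the sketch below encodes distance as gain attenuation, horizontal position as equal-power stereo panning, and elevation as a pitch offset. The specific curves, reference values, and function names are assumptions for illustration only; the study's timbre selection from crowdsourced material samples and its melodic-sequence encoding are not reproduced here.

```python
import math

def distance_to_gain(distance_m, ref_m=1.0):
    """Inverse-distance attenuation: nearer objects sound louder (illustrative law)."""
    return min(1.0, ref_m / max(distance_m, ref_m))

def azimuth_to_pan(azimuth_deg):
    """Equal-power stereo panning from horizontal position (-90 = hard left, +90 = hard right)."""
    theta = math.radians((azimuth_deg + 90.0) / 2.0)   # map -90..+90 degrees to 0..90 degrees
    return math.cos(theta), math.sin(theta)            # (left gain, right gain)

def elevation_to_pitch_steps(elevation_deg, steps_per_octave=12, max_elev=45.0):
    """Higher objects are rendered with higher pitches (semitone offset, clamped)."""
    clamped = max(-max_elev, min(max_elev, elevation_deg))
    return round(clamped / max_elev * steps_per_octave)

# Hypothetical object: 3 m away, 30 degrees to the right, slightly below eye level.
gain = distance_to_gain(3.0)
left, right = azimuth_to_pan(30.0)
pitch_offset = elevation_to_pitch_steps(-10.0)
print(gain, (left, right), pitch_offset)
```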
2025
Authors
Santos, Natália; Bernardes, Gilberto;
Publication
Abstract
Music therapy has emerged as a promising approach to support various mental health conditions, offering non-pharmacological therapies with evidence of improved well-being. Rapid advancements in artificial intelligence (AI) have recently opened new possibilities for ‘personalized’ musical interventions in mental health care. This article explores the application of AI in the context of mental health, focusing on the use of machine learning (ML), deep learning (DL), and generative music (GM) to personalize musical interventions. The methodology included a scoping review in the Scopus and PubMed databases, using keywords denoting emerging AI technologies, music-related contexts, and application domains within mental health and well-being. Identified research lines encompass the analysis and generation of emotional patterns in music using ML, DL, and GM techniques to create musical experiences adapted to user needs. The results highlight that these technologies effectively promote emotional and cognitive well-being, enabling personalized interventions that expand mental health therapies.
2025
Authors
Braga, F; Bernardes, G; Dannenberg, RB; Correia, N;
Publication
PROCEEDINGS OF THE THIRTY-FOURTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE (IJCAI 2025)
Abstract
This paper describes an approach to algorithmic music composition that takes narrative structures as input, allowing composers to create music directly from narrative elements. Creating narrative development in music remains a challenging task in algorithmic composition. Our system addresses this by combining leitmotifs to represent characters, generative grammars for harmonic coherence, and evolutionary algorithms to align musical tension with narrative progression. The system operates at different scales, from overall plot structure to individual motifs, enabling both autonomous composition and co-creation with varying degrees of user control. Evaluation with compositions based on tales demonstrated the system's ability to compose music that supports narrative listening and aligns with its source narratives, while being perceived as familiar and enjoyable.
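A minimal sketch of the tension-alignment idea behind the evolutionary component described above: a fitness function that rewards candidate music whose tension curve tracks the narrative tension arc. The actual system's tension model, leitmotif handling, and grammar-based harmony are not reproduced; both curves are simply assumed to be given as numeric sequences.

```python
import numpy as np

def resample(curve, n):
    """Linearly resample a tension curve to n points so curves of different lengths compare."""
    curve = np.asarray(curve, dtype=float)
    return np.interp(np.linspace(0, 1, n), np.linspace(0, 1, len(curve)), curve)

def tension_fitness(musical_tension, narrative_tension, n=64):
    """Fitness for an evolutionary search: how closely a candidate's musical tension
    follows the narrative tension arc (higher is better, 0 is perfect alignment)."""
    m = resample(musical_tension, n)
    t = resample(narrative_tension, n)
    # Normalize both curves to [0, 1] so only their shapes are compared.
    m = (m - m.min()) / (np.ptp(m) or 1.0)
    t = (t - t.min()) / (np.ptp(t) or 1.0)
    return -float(np.mean((m - t) ** 2))   # negative mean squared error

# Example: a rising-then-falling narrative arc scored against a candidate.
print(tension_fitness([0.1, 0.4, 0.9, 0.5, 0.2], [0.0, 0.5, 1.0, 0.4, 0.1]))
```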
2025
Authors
Rodriguez, JF; Bernardes, G;
Publication
PROCEEDINGS OF THE 12TH INTERNATIONAL CONFERENCE ON DIGITAL LIBRARIES FOR MUSICOLOGY, DLFM 2025
Abstract
Folk music and particularly children's folk songs serve as vital repositories of cultural identity, emotional expression, and social values. This study presents a computational thematic analysis of Portuguese and Spanish children's folk songs using the I-Folk corpus, comprising 800 annotated entries in the Music Encoding Initiative (MEI) format. Despite shared historical influences on the Iberian Peninsula, the lyrical content of each tradition reveals distinct thematic orientations. Through a methodological framework that combines traditional text pre-processing, frequency analysis, and semantic embedding using large language models (LLMs), we uncover cross-cultural similarities and divergences in content, form, and emotional register. Spanish lyrics focus primarily on caregiving, emotional development, and moral-religious motifs, while Portuguese songs emphasize performative rhythm, localized identity, and folkloric references. Our results highlight the need for tailored analytical strategies when working with children's repertoire and demonstrate the utility of LLMs in capturing culturally embedded patterns that are often obscured in conventional analyses. This work contributes to digital folklore scholarship, corpus-based ethnomusicology, and the preservation of underrepresented cultural expressions in computational humanities.
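A hypothetical sketch of the semantic-embedding step described above, pairing a multilingual sentence-embedding model with clustering. The library, model name, lyric excerpts, and cluster count are assumptions for illustration; the paper's actual pipeline, pre-processing, and corpus files are not reproduced here.

```python
# Assumed stack: sentence-transformers for embeddings, scikit-learn for clustering.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

# Placeholder excerpts of well-known Portuguese and Spanish children's songs;
# the real analysis runs over the full 800-song I-Folk corpus.
lyrics = [
    ("pt", "Atirei o pau ao gato ..."),
    ("es", "Arroz con leche ..."),
]

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # handles PT and ES
embeddings = model.encode([text for _, text in lyrics])

# Group songs by semantic similarity; cluster labels can then be inspected against
# frequency-based keywords to name cross-cultural themes.
labels = KMeans(n_clusters=2, n_init=10).fit_predict(embeddings)
for (lang, text), label in zip(lyrics, labels):
    print(lang, label, text[:30])
```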