2022
Authors
Clement, A; Bernardes, G;
Publication
MULTIMODAL TECHNOLOGIES AND INTERACTION
Abstract
Digital musical instruments have become increasingly prevalent in musical creation and production. Optimizing their usability and, particularly, their expressiveness has become essential to their study and practice. The absence of the multimodal feedback present in traditional acoustic instruments has been identified as an obstacle to full performer-instrument interaction, particularly due to the lack of embodied control. Mobile-based digital musical instruments present a particular case by natively providing the possibility of enriching basic auditory feedback with additional multimodal feedback. In the experiment presented in this article, we focused on using visual and haptic feedback to support and enrich auditory content and evaluated the impact on basic musical tasks (i.e., note pitch tuning accuracy and time). The experiment implemented a protocol based on presenting several musical note examples to participants and asking them to reproduce them, with their performance compared across different multimodal feedback combinations. Results show that additional visual feedback reduced user hesitation in pitch tuning, allowing users to reach the proximity of desired notes in less time. Nonetheless, neither visual nor haptic feedback was found to significantly impact pitch tuning time and accuracy compared to auditory-only feedback.
2022
Authors
Forero, J; Bernardes, G; Mendes, M;
Publication
MM 2022 - Proceedings of the 30th ACM International Conference on Multimedia
Abstract
Emotional Machines is an interactive installation that builds affective virtual environments through spoken language. In response to the existing limitations of emotion recognition models incorporating computer vision and electrophysiological activity, whose sources are hindered by a head-mounted display, we propose the adoption of speech emotion recognition (from the audio signal) and semantic sentiment analysis. In detail, we use two machine learning models to predict three main emotional categories from high-level semantic and low-level speech features. Output emotions are mapped to an audiovisual representation by an end-to-end process. We use a generative model of chord progressions to transfer speech emotion into music and an image synthesized from the text (transcribed from the user's speech). The generated image is used as the style source in the style-transfer process onto an equirectangular projection image target selected for each emotional category. The installation is an immersive virtual space encapsulating emotions in spheres arranged in a 3D environment. Thus, users can create new affective representations or interact with other previously encoded instances using joysticks.
2022
Authors
Bernardo, G; Bernardes, G;
Publication
Personal and Ubiquitous Computing
Abstract
2022
Authors
Bernardes, G; Carvalho, N; Pereira, S;
Publication
JOURNAL OF NEW MUSIC RESEARCH
Abstract
FluidHarmony is an algorithmic method for defining a hierarchical harmonic lexicon in equal temperaments. It utilizes an enharmonic weighted Fourier transform space to represent pitch class set (pcset) relations. The method ranks pcsets based on user-defined constraints: the importance of interval classes (ICs) and a reference pcset. Evaluation on 5,184 Western musical pieces from the 16th to 20th centuries shows that FluidHarmony captures 8% of the corpus's harmony in its top pcsets. This highlights the role of ICs and a reference pcset in regulating harmony in Western tonal music while enabling systematic approaches to defining hierarchies and establishing metrics beyond 12-TET.
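The ranking idea described above can be illustrated with a minimal sketch. This is not the published FluidHarmony algorithm (which operates in a weighted Fourier transform space); it is a simplified, hypothetical variant that scores pcsets by a user-weighted interval-class profile, just to show how IC weights can induce a harmonic hierarchy.

```python
from itertools import combinations

def interval_class_vector(pcset):
    """6-bin interval-class vector of a pitch-class set in 12-TET."""
    icv = [0] * 6
    for a, b in combinations(sorted(pcset), 2):
        ic = min((b - a) % 12, (a - b) % 12)  # fold intervals into classes 1..6
        icv[ic - 1] += 1
    return icv

def rank_pcsets(pcsets, ic_weights):
    """Order pcsets by the weighted sum of their interval-class counts."""
    def score(s):
        return sum(w * c for w, c in zip(ic_weights, interval_class_vector(s)))
    return sorted(pcsets, key=score, reverse=True)

# Example: weight interval class 5 (perfect fourths/fifths) only.
sets = [
    frozenset({0, 4, 8}),  # augmented triad: no IC5
    frozenset({0, 4, 7}),  # major triad: one IC5
    frozenset({0, 2, 7}),  # sus2 trichord: two IC5s
]
ranking = rank_pcsets(sets, ic_weights=[0, 0, 0, 0, 1, 0])
# sus2 ranks first, the augmented triad last
```

Under these weights, pcsets rich in fifths rise to the top of the lexicon; changing the weight vector reshapes the hierarchy, which is the kind of user-defined constraint the abstract refers to.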
2022
Authors
Almeida, F; Bernardes, G; Weiß, C;
Publication
Proceedings of the 23rd International Society for Music Information Retrieval Conference, ISMIR 2022
Abstract
The extraction of harmonic information from musical audio is fundamental for several music information retrieval tasks. In this paper, we propose novel harmonic audio features based on the perceptually-inspired tonal interval vector space, computed as the Fourier transform of chroma vectors. Our contribution includes mid-level features for musical dissonance, chromaticity, dyadicity, triadicity, diminished quality, diatonicity, and whole-toneness. Moreover, we quantify the perceptual relationship between short- and long-term harmonic structures, tonal dispersion, harmonic changes, and complexity. Beyond computation on fixed-size windows, we propose a context-sensitive harmonic segmentation approach. We assess the robustness of the new harmonic features in style classification tasks regarding classical music periods and composers. Our results align with, and slightly outperform, existing features and suggest that musical properties beyond those in the state-of-the-art literature are partially captured. We discuss the features regarding their musical interpretation and compare the different feature groups regarding their effectiveness for discriminating classical music periods and composers. © F. Almeida, G. Bernardes, and C. Weiß.
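The core representation named in the abstract, the Fourier transform of chroma vectors, can be sketched in a few lines. The coefficient weights below are placeholders, not the published tonal interval space weights, and the feature naming in the comments is illustrative only.

```python
import numpy as np

def tonal_interval_vector(chroma, weights=(1.0,) * 6):
    """Sketch of a tonal-interval-vector computation: the 12-point DFT of a
    chroma vector, keeping complex coefficients k = 1..6. The weights here
    are illustrative defaults, not the published perceptual weights."""
    chroma = np.asarray(chroma, dtype=float)
    chroma = chroma / chroma.sum()        # normalize to a pitch-class distribution
    spectrum = np.fft.fft(chroma)         # 12-point DFT of the chroma vector
    return spectrum[1:7] * np.asarray(weights)

# C major triad as a binary chroma vector (pitch classes C, E, G).
c_major = [1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0]
tiv = tonal_interval_vector(c_major)

# The magnitudes of individual coefficients relate to harmonic qualities
# (e.g. diatonicity, triadicity, whole-toneness); mid-level features of the
# kind described above can be derived from them.
magnitudes = np.abs(tiv)
```

For a major triad, some coefficient magnitudes are markedly larger than others, which is what makes these DFT coefficients usable as discriminative harmonic descriptors.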
2022
Authors
Fonseca, SM; Cunha, S; Campos, R; Faria, S; Silva, M; Ramos, MJ; Azevedo, G; Barbosa, AR; Queirós, C;
Publication
Análise Psicológica
Abstract