
Publications by CTM

2022

Assessing the Influence of Multimodal Feedback in Mobile-Based Musical Task Performance

Authors
Clement, A; Bernardes, G;

Publication
MULTIMODAL TECHNOLOGIES AND INTERACTION

Abstract
Digital musical instruments have become increasingly prevalent in musical creation and production. Optimizing their usability and, particularly, their expressiveness has become essential to their study and practice. The absence of multimodal feedback, present in traditional acoustic instruments, has been identified as an obstacle to complete performer-instrument interaction, particularly due to the lack of embodied control. Mobile-based digital musical instruments are a particular case, as they natively provide the possibility of enriching basic auditory feedback with additional multimodal feedback. In the experiment presented in this article, we focused on using visual and haptic feedback to support and enrich auditory content, in order to evaluate the impact on basic musical tasks (i.e., note pitch tuning accuracy and time). The experiment implemented a protocol based on presenting several musical note examples to participants and asking them to reproduce them, with their performance compared across different multimodal feedback combinations. Results show that additional visual feedback reduced user hesitation in pitch tuning, allowing users to reach the proximity of the desired notes in less time. Nonetheless, neither visual nor haptic feedback significantly impacted pitch tuning time and accuracy compared to auditory-only feedback.
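The pitch tuning task above is conventionally scored as a deviation in cents from the target note. A minimal sketch of that standard measure (the helper name and example values are ours, not the authors'):

```python
# Pitch-tuning error in cents between a produced and a target frequency.
# 100 cents = one equal-tempered semitone; the formula is standard,
# the helper name and example values are illustrative.
import math

def cents_error(f_produced: float, f_target: float) -> float:
    """Signed deviation in cents of f_produced from f_target."""
    return 1200.0 * math.log2(f_produced / f_target)

print(round(cents_error(445.0, 440.0), 1))  # a slightly sharp A4: ~19.6 cents
```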

2022

Emotional machines: Toward affective virtual environments

Authors
Forero, J; Bernardes, G; Mendes, M;

Publication
MM 2022 - Proceedings of the 30th ACM International Conference on Multimedia

Abstract
Emotional Machines is an interactive installation that builds affective virtual environments from spoken language. In response to the limitations of emotion recognition models based on computer vision and electrophysiological activity, whose input sources are hindered by a head-mounted display, we propose the adoption of speech emotion recognition (from the audio signal) and semantic sentiment analysis. In detail, we use two machine learning models to predict three main emotional categories from high-level semantic and low-level speech features. Output emotions are mapped to an audiovisual representation by an end-to-end process. We use a generative model of chord progressions to transfer speech emotion into music, together with an image synthesized from text (transcribed from the user's speech). The generated image is used as the style source in a style-transfer process onto an equirectangular projection image target selected for each emotional category. The installation is an immersive virtual space encapsulating emotions in spheres arranged in a 3D environment. Thus, users can create new affective representations or interact with previously encoded instances using joysticks.
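The abstract describes a two-branch pipeline: one model for speech emotion recognition over low-level audio features and one for semantic sentiment over the transcript, fused into one of three emotional categories. A minimal sketch of such a late-fusion scheme, assuming placeholder features, category names, and an averaging fusion rule (none of which are specified in the paper):

```python
# Illustrative two-branch emotion classifier with late fusion.
# Features, category names, and the fusion rule are assumptions for this sketch.
import numpy as np

EMOTIONS = ["calm", "happy", "sad"]  # hypothetical category names

def speech_emotion_probs(audio: np.ndarray) -> np.ndarray:
    """Low-level branch: crude arousal proxy from signal energy."""
    arousal = min(float(np.sqrt(np.mean(audio ** 2))) * 10.0, 1.0)
    return np.array([1.0 - arousal, 0.5 * arousal, 0.5 * arousal])

def text_sentiment_probs(transcript: str) -> np.ndarray:
    """High-level branch: keyword-based semantic sentiment."""
    text = transcript.lower()
    pos = sum(w in text for w in ("love", "joy", "calm"))
    neg = sum(w in text for w in ("sad", "fear", "alone"))
    logits = np.array([1.0, 1.0 + pos, 1.0 + neg])
    return logits / logits.sum()

def predict_emotion(audio: np.ndarray, transcript: str) -> str:
    # Late fusion: average the two branches' probability estimates.
    fused = 0.5 * speech_emotion_probs(audio) + 0.5 * text_sentiment_probs(transcript)
    return EMOTIONS[int(np.argmax(fused))]

print(predict_emotion(np.random.randn(16000) * 0.05, "I feel alone tonight"))
```

The predicted category would then drive both the chord progression generator and the text-to-image style-transfer stage described above.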

2022

Leveraging compatibility and diversity in computer-aided music mashup creation

Authors
Bernardo, G; Bernardes, G;

Publication
Personal and Ubiquitous Computing

Abstract
We advance Mixmash-AIS, a multimodal optimization music mashup creation model for loop recombination at scale. Our motivation is to (1) tackle current scalability limitations in state-of-the-art (brute-force) computational mashup models while enforcing (2) the compatibility of audio loops and (3) a pool of diverse mashups that can accommodate user preferences. To this end, we adopt the artificial immune system (AIS) opt-aiNet algorithm to efficiently compute a population of compatible and diverse music mashups from loop recombinations. Optimal mashups correspond to local minima in a feature space representing harmonic, rhythmic, and spectral musical audio compatibility. We objectively assess the compatibility, diversity, and computational performance of Mixmash-AIS-generated mashups against a standard genetic algorithm (GA) and a brute-force (BF) approach. Furthermore, we conducted a perceptual test to validate that the objective evaluation function within Mixmash-AIS captures user enjoyment of the computer-generated loop mashups. Our results show that, while the GA stands as the most efficient algorithm, the AIS opt-aiNet outperforms both the GA and BF approaches in terms of compatibility and diversity. Our listening test showed that the Mixmash-AIS objective evaluation function significantly captures the perceptual compatibility of loop mashups (p < .001).
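The opt-aiNet algorithm adopted here is an immune-inspired multimodal optimizer: a population of candidate solutions is cloned and mutated, and overly similar survivors are suppressed so that several distinct optima coexist. A rough sketch of that loop for loop-index mashups, with a placeholder fitness standing in for the paper's harmonic/rhythmic/spectral compatibility features:

```python
# Sketch of an opt-aiNet-style search over loop mashups (hypothetical
# encoding and fitness; not the authors' implementation).
import random

N_LOOPS = 16       # hypothetical loop library size
MASHUP_SIZE = 3    # loops combined per mashup
CLONES = 5         # clones per surviving cell
SUPPRESS_T = 2     # suppression threshold (number of differing positions)

def fitness(mashup):
    """Placeholder compatibility score; the paper instead evaluates harmonic,
    rhythmic, and spectral audio compatibility of the combined loops."""
    return -sum(abs(a - b) for a, b in zip(mashup, mashup[1:]))

def mutate(mashup, rate=0.3):
    # opt-aiNet scales mutation inversely with fitness; a fixed rate keeps this short.
    return tuple(random.randrange(N_LOOPS) if random.random() < rate else g
                 for g in mashup)

def distance(a, b):
    return sum(x != y for x, y in zip(a, b))

population = [tuple(random.choices(range(N_LOOPS), k=MASHUP_SIZE)) for _ in range(10)]
for _ in range(50):
    clones = [mutate(cell) for cell in population for _ in range(CLONES)]
    ranked = sorted(set(population + clones), key=fitness, reverse=True)
    # Network suppression: keep only mutually distant cells to preserve diversity.
    population = []
    for cell in ranked:
        if all(distance(cell, kept) >= SUPPRESS_T for kept in population):
            population.append(cell)
    population = population[:20]

print("compatible, diverse mashups:", population[:5])
```

The suppression step is what yields a pool of diverse mashups, rather than a single global optimum, matching the paper's stated goal of accommodating user preferences.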

2022

FluidHarmony: Defining an equal-tempered and hierarchical harmonic lexicon in the Fourier space

Authors
Bernardes, G; Carvalho, N; Pereira, S;

Publication
JOURNAL OF NEW MUSIC RESEARCH

Abstract
FluidHarmony is an algorithmic method for defining a hierarchical harmonic lexicon in equal temperaments. It utilizes an enharmonic weighted Fourier transform space to represent pitch class set (pcset) relations. The method ranks pcsets based on user-defined constraints: the importance of interval classes (ICs) and a reference pcset. An evaluation on 5,184 Western musical pieces from the 16th to 20th centuries shows that FluidHarmony captures 8% of the corpus's harmony in its top pcsets. This highlights the role of ICs and a reference pcset in regulating harmony in Western tonal music, while enabling systematic approaches to defining hierarchies and establishing metrics beyond 12-TET.
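The Fourier space in question builds on the discrete Fourier transform of a pcset's characteristic function, whose coefficient magnitudes measure the set's alignment with interval cycles. A minimal sketch of those magnitudes in 12-TET; the paper's user-defined interval-class weighting is omitted here:

```python
# DFT coefficient magnitudes of a pitch-class set in 12-TET; FluidHarmony's
# weighted Fourier space builds on this representation (weights omitted here).
import numpy as np

def dft_magnitudes(pcset, edo=12):
    """|X_k| for k = 1..edo//2 of the pcset's characteristic function.
    Coefficient k reflects alignment with the k-th interval cycle."""
    chroma = np.zeros(edo)
    chroma[list(pcset)] = 1.0
    return np.abs(np.fft.fft(chroma)[1 : edo // 2 + 1])

print(dft_magnitudes({0, 4, 7}).round(2))            # C major triad
print(dft_magnitudes({0, 2, 4, 6, 8, 10}).round(2))  # whole-tone set: peaks at k = 6
```

Because the transform generalizes to any `edo`, the same machinery supports the hierarchies and metrics beyond 12-TET mentioned in the abstract.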

2022

MID-LEVEL HARMONIC AUDIO FEATURES FOR MUSICAL STYLE CLASSIFICATION

Authors
Almeida, F; Bernardes, G; Weiß, C;

Publication
Proceedings of the 23rd International Society for Music Information Retrieval Conference, ISMIR 2022

Abstract
The extraction of harmonic information from musical audio is fundamental for several music information retrieval tasks. In this paper, we propose novel harmonic audio features based on the perceptually inspired tonal interval vector space, computed as the Fourier transform of chroma vectors. Our contribution includes mid-level features for musical dissonance, chromaticity, dyadicity, triadicity, diminished quality, diatonicity, and whole-toneness. Moreover, we quantify the perceptual relationship between short- and long-term harmonic structures, tonal dispersion, harmonic changes, and complexity. Beyond computation on fixed-size windows, we propose a context-sensitive harmonic segmentation approach. We assess the robustness of the new harmonic features in style classification tasks regarding classical music periods and composers. Our results align with, and slightly outperform, existing features, suggesting that they partially capture musical properties beyond those in the state-of-the-art literature. We discuss the features regarding their musical interpretation and compare the different feature groups regarding their effectiveness in discriminating classical music periods and composers.
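Concretely, a tonal interval vector is obtained by taking the DFT of an energy-normalized chroma vector and weighting the first six coefficients; mid-level features then reduce that vector to scalars. A sketch under assumed (placeholder) weights, with a dissonance-style proxy as one example feature:

```python
# Tonal-interval-vector-style feature from a chroma vector: the weighted
# Fourier transform of chroma. The coefficient weights below are placeholders,
# not the published TIV weights.
import numpy as np

W = np.array([2.0, 11.0, 17.0, 16.0, 19.0, 7.0])  # placeholder weights

def tonal_interval_vector(chroma):
    """Weighted DFT coefficients k = 1..6 of an energy-normalized chroma vector."""
    c = np.asarray(chroma, dtype=float)
    c = c / c.sum()                # normalize so overall loudness cancels out
    return W * np.fft.fft(c)[1:7]  # complex coefficients k = 1..6

def dissonance(chroma):
    """Mid-level dissonance proxy: 1 minus the TIV magnitude, scaled to [0, 1]."""
    return 1.0 - np.linalg.norm(tonal_interval_vector(chroma)) / np.linalg.norm(W)

c_major = np.zeros(12); c_major[[0, 4, 7]] = 1.0   # C major triad chroma
tritone = np.zeros(12); tritone[[0, 6]] = 1.0      # bare tritone chroma
print(round(dissonance(c_major), 3), round(dissonance(tritone), 3))
```

The other features named above (chromaticity, diatonicity, whole-toneness, etc.) would similarly read off individual weighted coefficients of the same vector.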

2022

Medical rescuers’ occupational health during COVID-19: Contribution of coping and emotion regulation on burnout, trauma and post-traumatic growth

Authors
Fonseca, SM; Cunha, S; Campos, R; Faria, S; Silva, M; Ramos, MJ; Azevedo, G; Barbosa, AR; Queirós, C;

Publication
Análise Psicológica

Abstract
The COVID-19 pandemic poses unique challenges to medical rescuers' occupational health. Thus, it is crucial to assess its direct and indirect impacts on key psychological outcomes and adaptation strategies. This study aims to analyse the impact of the pandemic on medical rescuers' coping and emotion regulation strategies and on their levels of work-related psychological outcomes, such as burnout, trauma, and post-traumatic growth. Additionally, it aims to analyse the contribution of the coping and emotion regulation strategies employed to manage the COVID-19 pandemic to burnout, trauma, and post-traumatic growth. A sample of 111 medical rescuers answered the Brief COPE, the Emotion Regulation Questionnaire, the Oldenburg Burnout Inventory, the Impact of Event Scale-Revised, and the Post-Traumatic Growth Inventory. Medical rescuers had resorted moderately to coping and emotion regulation strategies since the beginning of COVID-19. They presented moderate burnout and post-traumatic growth and low trauma. Coping carried a higher weight than emotion regulation on burnout, trauma, and post-traumatic growth. Expressive suppression and dysfunctional coping predicted burnout and trauma, while problem- and emotion-focused coping predicted post-traumatic growth. Dysfunctional coping mediated and thus exacerbated the effect of expressive suppression on burnout and on trauma. Practitioners should pay closer attention to professionals with higher burnout and trauma. Occupational practices should focus on reducing dysfunctional coping and expressive suppression and on promoting problem-focused coping.
