About

Gilberto Bernardes pursues a multifaceted career as a musician, professor, and researcher in sound and music computing. He holds a Ph.D. in digital media from the University of Porto and a Master of Music, cum laude, from the Amsterdamse Hogeschool voor de Kunsten. His research agenda focuses on sampling-based synthesis techniques and pitch spaces, and its findings have been reported in over 60 scientific publications. His artistic activity includes regular concerts at renowned venues such as the Asia Culture Center (Korea), New York University (USA), the Concertgebouw (the Netherlands), and Casa da Música (Portugal). Bernardes is currently an Assistant Professor at the University of Porto and a senior researcher at INESC TEC.

Publications

2022

Acting emotions: physiological correlates of emotional valence and arousal dynamics in theatre

Authors
Aly, L; Bota, P; Godinho, L; Bernardes, G; Silva, H;

Publication
IMX 2022 - Proceedings of the 2022 ACM International Conference on Interactive Media Experiences

2022

Assessing the Influence of Multimodal Feedback in Mobile-Based Musical Task Performance

Authors
Clement, A; Bernardes, G;

Publication
MULTIMODAL TECHNOLOGIES AND INTERACTION

Abstract
Digital musical instruments have become increasingly prevalent in musical creation and production. Optimizing their usability and, particularly, their expressiveness, has become essential to their study and practice. The absence of multimodal feedback, present in traditional acoustic instruments, has been identified as an obstacle to complete performer–instrument interaction, in particular due to the lack of embodied control. Mobile-based digital musical instruments present a particular case by natively providing the possibility of enriching basic auditory feedback with additional multimodal feedback. In the experiment presented in this article, we focused on using visual and haptic feedback to support and enrich auditory content to evaluate the impact on basic musical tasks (i.e., note pitch tuning accuracy and time). The experiment implemented a protocol based on presenting several musical note examples to participants and asking them to reproduce them, with their performance being compared between different multimodal feedback combinations. Collected results show that additional visual feedback reduced user hesitation in pitch tuning, allowing users to reach the proximity of desired notes in less time. Nonetheless, neither visual nor haptic feedback was found to significantly impact pitch tuning time and accuracy compared to auditory-only feedback.
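
The tuning-accuracy measure used in such a protocol lends itself to a brief illustration. The following is a minimal, hypothetical Python sketch, not the study's analysis code: it converts produced versus target frequencies into cents error and averages error and completion time per feedback condition. The condition names and trial values are invented for illustration only.

```python
# Hypothetical sketch: per-condition pitch tuning error (in cents) and time.
# Condition names and example trials are illustrative, not the study's data.
import math

def cents_error(produced_hz, target_hz):
    # 1200 * log2 of the frequency ratio converts the ratio to cents.
    return 1200.0 * math.log2(produced_hz / target_hz)

trials = [
    {"feedback": "audio",        "target": 440.0, "produced": 452.0, "time_s": 6.1},
    {"feedback": "audio+visual", "target": 440.0, "produced": 443.0, "time_s": 4.2},
    {"feedback": "audio+haptic", "target": 440.0, "produced": 447.0, "time_s": 5.7},
]

by_condition = {}
for t in trials:
    by_condition.setdefault(t["feedback"], []).append(
        (abs(cents_error(t["produced"], t["target"])), t["time_s"])
    )

for condition, results in by_condition.items():
    mean_err = sum(e for e, _ in results) / len(results)
    mean_time = sum(s for _, s in results) / len(results)
    print(f"{condition}: {mean_err:.1f} cents, {mean_time:.1f} s")
```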

2022

Emotional Machines

Authors
Forero, J; Bernardes, G; Mendes, M;

Publication
Proceedings of the 30th ACM International Conference on Multimedia

2022

Leveraging compatibility and diversity in computer-aided music mashup creation

Authors
Bernardo, G; Bernardes, G;

Publication
Personal and Ubiquitous Computing

Abstract
We advance Mixmash-AIS, a multimodal optimization music mashup creation model for loop recombination at scale. Our motivation is to (1) tackle current scalability limitations in state-of-the-art (brute force) computational mashup models while enforcing (2) the compatibility of audio loops and (3) a pool of diverse mashups that can accommodate user preferences. To this end, we adopt the artificial immune system (AIS) opt-aiNet algorithm to efficiently compute a population of compatible and diverse music mashups from loop recombinations. Optimal mashups result from local minima in a feature space representing harmonic, rhythmic, and spectral musical audio compatibility. We objectively assess the compatibility, diversity, and computational performance of Mixmash-AIS-generated mashups compared to a standard genetic algorithm (GA) and a brute force (BF) approach. Furthermore, we conducted a perceptual test to validate the objective evaluation function within Mixmash-AIS in capturing user enjoyment of the computer-generated loop mashups. Our results show that while the GA stands as the most efficient algorithm, the AIS opt-aiNet outperforms both the GA and BF approaches in terms of compatibility and diversity. Our listening test has shown that the Mixmash-AIS objective evaluation function significantly captures the perceptual compatibility of loop mashups (p < .001).
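
The search strategy summarized in this abstract can be sketched in a few lines. Below is a minimal, hypothetical Python illustration of an opt-aiNet-style loop (clonal expansion, affinity-proportional mutation, suppression of near-duplicates, and random re-seeding); the compatibility cost, parameters, and function names are placeholders and do not reflect the actual Mixmash-AIS implementation.

```python
# Hypothetical opt-aiNet-style sketch over candidate loop mashups, assuming a
# precomputed compatibility feature space. All names and values are illustrative.
import math
import random

random.seed(42)

def compatibility_cost(candidate):
    # Toy cost: stands in for harmonic, rhythmic, and spectral distance; lower is better.
    return sum((x - 0.5) ** 2 for x in candidate)

def mutate(candidate, cost, beta=1.0):
    # Affinity-proportional mutation: better (lower-cost) candidates mutate less.
    scale = math.exp(-beta * (1.0 / (1.0 + cost)))
    return [min(1.0, max(0.0, x + random.gauss(0, 0.1) * scale)) for x in candidate]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def opt_ainet(dim=4, pop_size=10, clones=5, generations=50, suppression=0.15):
    population = [[random.random() for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        # Clonal expansion: each candidate spawns mutated clones; keep the best of each family.
        next_pop = []
        for cand in population:
            cost = compatibility_cost(cand)
            offspring = [mutate(cand, cost) for _ in range(clones)] + [cand]
            next_pop.append(min(offspring, key=compatibility_cost))
        # Suppression: drop near-duplicates so the pool stays diverse (multiple local minima).
        survivors = []
        for cand in sorted(next_pop, key=compatibility_cost):
            if all(distance(cand, s) > suppression for s in survivors):
                survivors.append(cand)
        # Re-seed with random newcomers to keep exploring the space.
        while len(survivors) < pop_size:
            survivors.append([random.random() for _ in range(dim)])
        population = survivors
    return sorted(population, key=compatibility_cost)

if __name__ == "__main__":
    for mashup in opt_ainet()[:3]:
        print([round(x, 2) for x in mashup], round(compatibility_cost(mashup), 4))
```

The suppression step is what distinguishes this family of algorithms from a standard GA for this task: it retains several distinct low-cost candidates rather than converging on a single best mashup, which is how a diverse pool of user-selectable results is obtained.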

2021

AM-I-BLUES: An Interactive Digital Music Instrument for Guiding Novice Pianist in the Improvisation of Jazz Melodies

Authors
Corintha, I; Outeiro, L; Dias, R; Bernardes, G;

Publication
ADVANCES IN DESIGN, MUSIC AND ARTS, EIMAD 2020

Supervised Theses

2021

Generative Soundscapes for Enhanced Engagement in Non-Invasive Neurorehabilitation Treatment

Author
Aisha Animashaun

Institution
UP-FEUP

2021

Prototipagem de um instrumento musical misto: a expressividade da interface

Author
Henrique Gomes Ferreira

Institution
UP-FEUP

2021

Aumentando a experiência sonora do Maracatu a partir da tecnologia ubíqua

Author
Danielle da Silva Lopes

Institution
UP-FEUP

2021

CoDi: Leveraging Compatibility and Diversity in Computational Mashup Creation from Large Loop Collections

Author
Gonçalo Nuno Botelho Amaral Rolão Bernardo

Institution
UP-FEUP

2021

A Systematic Assessment of Musical Audio Rhythmic Compatibility

Author
Cláudio Fischer Lemos

Institution
UP-FEUP