About

Gilberto Bernardes holds a PhD in Digital Media (2014) from the University of Porto, under the auspices of the University of Texas at Austin, and a Master's degree in Music, cum laude (2008), from the Amsterdamse Hogeschool voor de Kunsten. Bernardes is currently an Assistant Professor at the University of Porto and a Senior Researcher at INESC TEC, where he leads the Sound and Music Computing Lab. He has more than 90 publications, of which 14 are articles in journals with a high impact factor (mostly Q1 and Q2 on Scimago) and 14 are book chapters. Bernardes has worked with 152 international collaborators in co-authoring scientific papers. He has contributed continuously to the training of young scientists: he currently supervises six doctoral theses and has seen more than 40 master's dissertations through to completion.


He has received nine awards, including the Fraunhofer Portugal Prize for the best doctoral thesis and several best-paper awards at conferences (e.g., DCE and CMMR). He has participated in 12 R&D projects as a senior and junior researcher. In the eight years since defending his PhD, Bernardes has attracted competitive funding for a postdoctoral project financed by FCT and an exploratory grant for a market-driven R&D prototype. He currently leads the Portuguese team (Work Package leader) at INESC TEC in the Horizon Europe project EU-DIGIFOLK and in the Erasmus+ project Open Minds. In his artistic activities, Bernardes has performed at renowned music venues such as the Bimhuis, the Concertgebouw, Casa da Música, Berklee College of Music, New York University, and the Seoul Computer Music Festival.

Topics of interest
Details

  • Name

    Gilberto Bernardes Almeida
  • Position

    Senior Researcher
  • Since

    14 July 2014
Publications

2024

Acting Emotions: a comprehensive dataset of elicited emotions

Authors
Aly, L; Godinho, L; Bota, P; Bernardes, G; da Silva, HP;

Publication
SCIENTIFIC DATA

Abstract
Emotions encompass physiological systems that can be assessed through biosignals like electromyography and electrocardiography. Prior investigations in emotion recognition have primarily focused on general population samples, overlooking the specific context of theatre actors who possess exceptional abilities in conveying emotions to an audience, namely acting emotions. We conducted a study involving 11 professional actors to collect physiological data for acting emotions to investigate the correlation between biosignals and emotion expression. Our contribution is the DECEiVeR (DatasEt aCting Emotions Valence aRousal) dataset, a comprehensive collection of various physiological recordings meticulously curated to facilitate the recognition of a set of five emotions. Moreover, we conduct a preliminary analysis on modeling the recognition of acting emotions from raw, low- and mid-level temporal and spectral data and the reliability of physiological data across time. Our dataset aims to leverage a deeper understanding of the intricate interplay between biosignals and emotional expression. It provides valuable insights into acting emotion recognition and affective computing by exposing the degree to which biosignals capture emotions elicited from inner stimuli.

2023

The Singing Bridge: Sonification of a Stress-Ribbon Footbridge

Authors
Torresan, C; Bernardes, G; Caetano, E; Restivo, T;

Publication
Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST

Abstract
Stress-ribbon footbridges are often prone to excessive vibrations induced by environmental phenomena (e.g., wind) and human actions (e.g., walking). This paper studies a stress-ribbon footbridge at the Faculty of Engineering of the University of Porto (FEUP) in Portugal, where different degrees of vertical vibrations are perceptible in response to human actions. We adopt sonification techniques to create a sonic manifestation that shows the footbridge’s dynamic response to human interaction. Two distinct sonification techniques – audification and parameter mapping – are adopted to provide intuitive access to the footbridge dynamics from low-level acceleration data and higher-level spectral analysis. In order to evaluate the proposed sonification techniques in exposing relevant information about human actions on the footbridge, an online perceptual test was conducted to assess the understanding of the three following dimensions: 1) the number of people interacting with the footbridge, 2) their walking speed, and 3) the steadiness of their pace. The online perceptual test was conducted with and without a short training phase. Results of n = 23 participants show that parameter mapping sonification is more effective in promoting an intuitive understanding of the footbridge dynamics compared to audification. Furthermore, when exposed to a short training phase, the participants’ perception improved in identifying the correct dimensions. © 2023, ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering.

2023

Desiring Machines and Affective Virtual Environments

Authors
Forero, J; Bernardes, G; Mendes, M;

Publication
Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST

Abstract
Language is closely related to how we perceive ourselves and signify our reality. In this scope, we created Desiring Machines, an interactive media art project that allows the experience of affective virtual environments adopting speech emotion recognition as the leading input source. Participants can share their emotions by speaking, singing, reciting poetry, or making any vocal sounds to generate virtual environments on the fly. Our contribution combines two machine learning models. We propose a long short-term memory network and a convolutional neural network to predict four main emotional categories from high-level semantic and low-level paralinguistic acoustic features. Predicted emotions are mapped to audiovisual representations by an end-to-end process encoding emotion in virtual environments. We use a generative model of chord progressions to transfer speech emotion into music based on the tonal interval space. Also, we implement a generative adversarial network to synthesize an image from the transcribed speech-to-text. The generated visuals are used as the style image in the style-transfer process onto an equirectangular projection of a spherical panorama selected for each emotional category. The result is an immersive virtual space encapsulating emotions in spheres disposed in a 3D environment. Users can create new affective representations or interact with other previously encoded instances (This ArtsIT publication is an extended version of the earlier abstract presented at the ACM MM22 [1]). © 2023, ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering.

2023

Oral rehabilitation of a saxophone player with orofacial pain: a case report

Authors
Clemente, MP; Mendes, J; Bernardes, G; Van Twillert, H; Ferreira, AP; Amarante, JM;

Publication
JOURNAL OF INTERNATIONAL MEDICAL RESEARCH

Abstract
This paper presents a clinical case study investigating the pattern of a saxophonist's embouchure as a possible origin of orofacial pain. The rehabilitation addressed the dental occlusion and a fracture in a metal ceramic bridge. To evaluate the undesirable loads on the upper teeth, two piezoresistive sensors were placed between the central incisors and the mouthpiece during the embouchure. A newly fixed metal ceramic prosthesis was placed from teeth 13 to 25, and two implants were placed in the premolar zone corresponding to teeth 14 and 15. After the oral rehabilitation, the embouchure force measurements showed that higher stability was promoted by the newly fixed metal-ceramic prosthesis. The musician executed a more symmetric loading of the central incisors (teeth 11 and 21). The functional demands of the saxophone player and consequent application of excessive pressure can significantly influence and modify the metal-ceramic position on the anterior zone teeth 21/22. The contribution of engineering (i.e., monitoring the applied forces on the musician's dental structures) was therefore crucial for the correct assessment and design of the treatment plan.

2023

FluidHarmony: Defining an equal-tempered and hierarchical harmonic lexicon in the Fourier space

Authors
Bernardes, G; Carvalho, N; Pereira, S;

Publication
JOURNAL OF NEW MUSIC RESEARCH

Abstract
FluidHarmony is an algorithmic method for defining a hierarchical harmonic lexicon in equal temperaments. It utilizes an enharmonic weighted Fourier transform space to represent pitch class set (pcsets) relations. The method ranks pcsets based on user-defined constraints: the importance of interval classes (ICs) and a reference pcset. Evaluation of 5,184 Western musical pieces from the 16th to 20th centuries shows FluidHarmony captures 8% of the corpus's harmony in its top pcsets. This highlights the role of ICs and a reference pcset in regulating harmony in Western tonal music while enabling systematic approaches to define hierarchies and establish metrics beyond 12-TET.
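The Fourier space referred to in this abstract builds on the discrete Fourier transform of pitch-class sets, a standard tool in mathematical music theory. As an illustrative sketch only (not the paper's weighted implementation), the following Python snippet computes the DFT magnitudes of a pitch-class set in 12-TET; the six coefficient magnitudes summarize the set's interval-class content:

```python
import numpy as np

def pcset_dft(pcset, edo=12):
    """Magnitudes of DFT coefficients 1..edo//2 of a pitch-class set,
    taken over its characteristic (chroma) vector."""
    chroma = np.zeros(edo)
    chroma[list(pcset)] = 1.0  # mark the pitch classes that are present
    spectrum = np.fft.fft(chroma)
    # Coefficients 1..edo//2 relate to the interval classes; comparing
    # their magnitudes across pcsets supports rankings like FluidHarmony's.
    return np.abs(spectrum[1 : edo // 2 + 1])

# C major triad {0, 4, 7}: six magnitudes, one per interval class in 12-TET
print(pcset_dft({0, 4, 7}))
```

In this representation, transpositions of a pcset share the same magnitudes, which is what makes the space suitable for defining hierarchies over set classes rather than individual chords.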

Supervised theses

2022

The Singing Bridge: Sonification of a Stress-Ribbon Footbridge

Author
Christian Torresan

Institution
UP-FEUP

2022

Content-Based (Re)Creation of Loops for Music Performance

Author
Diogo Miguel Filipe Cocharro

Institution
UP-FEUP

2022

PICSS - Physically-Inspired Concatenative Sound Synthesis Tool for Continuous Sonic Interaction Design

Author
Guilherme Dias Santos Pimenta

Institution
UP-FEUP

2022

Promoting Popular Music Engagement Through Spatial Audio

Author
José Ricardo Barboza

Institution
UP-FEUP

2022

Environmental Awareness Through The Development of Sound Installations

Author
Luís Sequeira Luzia

Institution
UP-FEUP