Publications

2020

Objective Evaluation of Tonal Fitness for Chord Progressions Using the Tonal Interval Space

Authors
Cáceres, MN; Caetano, MF; Bernardes, G;

Publication
Artificial Intelligence in Music, Sound, Art and Design - 9th International Conference, EvoMUSART 2020, Held as Part of EvoStar 2020, Seville, Spain, April 15-17, 2020, Proceedings

Abstract
Chord progressions are core elements of Western tonal harmony regulated by multiple theoretical and perceptual principles. Ideally, objective measures to evaluate chord progressions should reflect their tonal fitness. In this work, we propose an objective measure of the fitness of a chord progression within the Western tonal context computed in the Tonal Interval Space, where distances capture tonal music principles. The measure considers four parameters, namely tonal pitch distance, consonance, hierarchical tension and voice leading between the chords in the progression. We performed a listening test to perceptually assess the proposed tonal fitness measure across different chord progressions, and compared the results with existing related models. The perceptual rating results show that our objective measure improves the estimation of a chord progression’s tonal fitness in comparison with existing models. © Springer Nature Switzerland AG 2020.

2020

Physics-based Concatenative Sound Synthesis of Photogrammetric Models for Aural and Haptic Feedback in Virtual Environments

Authors
Magalhaes, E; Jacob, J; Nilsson, N; Nordahl, R; Bernardes, G;

Publication
Proceedings - 2020 IEEE Conference on Virtual Reality and 3D User Interfaces, VRW 2020

Abstract
We present a novel physics-based concatenative sound synthesis (CSS) methodology for congruent interactions across physical, graphical, aural and haptic modalities in Virtual Environments. Navigation in aural and haptic corpora of annotated audio units is driven by user interactions with highly realistic photogrammetry-based models in a game engine, where automated and interactive positional, physics and graphics data are supported. From a technical perspective, the current contribution expands existing CSS frameworks by avoiding the need to map or mine the annotation data to real-time performance attributes, while guaranteeing degrees of novelty and variation for the same gesture. © 2020 IEEE.
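
As a purely illustrative sketch of the unit-selection idea summarized in this abstract, and not the paper's implementation, a corpus of annotated audio units can be matched against a target descriptor derived from engine-side physics data; every name, descriptor and mapping below is an assumption:

    import math
    import random

    # Hypothetical corpus of annotated audio units: each unit points into a
    # source recording and carries a descriptor vector (e.g. normalized
    # loudness and spectral centroid). All values are illustrative.
    CORPUS = [
        {"file": "wood.wav",  "start_s": 0.00, "descriptor": (0.2, 0.3)},
        {"file": "wood.wav",  "start_s": 0.25, "descriptor": (0.5, 0.4)},
        {"file": "metal.wav", "start_s": 0.00, "descriptor": (0.8, 0.9)},
        {"file": "metal.wav", "start_s": 0.40, "descriptor": (0.7, 0.8)},
    ]

    def select_unit(target, k=2):
        """Return one of the k corpus units closest to the target descriptor.

        Picking randomly among the k nearest, rather than always the single
        nearest, keeps some variation when the same gesture is repeated.
        """
        ranked = sorted(CORPUS, key=lambda u: math.dist(u["descriptor"], target))
        return random.choice(ranked[:k])

    # Example: an impact event from the game engine mapped to a target
    # descriptor (assumed mapping: velocity -> loudness, hardness -> brightness).
    impact_velocity = 0.75
    surface_hardness = 0.9
    print(select_unit((impact_velocity, surface_hardness)))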

2020

Sound design inducing attention in the context of audiovisual immersive environments

Authors
Salselas, I; Penha, R; Bernardes, G;

Publication
Personal and Ubiquitous Computing

Abstract
Sound design has been a fundamental component of audiovisual storytelling in linear media. However, with recent technological developments and the shift towards non-linear and immersive media, things are rapidly changing. More sensory information is available and, at the same time, the user is gaining agency over the narrative, being offered the possibility of navigating or making other decisions. These new characteristics of immersive environments bring new challenges to storytelling in interactive narratives and require new strategies and techniques for audiovisual narrative progression. Can technology offer an immersive environment where the user has the sensation of agency, of choice, where her actions are not mediated by evident controls but subliminally induced in a way that ensures a narrative is being followed? Can sound be a subliminal element that induces attentional focus on the elements most relevant to the narrative, driving storytelling and biasing search in an immersive non-linear audiovisual environment? Herein, we present a literature review that has been guided by this prospect. With these questions in view, we present our exploration process in finding possible answers and potential solution paths. We point out that consistency, in terms of coherency across sensory modalities and emotional matching, may be a critical aspect. Finally, we consider that this review may open up new paths for experimental studies that could, in the future, provide new strategies in the practice of sound design in the context of non-linear media. © 2020, Springer-Verlag London Ltd., part of Springer Nature.

2020

A Computational Model of Tonal Tension Profile of Chord Progressions in the Tonal Interval Space

Authors
Navarro-Cáceres, M; Caetano, M; Bernardes, G; Sánchez-Barba, M; Merchán Sánchez-Jara, J;

Publication
Entropy

Abstract
In tonal music, musical tension is strongly associated with musical expression, particularly with expectations and emotions. Most listeners are able to perceive musical tension subjectively, yet musical tension is difficult to measure objectively, as it is connected with musical parameters such as rhythm, dynamics, melody, harmony, and timbre. Musical tension specifically associated with melodic and harmonic motion is called tonal tension. In this article, we are interested in perceived changes of tonal tension over time for chord progressions, dubbed tonal tension profiles. We propose an objective measure capable of capturing the tension profile according to different tonal music parameters, namely, tonal distance, dissonance, voice leading, and hierarchical tension. We performed two experiments to validate the proposed model of the tonal tension profile and compared it against Lerdahl’s model and MorpheuS across 12 chord progressions. Our results show that the four tonal parameters considered contribute differently to the perception of tonal tension. In our model, their relative importance adopts the following weights, summing to unity: dissonance (0.402), hierarchical tension (0.246), tonal distance (0.202), and voice leading (0.193). The assumption that listeners perceive global changes in tonal tension as prototypical profiles is strongly suggested in our results, which outperform the state-of-the-art models.
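
As a purely illustrative sketch of how the reported weights could combine the four parameters into a per-transition tension value (the normalization and the linear aggregation are assumptions, not the authors' implementation):

    # Published weights for the four tonal parameters; the normalization of
    # each parameter and the aggregation below are assumptions for illustration.
    WEIGHTS = {
        "dissonance": 0.402,
        "hierarchical_tension": 0.246,
        "tonal_distance": 0.202,
        "voice_leading": 0.193,
    }

    def tension_profile(progression_features):
        """Return one tension value per chord transition.

        progression_features: list of dicts, one per transition, mapping each
        parameter name to a value assumed to be normalized to [0, 1].
        """
        return [
            sum(WEIGHTS[name] * features[name] for name in WEIGHTS)
            for features in progression_features
        ]

    # Toy example with three chord transitions.
    profile = tension_profile([
        {"dissonance": 0.1, "hierarchical_tension": 0.2, "tonal_distance": 0.1, "voice_leading": 0.3},
        {"dissonance": 0.6, "hierarchical_tension": 0.5, "tonal_distance": 0.4, "voice_leading": 0.5},
        {"dissonance": 0.2, "hierarchical_tension": 0.1, "tonal_distance": 0.2, "voice_leading": 0.2},
    ])
    print(profile)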

2019

A new classification of wind instruments: Orofacial considerations

Authors
Clemente, M; Mendes, J; Moreira, A; Bernardes, G; Van Twillert, H; Ferreira, A; Amarante, JM;

Publication
Journal of Oral Biology and Craniofacial Research

Abstract
Background/objective: Playing a wind instrument involves rhythmic jaw movements in which the embouchure applies forces of different directions and intensities to the orofacial structures. These features are relevant when comparing the embouchure of a clarinettist with that of a saxophonist, even though both instruments belong to the single-reed group, making it necessary to update the current classification. Methods: Lateral cephalograms were taken of single-reed, double-reed and brass instrumentalists with the purpose of analyzing the relationship between the mouthpiece and the orofacial structures. Results: The comparison of the different wind instruments showed substantial differences. The authors therefore propose a new classification of wind instruments: Class 1, single-reed mouthpiece, division 1 – clarinet, division 2 – saxophone; Class 2, double-reed instruments, division 1 – oboe, division 2 – bassoon; Class 3, cup-shaped mouthpiece, division 1 – trumpet and French horn, division 2 – trombone and tuba; Class 4, aperture mouthpieces, division 1 – flute, division 2 – transverse flute and piccolo. Conclusions: Elements such as the dental arches, teeth and lips assume vital importance in a new nomenclature and classification of wind instruments, which in the past were classified mainly by the type of mouthpiece, without taking into consideration its relationship with the neighboring structures. © 2019 Craniofacial Research Foundation

Supervised theses

2017

Moto-Var: towards new paths in interactive-assisted composition

Author
Alonso Torres-Matarrita

Institution
UP-FEUP

2017

Computer Sound Transformations Guided by Perceptually Motivated Features

Author
Nuno Figueiredo Pires

Institution
UP-FEUP

2017

Musically-Informed Adaptive Audio Reverberation

Author
João Paulo Caetano Pereira Carvalheira Neves

Institution
UP-FEUP

2017

Compor para Aprender (Composing to Learn)

Author
Isabela Corintha de Almeida

Institution
UP-FEUP

2017

Performative sound design

Author
Luís Alberto Teixeira Aly

Institution
UP-FEUP