2019
Authors
Clemente, M; Mendes, J; Moreira, A; Bernardes, G; Van Twillert, H; Ferreira, A; Amarante, JM;
Publication
Journal of Oral Biology and Craniofacial Research
Abstract
Background/objective: Playing a wind instrument involves rhythmic jaw movements in which the embouchure applies forces of different directions and intensities to the orofacial structures. These features are relevant when comparing the embouchure of a clarinettist with that of a saxophonist: although both instruments belong to the single-reed group, their embouchures differ, making it necessary to update the current classification. Methods: Lateral cephalograms were taken of single-reed, double-reed and brass instrumentalists in order to analyse the relationship between the mouthpiece and the orofacial structures. Results: The comparison of the different wind instruments showed substantial differences. The authors therefore propose a new classification of wind instruments: Class 1, single-reed mouthpiece (division 1: clarinet; division 2: saxophone); Class 2, double-reed instruments (division 1: oboe; division 2: bassoon); Class 3, cup-shaped mouthpiece (division 1: trumpet and French horn; division 2: trombone and tuba); Class 4, aperture mouthpieces (division 1: flute; division 2: transversal flute and piccolo). Conclusions: Elements such as the dental arches, teeth and lips assume vital importance in a new nomenclature and classification of wind instruments, which in the past were classified mainly by the type of mouthpiece, without taking into consideration its relationship with the neighbouring structures.
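The proposed taxonomy is a simple two-level hierarchy, which can be captured as a lookup structure. The sketch below is purely illustrative; the dictionary layout, key names, and the classify helper are this example's own choices, not anything from the paper.

```python
# Illustrative encoding of the classification proposed in the abstract.
# The structure and names here are assumptions made for this example.
WIND_INSTRUMENT_CLASSES = {
    1: {"mouthpiece": "single-reed", "divisions": {1: ["clarinet"], 2: ["saxophone"]}},
    2: {"mouthpiece": "double-reed", "divisions": {1: ["oboe"], 2: ["bassoon"]}},
    3: {"mouthpiece": "cup-shaped", "divisions": {1: ["trumpet", "French horn"], 2: ["trombone", "tuba"]}},
    4: {"mouthpiece": "aperture", "divisions": {1: ["flute"], 2: ["transversal flute", "piccolo"]}},
}

def classify(instrument: str):
    """Return (class, division) for a known instrument, else None."""
    for cls, info in WIND_INSTRUMENT_CLASSES.items():
        for div, names in info["divisions"].items():
            if instrument in names:
                return cls, div
    return None

print(classify("saxophone"))  # -> (1, 2)
```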
2019
Authors
Bernardes, G; Aly, L; Davies, MEP;
Publication
SMC 2016 - 13th Sound and Music Computing Conference, Proceedings
Abstract
In this paper we present SEED, a generative system capable of arbitrarily extending recorded environmental sounds while preserving their inherent structure. The system architecture is grounded in concepts from concatenative sound synthesis and includes three top-level modules for segmentation, analysis, and generation. An input audio signal is first temporally segmented into a collection of audio segments, which are then reduced to a dictionary of audio classes by means of an agglomerative clustering algorithm. This representation, together with a concatenation cost between audio segment boundaries, is finally used to generate sequences of audio segments of arbitrarily long duration. The system output can be varied in the generation process through simple yet effective parametric control, yielding natural, temporally coherent, and varied audio renderings of environmental sounds.
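The pipeline described in the abstract (segment, cluster segments into classes, then chain segments via a concatenation cost) can be sketched as below. This is not the authors' implementation: the feature representation, the cost definition, and the use of numpy and scikit-learn are all assumptions made for illustration.

```python
# Minimal sketch of a SEED-like pipeline (not the authors' code).
# Segments are stand-ins, described by arbitrary feature vectors.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)

# 1) Segmentation (stand-in): 50 segments, each a 12-d feature vector.
features = rng.normal(size=(50, 12))

# 2) Analysis: reduce the segments to a dictionary of audio classes.
labels = AgglomerativeClustering(n_clusters=8).fit_predict(features)

# 3) Generation: choose each next segment by a concatenation cost,
# here the feature distance to candidates from the same class
# (a simplifying assumption about how the cost is defined).
def next_segment(current: int) -> int:
    same_class = np.flatnonzero(labels == labels[current])
    costs = np.linalg.norm(features[same_class] - features[current], axis=1)
    costs[same_class == current] = np.inf  # avoid repeating the segment
    if np.isinf(costs.min()):
        return int(rng.integers(len(features)))  # singleton class: jump anywhere
    return int(same_class[np.argmin(costs)])

sequence = [0]
for _ in range(15):  # arbitrarily long output: keep appending segments
    sequence.append(next_segment(sequence[-1]))
print(sequence)
```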
2019
Authors
Navarro Caceres, M; Caetano, M; Bernardes, G; de Castro, LN;
Publication
SWARM AND EVOLUTIONARY COMPUTATION
Abstract
Chord progressions play an important role in Western tonal music. For a novice composer, the creation of chord progressions can be challenging because it involves many subjective factors, such as the musical context, personal preference, and aesthetic choices. This work proposes ChordAIS, an interactive system that assists the user in generating chord progressions by iteratively adding new chords. At each iteration, a search for the next candidate chord is performed in the Tonal Interval Space (TIS), where distances capture perceptual features of pitch configurations on different levels, such as musical notes, chords, and scales. We use an artificial immune system (AIS) called opt-aiNet to search for candidate chords by optimizing an objective function that encodes desirable musical properties of chord progressions as distances in the TIS. Opt-aiNet is capable of finding multiple optima of multi-modal functions simultaneously, resulting in multiple good-quality candidate chords that can be added to the progression by the user. To validate ChordAIS, we performed different experiments and a listening test to evaluate the perceptual quality of the candidate chords proposed by ChordAIS. Most listeners rated the chords proposed by ChordAIS as better candidates for progressions than the chords discarded by ChordAIS. We then compared ChordAIS with two similar systems, ConChord and ChordGA, the latter of which uses a standard genetic algorithm (GA) instead of opt-aiNet. A user test showed that ChordAIS was preferred over both ChordGA and ConChord. According to the results, ChordAIS was deemed capable of assisting users in the generation of tonal chord progressions by proposing good-quality candidates in all the keys tested.
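The TIS underlying ChordAIS represents a pitch configuration by DFT coefficients of its chroma vector, so perceptual relatedness becomes Euclidean distance. The sketch below computes such distances; note the published space applies empirically derived weights to each coefficient, which are omitted here (all set to 1), and the opt-aiNet search itself is not shown, so the numbers are only indicative.

```python
# Hedged sketch: distances between chords in a TIS-like space.
# Real TIS weights each DFT coefficient; weights are omitted here.
import numpy as np

def tiv(chroma: np.ndarray) -> np.ndarray:
    """DFT coefficients k=1..6 of an energy-normalised 12-d chroma."""
    c = chroma / chroma.sum()
    return np.fft.fft(c)[1:7]

def tis_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean distance between two pitch configurations."""
    return float(np.linalg.norm(tiv(a) - tiv(b)))

def chord(*pitch_classes: int) -> np.ndarray:
    """Binary chroma vector for a set of pitch classes (C = 0)."""
    v = np.zeros(12)
    v[list(pitch_classes)] = 1.0
    return v

c_major, a_minor, f_sharp = chord(0, 4, 7), chord(9, 0, 4), chord(6, 10, 1)
print(tis_distance(c_major, a_minor))  # small: closely related chords
print(tis_distance(c_major, f_sharp))  # larger: distant chords
```

An objective function over such distances (e.g. penalising candidates too far from the previous chord and from the key's scale vector) is the kind of quantity opt-aiNet would optimise.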
2019
Authors
Maçãs, C; Rodrigues, A; Bernardes, G; Machado, P;
Publication
International Journal of Art, Culture and Design Technologies
Abstract
2019
Authors
Brásio, M; Lopes, F; Bernardes, G; Penha, R;
Publication
Proceedings of the 14th Sound and Music Computing Conference 2017, SMC 2017
Abstract
In this paper we present Qualia, a software for the real-time generation of graphical scores driven by the audio analysis of the performance of a group of musicians. With Qualia, the composer analyses and maps the flux of data to specific score instructions, thus becoming part of the performance itself. Qualia is intended for collaborative performances. In this context, the creative process of composing music not only challenges musicians to improvise collaboratively through active listening, as is typical, but also requires them to interpret the graphical instructions provided by Qualia. The performance is then an interactive process based on feedback between the sound produced by the musicians, the flow of data managed by the composer, and the corresponding graphical output interpreted by each musician. Qualia supports the exploration of relationships between composition and performance, promoting engagement strategies in which musicians participate actively using their instruments.
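The core loop such a system implies (analyse incoming audio, map descriptors to score instructions) might look like the sketch below. The abstract does not specify which features or mappings Qualia uses; the descriptors (RMS, spectral centroid), the mapping ranges, and the instruction fields here are all hypothetical.

```python
# Hedged sketch of a Qualia-style mapping from audio features to
# graphical score instructions. Features and mappings are assumptions,
# not the authors' design.
import numpy as np

SAMPLE_RATE = 44100

def analyse(frame: np.ndarray) -> dict:
    """Extract two simple descriptors from one audio frame."""
    rms = float(np.sqrt(np.mean(frame ** 2)))
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / SAMPLE_RATE)
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))
    return {"rms": rms, "centroid": centroid}

def to_instruction(desc: dict) -> dict:
    """Map descriptors to a hypothetical graphical instruction."""
    return {
        "symbol_size": min(1.0, desc["rms"] * 10.0),        # louder -> larger
        "vertical_pos": min(1.0, desc["centroid"] / 5000.0), # brighter -> higher
    }

frame = np.random.default_rng(1).normal(scale=0.1, size=2048)
print(to_instruction(analyse(frame)))
```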
2019
Authors
Penha, R; Bernardes, G;
Publication
SMC 2016 - 13th Sound and Music Computing Conference, Proceedings
Abstract
In this article we present beatings, a web application for the exploration of tunings and temperaments that pays particular attention to the auditory phenomena resulting from the interaction of the spectral components of a sound, notably the pitch fusion and the amplitude modulations occurring between spectral peaks less than a critical bandwidth apart. By providing a simple yet effective visualization of the temporal evolution of these auditory phenomena, we aim to foster new research in the pursuit of perceptually grounded principles explaining Western tonal harmonic syntax. We also aim to provide a tool for musical practice and education, areas where the old art of musical tunings and temperaments, with the notable exception of early music studies, appears to have long been neglected in favour of the practical advantages of equal temperament.
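The calculation at the heart of such a tool is small: two partials beat at the absolute difference of their frequencies, and the interaction matters perceptually when that difference falls within one critical band. The sketch below uses the Glasberg and Moore (1990) ERB formula as a stand-in for whichever critical-bandwidth model the application actually uses.

```python
# Hedged sketch of the beating calculation between two partials.
# The ERB formula stands in for the app's actual critical-band model.
def erb(f_hz: float) -> float:
    """Equivalent rectangular bandwidth around frequency f, in Hz."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def beating(f1: float, f2: float) -> dict:
    """Beat rate of two partials and their critical-band relation."""
    centre = (f1 + f2) / 2.0
    return {
        "beat_rate_hz": abs(f1 - f2),
        "within_critical_band": abs(f1 - f2) < erb(centre),
    }

# Example: the equal-tempered fifth above A=220 Hz beats slowly
# against the just fifth (3:2) above the same fundamental.
just_fifth = 220.0 * 3 / 2            # 330.00 Hz
equal_fifth = 220.0 * 2 ** (7 / 12)   # ~329.63 Hz
print(beating(just_fifth, equal_fifth))  # ~0.37 Hz beating, same band
```

This roughly 0.37 Hz amplitude modulation is exactly the kind of slow temporal evolution the visualization makes audible distinctions visible for.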