Publications

Publications by CTM

2019

MixMash

Authors
Maçãs, C; Rodrigues, A; Bernardes, G; Machado, P;

Publication
International Journal of Art, Culture and Design Technologies

Abstract
This article presents MixMash, an interactive tool that streamlines music mashup creation by assisting users in finding compatible music from a large collection of audio tracks. It extends the harmonic mixing method by Bernardes, Davies and Guedes with novel measures of harmonic, rhythmic, spectral, and timbral similarity. Furthermore, it revises and improves interface design limitations identified in the earlier software implementation of the model. A new user interface design based on cross-modal associations between musical content analysis and information visualisation is presented. In this graphic model, all tracks are represented as nodes in a force-directed graph, whose distances and edge connections display the tracks' harmonic compatibility. In addition, a visual language is defined to enhance the tool's usability and foster creative endeavour in the search for meaningful music mashups.
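
As a rough illustration of the force-directed graph model described above, the sketch below lays out a handful of tracks so that more compatible ones land closer together. It is a minimal sketch, not the authors' implementation: the track names and similarity scores are hypothetical placeholders, and networkx's spring_layout stands in for the paper's force-directed layout.

```python
# Tracks are nodes; pairwise compatibility drives a force-directed layout
# so that compatible tracks end up close together in the plane.
import networkx as nx

# Hypothetical combined harmonic/rhythmic/spectral/timbral similarity in [0, 1].
similarity = {
    ("track_a", "track_b"): 0.9,
    ("track_a", "track_c"): 0.2,
    ("track_b", "track_c"): 0.5,
}

G = nx.Graph()
for (u, v), s in similarity.items():
    G.add_edge(u, v, weight=s)  # higher weight -> stronger attraction

# spring_layout is a force-directed algorithm: edge weight acts as spring
# strength, so highly compatible tracks are pulled closer together.
positions = nx.spring_layout(G, weight="weight", seed=42)
for node, (x, y) in positions.items():
    print(f"{node}: ({x:+.2f}, {y:+.2f})")
```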

2019

Qualia: A software for guided meta-improvisation performance

Authors
Brásio, M; Lopes, F; Bernardes, G; Penha, R;

Publication
Proceedings of the 14th Sound and Music Computing Conference 2017, SMC 2017

Abstract
In this paper we present Qualia, a software for real-time generation of graphical scores driven by the audio analysis of the performance of a group of musicians. With Qualia, the composer analyses and maps the flux of data to specific score instructions, thus becoming part of the performance itself. Qualia is intended for collaborative performances. In this context, the creative process of composing music not only challenges musicians to improvise collaboratively through active listening, as is typical, but also requires them to interpret the graphical instructions provided by Qualia. The performance is then an interactive process based on “feedback” between the sound produced by the musicians, the flow of data managed by the composer, and the corresponding graphical output interpreted by each musician. Qualia supports the exploration of relationships between composition and performance, promoting engagement strategies in which musicians participate actively using their instruments.
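
To make the mapping step described above concrete, the sketch below quantises a single audio feature (an RMS loudness estimate) into a discrete score instruction. This is a minimal illustration under assumed names: the feature choice, thresholds, and instruction labels are hypothetical, not Qualia's actual analysis or vocabulary.

```python
# Minimal sketch: one real-time audio feature (loudness) is quantised into
# a discrete graphical score instruction for each musician.
import math

def rms(samples):
    """Root-mean-square level of one analysis frame."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def instruction_for(level, thresholds=(0.05, 0.2, 0.5)):
    """Map a loudness level to a (hypothetical) score instruction."""
    labels = ["rest", "sparse", "dense", "tutti"]
    for label, t in zip(labels, thresholds):
        if level < t:
            return label
    return labels[-1]

# A quiet 440 Hz test tone as one analysis frame.
frame = [0.1 * math.sin(2 * math.pi * 440 * n / 44100) for n in range(512)]
print(instruction_for(rms(frame)))  # -> "sparse" for this quiet test tone
```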

2019

Beatings: A web application to foster the renaissance of the art of musical temperaments

Authors
Penha, R; Bernardes, G;

Publication
SMC 2016 - 13th Sound and Music Computing Conference, Proceedings

Abstract
In this article we present Beatings, a web application for the exploration of tunings and temperaments. It pays particular attention to the auditory phenomena resulting from the interaction of the spectral components of a sound, especially the pitch fusion and the amplitude modulations occurring between spectral peaks within a critical bandwidth of each other. By providing a simple yet effective visualisation of the temporal evolution of these auditory phenomena, we aim to foster new research in the pursuit of perceptually grounded principles explaining Western tonal harmonic syntax. We also aim to provide a tool for musical practice and education, areas where the old art of musical tunings and temperaments, with the notable exception of early music studies, appears to have long been neglected in favour of the practical advantages of equal temperament.
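
The beating phenomenon the application visualises can be worked through numerically. The sketch below compares an equal-tempered fifth with a just (3:2) fifth above A4 = 440 Hz: the nearly coinciding harmonics of the tempered interval pulse at their difference frequency, while the just interval's harmonics coincide exactly. This is a standard acoustics illustration, not code from the application itself.

```python
# In an equal-tempered fifth A4-E5, the 3rd harmonic of A4 and the 2nd
# harmonic of E5 nearly coincide and beat at the difference frequency.
# In just intonation (3:2) they coincide exactly and the beating vanishes.
a4 = 440.0
e5_equal = a4 * 2 ** (7 / 12)   # equal-tempered fifth, ~659.255 Hz
e5_just = a4 * 3 / 2            # just fifth, 660 Hz

h3_a4 = 3 * a4                  # 3rd harmonic of A4: 1320 Hz
h2_e5 = 2 * e5_equal            # 2nd harmonic of tempered E5: ~1318.51 Hz

print(f"beat rate (tempered fifth): {abs(h3_a4 - h2_e5):.2f} Hz")   # ~1.49
print(f"beat rate (just fifth):     {abs(h3_a4 - 2 * e5_just):.2f} Hz")  # 0.00
```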

2019

Combining texture-derived vibrotactile feedback, concatenative synthesis and photogrammetry for virtual reality rendering

Authors
Magalhães, E; Høeg, ER; Bernardes, G; Bruun Pedersen, JR; Serafin, S; Nordahl, R;

Publication
Proceedings of the Sound and Music Computing Conferences

Abstract
This paper describes a novel framework for real-time sonification of surface textures in virtual reality (VR), aimed at realistically representing the experience of driving over a virtual surface. A combination of techniques for capturing real-world surfaces is used to map 3D geometry, texture maps, and auditory and vibrotactile feedback attributes. For the sonification rendering, we propose using information primarily from graphical texture features to define target units in concatenative sound synthesis. To foster models that go beyond the current generation of simple sound textures (e.g., wind, rain, fire) towards highly “synchronized” and expressive scenarios, our contribution outlines a framework for higher-level modeling of a bicycle's kinematic rolling on ground contact, with enhanced perceptual symbiosis between auditory, visual and vibrotactile stimuli. We scanned two surfaces, represented as texture maps, with different features, morphology, and matching navigation. We define target trajectories in a two-dimensional audio feature space, according to a temporal model and the morphological attributes of the surfaces. This synthesis method serves two purposes: real-time auditory feedback, and vibrotactile feedback induced by playing back the concatenated sound samples through a vibrotactile inducer speaker.
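
The unit-selection step of concatenative synthesis mentioned above can be sketched as nearest-neighbour matching in a feature space. In the snippet below, the corpus values, feature dimensions, and target trajectory are hypothetical placeholders, not data or code from the paper.

```python
# Minimal sketch of concatenative unit selection: a target trajectory in a
# 2D audio-feature space is matched to the nearest analysed sound unit at
# each step; the matched units are then concatenated for playback.
import math

# Hypothetical corpus: sound-unit index -> (feature_1, feature_2).
corpus = {0: (0.1, 0.2), 1: (0.4, 0.8), 2: (0.7, 0.3), 3: (0.9, 0.9)}

def nearest_unit(target):
    """Return the corpus unit closest to the target feature point."""
    return min(corpus, key=lambda i: math.dist(corpus[i], target))

# Target trajectory derived (hypothetically) from surface texture features.
trajectory = [(0.2, 0.3), (0.5, 0.6), (0.8, 0.8)]
playlist = [nearest_unit(t) for t in trajectory]
print(playlist)  # -> [0, 1, 3]: the units to concatenate for playback
```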

2019

Never the less: a performance on networked art

Authors
Arandas, L; Gomes, JA; Bernardes, G; Penha, R;

Publication
Proceedings of the 7th Conference on Computation - Communication Aesthetics & X - XCOAX 2019

Abstract
Never The Less is a live audio-visual (A/V) networked performance in which participants interact remotely and collaboratively. It adopts the newly proposed web-based A/V system Akson, designed for an internet infrastructure, which allows both musical and visual content generation and interaction across multiple devices in remote locations. The system was built with great emphasis on live performance and human collaboration, where experts and non-experts (i.e., artists and the public) participate on an equal footing.

2019

Joint User Mobility and Traffic Characterization in Temporary Crowded Events

Authors
Valadar, A; Almeida, EN; Mamede, J;

Publication
CoRR

Abstract
