2018
Authors
Macas, C; Rodrigues, A; Bernardes, G; Machado, P;
Publication
2018 22ND INTERNATIONAL CONFERENCE INFORMATION VISUALISATION (IV)
Abstract
We present MixMash, an interactive tool that assists users in creating music mashups based on cross-modal associations between musical content analysis and information visualisation. Our point of departure is the harmonic mixing method for musical mashups by Bernardes et al. [1]. To surpass design limitations identified in that method, we propose a new interactive visualisation of multidimensional musical attributes (hierarchical harmonic compatibility, onset density, spectral region, and timbral similarity) extracted from a large collection of audio tracks. All tracks are represented as nodes whose distances and edge connections indicate their harmonic compatibility, as computed by a force-directed graph. In addition, we provide a visual language that aims to enhance the tool's usability and foster creative endeavour in the search for meaningful music mixes.
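The force-directed placement described above (node distances reflecting pairwise harmonic compatibility) can be sketched as a minimal spring embedder. The function name, force constants, and compatibility matrix below are illustrative assumptions, not MixMash's actual implementation:

```python
import math
import random

def force_directed_layout(n_nodes, compatibility, iterations=200, step=0.05):
    """Minimal spring embedder: all node pairs repel, while pairs with high
    compatibility are pulled together, so compatible tracks end up closer.
    `compatibility` is a symmetric n_nodes x n_nodes matrix in [0, 1]."""
    random.seed(0)  # deterministic layout for this sketch
    pos = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(n_nodes)]
    for _ in range(iterations):
        disp = [[0.0, 0.0] for _ in range(n_nodes)]
        for i in range(n_nodes):
            for j in range(i + 1, n_nodes):
                dx = pos[i][0] - pos[j][0]
                dy = pos[i][1] - pos[j][1]
                dist = math.hypot(dx, dy) or 1e-9
                # net force: inverse-square repulsion minus spring attraction
                # scaled by how compatible the two tracks are
                f = 0.01 / dist ** 2 - compatibility[i][j] * dist
                fx, fy = f * dx / dist, f * dy / dist
                disp[i][0] += fx; disp[i][1] += fy
                disp[j][0] -= fx; disp[j][1] -= fy
        for i in range(n_nodes):
            pos[i][0] += step * disp[i][0]
            pos[i][1] += step * disp[i][1]
    return pos
```

With a toy matrix where tracks 0 and 1 are highly compatible (1.0) and track 2 is not (0.1), the layout places nodes 0 and 1 closer together than 0 and 2, which is exactly the visual cue the abstract describes.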
2018
Authors
Lopes, Filipe; Bernardes, Gilberto; Cardoso, Clara;
Publication
4th International Conference on Live Interfaces: Inspiration, Performance, Emancipation
Abstract
We present Variações sobre Espaço #6, a mixed media work for saxophone and electronics that intersects music, digital technologies and architecture.
The creative impetus supporting this composition is grounded in the interchange of the following two concepts: 1) the phenomenological exploration of the aural architecture (Blesser & Salter 2007), particularly the reverberation as a sonic effect (Augoyard & Torgue 2005), through music performance, and 2) the real-time sound analysis of the intervallic content of both the performance and the reverberation (i.e. impulse responses), which ultimately leads to a generic control over consonance/dissonance (C/D). Their conceptual and morphological nature can be understood as sonic improvisations where the interaction of sound-producing bodies (i.e. the saxophone) with the real (e.g. the performance space) and the imaginary (i.e. the computer) acoustic response of a space results in formal elements mirroring their physical surroundings.
2018
Authors
Bernardes, Gilberto; Lopes, Filipe; Cardoso, Clara;
Publication
Resonate, Thinking Sound and Space
Abstract
We present "Soniferous Resonances", an ongoing collection of electroacoustic compositions
that intersect music, digital technologies and architecture. The creative impetus
supporting this research is grounded in the interchange of the following two concepts: 1) the
phenomenological exploration of the aural architecture [1], particularly the reverberation as a
sonic effect [2] through music performance, and 2) the real-time sound analysis of the
intervallic content of both the performance and the reverberation (i.e. impulse responses),
which ultimately leads to a generic control over consonance/dissonance (C/D). Their conceptual
and morphological nature can be understood as sonic improvisations where the interaction
of sound producing bodies (e.g. saxophone) with the real (e.g. performance space) and the
imaginary (i.e. computer) acoustic response of a space results in formal elements mirroring
their physical surroundings.
Particular emphasis is given to spectromorphological manipulations by a large array of
“contrasting” digital reverberations with extended control over the sound mass [3] and its
musical interval content across a continuum from pitched and consonant to unpitched
and dissonant sounds. Two digital applications developed by the authors are seminal in
"Soniferous Resonances": Wallace [4] and MusikVerb [5]. The first is a navigable user-control
surface that offers a fluid manipulation of audio signals to be convolved with several
“contrasting” digital reverberations. The second offers refined (compositional) control over
the interval content and/or C/D levels computed from the perceptually-inspired Tonal Interval
Space [6], resulting in an automatic adaptation of harmonic content in real time.
"Soniferous Resonances" aims to push the boundaries of musical performances that are
formally tied to their surrounding space, as well as triggering new concepts and greater
awareness about the sublime qualities of experiencing aural architecture.
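At its core, the reverberation processing both works rely on amounts to convolving a dry signal with a measured impulse response: each IR sample becomes a scaled, delayed echo of the input. The offline, pure-Python sketch below is illustrative only, not the authors' Wallace implementation (which operates in real time):

```python
def convolve(signal, impulse_response):
    """Offline convolution reverb: the output is the superposition of the
    impulse response, scaled and delayed by every input sample."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for n, x in enumerate(signal):
        for k, h in enumerate(impulse_response):
            out[n + k] += x * h
    return out
```

A unit impulse played through this process returns the impulse response itself, which is why recorded impulse responses fully characterise a room's reverberation; real-time implementations use FFT-based partitioned convolution instead of this quadratic loop.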
2018
Authors
Sequeira A.F.; Chen L.; Ferryman J.; Wild P.; Alonso-Fernandez F.; Bigun J.; Raja K.B.; Raghavendra R.; Busch C.; De Freitas Pereira T.; Marcel S.; Behera S.S.; Gour M.; Kanhangad V.;
Publication
IEEE International Joint Conference on Biometrics, IJCB 2017
Abstract
This work presents the 2nd Cross-Spectrum Iris/Periocular Recognition Competition (Cross-Eyed2017). The main goal of the competition is to promote and evaluate advances in cross-spectrum iris and periocular recognition. This second edition registered an increase in participation, with entrants ranging from academia to industry: five teams submitted twelve methods for the periocular task and five for the iris task. The benchmark dataset is an enlarged version of the dual-spectrum database containing both iris and periocular images synchronously captured from a distance and within a realistic indoor environment. The evaluation was performed on an undisclosed test set. The methodology, tested algorithms, and obtained results are reported in this paper, identifying the remaining challenges in the path forward.
2018
Authors
Hofbauer H.; Jalilian E.; Sequeira A.F.; Ferryman J.; Uhl A.;
Publication
2018 IEEE 9th International Conference on Biometrics Theory, Applications and Systems, BTAS 2018
Abstract
The spread of biometric applications in mobile devices handled by untrained users has opened the door to sources of noise in mobile iris recognition, such as a larger extent of rotation during capture and more off-angle imagery than is found in more constrained acquisition settings. Because existing methods struggle with such large degrees of freedom, segmentation errors often increase. In this work, a new near-infrared iris dataset captured with a mobile device is evaluated to analyse, in particular, the rotation observed in images and its impact on segmentation and biometric recognition accuracy. For this study, a manually annotated ground-truth segmentation was used, which will be published in tandem with the paper. As in most research challenges in biometrics and computer vision, deep learning techniques are proving to outperform classical segmentation methods. The utilization of parameterized CNN-based iris segmentations in biometric recognition is a new but promising field. The results presented show how this CNN-based approach outperformed traditional segmentation methods with respect to overall recognition accuracy for the dataset under investigation.
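A standard way to score a predicted segmentation mask against a manually annotated ground truth such as the one published with this paper is intersection-over-union (Jaccard index). The sketch below is a generic illustration of that metric, not the paper's evaluation code:

```python
def iou(pred, truth):
    """Intersection-over-union between two binary masks, given as flat
    lists of 0/1 pixel labels of equal length. Returns a value in [0, 1];
    two empty masks are treated as a perfect match."""
    inter = sum(p & t for p, t in zip(pred, truth))  # pixels both mark as iris
    union = sum(p | t for p, t in zip(pred, truth))  # pixels either marks as iris
    return inter / union if union else 1.0
```

Segmentation-level IoU and end-to-end recognition accuracy need not agree, which is precisely why the paper evaluates the CNN-based segmentation's impact on the full recognition pipeline rather than on mask overlap alone.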
2018
Authors
Sequeira, AF; Chen, L; Ferryman, J; Galdi, C; Chiesa, V; Dugelay, JL; Maik, P; Gmitrowicz, P; Szklarski, L; Prommegger, B; Kauba, C; Kirchgasser, S; Uhl, A; Grudzień, A; Kowalski, M;
Publication
2018 International Conference of the Biometrics Special Interest Group, BIOSIG 2018
Abstract
This work presents a novel multimodal database comprising 3D face, 2D face, thermal face, visible iris, finger and hand veins, voice, and anthropometrics. With its number and variety of biometric traits, this dataset will constitute a valuable resource for the field. Acquired in the context of the EU PROTECT project, the dataset allows several combinations of biometric traits and envisages applications such as border control. Based upon the results of the unimodal data, a fusion scheme was applied to ascertain the recognition potential of combining these biometric traits in a multimodal approach. Due to the variability in the discriminative power of the traits, a leave-the-n-best-out fusion technique was applied to obtain different recognition results.
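The abstract does not spell out the exact fusion rule, but a leave-the-n-best-out scheme can be illustrated as dropping the n most discriminative traits before score-level fusion, so the contribution of the weaker traits becomes measurable. Every name, score, and weighting choice below is a hypothetical assumption for illustration only:

```python
def fuse_leave_n_best_out(scores, weights, n):
    """Score-level fusion that leaves out the n traits with the highest
    discriminative weight and averages the match scores of the rest.
    `scores` and `weights` are parallel lists, one entry per biometric trait."""
    # rank trait indices by discriminative power, strongest first
    ranked = sorted(range(len(scores)), key=lambda i: weights[i], reverse=True)
    kept = ranked[n:]  # discard the n best traits
    if not kept:
        return 0.0
    return sum(scores[i] for i in kept) / len(kept)
```

With n = 0 this reduces to plain mean fusion over all traits; increasing n reveals how recognition degrades as the strongest modalities are removed, which is the kind of comparison the different recognition results in the paper are built on.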