2012
Authors
Costa, P; Fernandes, H; Martins, P; Barroso, J; Hadjileontiadis, LJ;
Publication
PROCEEDINGS OF THE 4TH INTERNATIONAL CONFERENCE ON SOFTWARE DEVELOPMENT FOR ENHANCING ACCESSIBILITY AND FIGHTING INFO-EXCLUSION (DSAI 2012)
Abstract
Assistive technology enables people to achieve independence when performing daily tasks and it enhances their overall quality of life. Visual information is the basis for most navigational tasks, so visually impaired individuals are at a disadvantage due to the lack of sufficient information about their surrounding environment. With recent advances in inclusive technology it is possible to extend the support given to people with visual disabilities in terms of their mobility. In this context we propose and describe the Blavigator project, whose global objective is to assist visually impaired people in their navigation in indoor and outdoor environments. This paper focuses mainly on the Computer Vision module of the Blavigator prototype. We propose an object collision detection algorithm based on disparity images. The proposed algorithm uses a 2D Ensemble Empirical Mode Decomposition image optimization algorithm and a two-layer disparity image segmentation to detect nearby objects. (C) 2012 The Authors. Published by Elsevier B.V. Selection and/or peer-review under responsibility of the Scientific Programme Committee of the 4th International Conference on Software Development for Enhancing Accessibility and Fighting Info-exclusion (DSAI 2012)
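The core idea of banding a disparity map into layers can be sketched as follows. This is a minimal illustration only: the thresholds, band count, and decision rule (`near_thresh`, `mid_thresh`, `min_pixels`) are assumptions for the sketch, not values from the paper, and the paper's EEMD-based image optimization step is omitted.

```python
import numpy as np

def detect_nearby_obstacle(disparity, near_thresh=64, mid_thresh=32, min_pixels=200):
    """Two-layer segmentation of a disparity map (larger disparity = closer).

    Returns 'near', 'mid', or 'clear' depending on how many pixels fall
    into each disparity band. Thresholds here are illustrative only.
    """
    near_mask = disparity >= near_thresh
    mid_mask = (disparity >= mid_thresh) & ~near_mask
    if near_mask.sum() >= min_pixels:
        return "near"
    if mid_mask.sum() >= min_pixels:
        return "mid"
    return "clear"

# Synthetic disparity map: a close object in the centre of the view.
disp = np.zeros((120, 160), dtype=np.uint8)
disp[40:80, 60:100] = 80             # 1600 pixels in the "near" band
print(detect_nearby_obstacle(disp))  # -> near
```

In a real pipeline the band masks would also be segmented into connected components, so that the user can be told roughly where the obstacle lies, not only that one exists.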
2012
Authors
Xavier, J; Sousa, AMR; Morais, JJL; Filipe, VMJ; Vaz, M;
Publication
OPTICAL ENGINEERING
Abstract
A digital image correlation (DIC) algorithm for displacement measurements combining cross-correlation and a differential technique was validated through a set of experimental tests. These tests consisted of in-plane rigid-body translation and rotation tests, a tensile mechanical test, and a mode I fracture test. The fracture mechanical test, in particular, was intended to assess the accuracy of the method when dealing with discontinuous displacement fields, for which subset-based image correlation methods usually give unreliable results. The proposed algorithm was systematically compared with the Aramis (R) DIC-2D commercial code by processing the same set of images. When processing images from rigid-body and tensile tests (associated with continuous displacement fields), the two methods provided equivalent results. When processing images from the fracture mechanical test, however, the proposed method obtained a better qualitative description of the discontinuous displacements. Moreover, the proposed method gave a more reliable estimation of both crack length and crack opening displacement of the fractured specimen. (C) 2012 Society of Photo-Optical Instrumentation Engineers (SPIE). [DOI: 10.1117/1.OE.51.4.043602]
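The cross-correlation half of a subset-based DIC method can be sketched as below: a square subset around a point in the reference image is matched against candidate positions in the deformed image by maximising zero-normalised cross-correlation (ZNCC). This is a toy integer-pixel sketch, not the paper's algorithm; the differential step that refines the match to sub-pixel accuracy, and everything needed near a crack, are omitted.

```python
import numpy as np

def ncc_displacement(ref, deformed, y, x, subset=11, search=5):
    """Integer-pixel displacement of a square subset centred at (y, x),
    found by maximising ZNCC over a small search window."""
    h = subset // 2
    tpl = ref[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    tpl -= tpl.mean()
    best, best_uv = -np.inf, (0, 0)
    for dv in range(-search, search + 1):
        for du in range(-search, search + 1):
            win = deformed[y + dv - h:y + dv + h + 1,
                           x + du - h:x + du + h + 1].astype(float)
            win -= win.mean()
            denom = np.sqrt((tpl ** 2).sum() * (win ** 2).sum())
            if denom == 0:
                continue
            score = (tpl * win).sum() / denom
            if score > best:
                best, best_uv = score, (dv, du)
    return best_uv  # (v, u) displacement in pixels

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
deformed = np.roll(ref, shift=(2, 3), axis=(0, 1))  # rigid translation
print(ncc_displacement(ref, deformed, 30, 30))       # -> (2, 3)
```

ZNCC is preferred over plain correlation because subtracting the subset means makes the match insensitive to uniform brightness changes between the two images.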
2012
Authors
Filipe, V; Fernandes, F; Fernandes, H; Sousa, A; Paredes, H; Barroso, J;
Publication
PROCEEDINGS OF THE 4TH INTERNATIONAL CONFERENCE ON SOFTWARE DEVELOPMENT FOR ENHANCING ACCESSIBILITY AND FIGHTING INFO-EXCLUSION (DSAI 2012)
Abstract
This paper presents a system which extends the use of the traditional white cane by the blind for navigation purposes in indoor environments. Depth data of the scene in front of the user are acquired using the Microsoft Kinect sensor and then mapped into a pattern representation. The proposed system feeds this representation to a neural network to extract relevant features from the scene, enabling the detection of possible obstacles along the way. The results show that the neural network is able to correctly classify the type of pattern presented as input.
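One plausible way to map a depth frame into a pattern suitable for a neural network is to reduce it to a coarse occupancy grid. The sketch below is an assumption for illustration: the 3x3 grid, the 1200 mm threshold, and the median statistic are all invented here, and the paper's actual pattern encoding may differ.

```python
import numpy as np

def depth_to_pattern(depth, grid=(3, 3), obstacle_mm=1200):
    """Reduce a Kinect-style depth frame (millimetres) to a binary
    occupancy pattern: each grid cell is 1 if its median valid depth is
    closer than obstacle_mm. The flattened vector would be the input
    presented to the neural-network classifier."""
    pattern = []
    for row_band in np.array_split(depth, grid[0], axis=0):
        for cell in np.array_split(row_band, grid[1], axis=1):
            valid = cell[cell > 0]       # 0 = no reading on the Kinect
            close = valid.size and np.median(valid) < obstacle_mm
            pattern.append(1 if close else 0)
    return np.array(pattern)

frame = np.full((480, 640), 3000, dtype=np.uint16)  # mostly far wall
frame[160:480, 214:427] = 800                       # obstacle ahead, centre
print(depth_to_pattern(frame))  # -> [0 0 0 0 1 0 0 1 0]
```

A fixed-size pattern like this keeps the network input dimension constant regardless of image resolution, which is what makes a plain feed-forward classifier applicable.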
2012
Authors
Costa, MI; Barroso, J; Soares, S;
Publication
COMPUTER APPLICATIONS IN ENGINEERING EDUCATION
Abstract
This article presents an educational tool to be used in signal processing interpolation-related subjects. The aim is to contribute to the better consolidation of acquired theoretical knowledge, allowing students to test signal reconstruction algorithms, visualize the results these algorithms produce, and observe how several parameters affect their convergence and performance. (c) 2009 Wiley Periodicals, Inc. Comput Appl Eng Educ 20: 356-363, 2012
2012
Authors
Costa, P; Barroso, J; Fernandes, H; Hadjileontiadis, LJ;
Publication
EURASIP JOURNAL ON ADVANCES IN SIGNAL PROCESSING
Abstract
Empirical mode decomposition (EMD) is a fully unsupervised and data-driven approach to the analysis of nonlinear and non-stationary signals. A new approach to image analysis is proposed, namely PHEEMD, which uses Peano-Hilbert space-filling curves to transform 2D data (an image) into 1D data, followed by ensemble EMD (EEMD) analysis, i.e., a more robust realization of EMD based on white noise excitation. Test results have shown that PHEEMD exhibits a substantially reduced computational cost compared to other 2D-EMD approaches while preserving the information lying in the EMD domain; hence, new perspectives for its use in low computational power devices, such as portable applications, are feasible.
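The Hilbert-curve front end of such a scheme can be sketched with the classic iterative distance-to-coordinate mapping. This shows only the 2D-to-1D flattening step; the subsequent EEMD analysis (sifting with white-noise ensembles) is not reproduced here, and the function names are this sketch's own.

```python
def hilbert_d2xy(order, d):
    """Map distance d along a Hilbert curve to (x, y) in a 2**order grid
    (standard iterative quadrant-rotation algorithm)."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:              # rotate the quadrant when needed
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def hilbert_flatten(img):
    """Flatten a square 2**n x 2**n image to 1D along the Hilbert curve.
    Unlike row-major scanning, consecutive 1D samples are always spatial
    neighbours in 2D, which preserves locality for the 1D EEMD stage."""
    n = len(img)
    order = n.bit_length() - 1
    return [img[y][x] for x, y in (hilbert_d2xy(order, d) for d in range(n * n))]

img = [[0, 1], [3, 2]]       # 2x2 toy "image"
print(hilbert_flatten(img))  # order-1 curve visits (0,0),(0,1),(1,1),(1,0) -> [0, 3, 2, 1]
```

The locality property is the point of the construction: a row-major scan jumps across the whole image width at every row boundary, which would inject artificial discontinuities into the 1D signal handed to EEMD.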
2012
Authors
Almeida, R; Oliveira, P; Braga, L; Barroso, J;
Publication
Proceedings - IEEE 6th International Conference on Semantic Computing, ICSC 2012
Abstract
The emergence of new business models, namely the establishment of partnerships between organizations, and the possibility for companies to enrich their information with data already available on the web, especially the semantic web, have drawn attention to problems existing in databases, particularly those related to data quality. Poor data can result in a loss of competitiveness for the organizations holding them, and may even lead to their disappearance, since many of their decision-making processes are based on these data. For this reason, data cleaning is essential. Current approaches to solving these problems are closely tied to database schemas and specific domains. For data cleaning to be usable across different repositories, computer systems must be able to understand the data, i.e., an associated semantics is needed. The solution presented in this paper includes the use of ontologies: (i) for the specification of data cleaning operations and, (ii) as a way of solving the semantic heterogeneity problems of data stored in different sources. With data cleaning operations defined at a conceptual level, and with mappings in place between domain ontologies and an ontology derived from a database, the operations may be instantiated and proposed to the expert/specialist to be executed over that database, thus enabling their interoperability. © 2012 IEEE.