Publications

Publications by CTM

2016

Deep Learning and Data Labeling for Medical Applications - First International Workshop, LABELS 2016, and Second International Workshop, DLMIA 2016, Held in Conjunction with MICCAI 2016, Athens, Greece, October 21, 2016, Proceedings

Authors
Carneiro, G; Mateus, D; Peter, L; Bradley, A; Tavares, JMRS; Belagiannis, V; Papa, JP; Nascimento, JC; Loog, M; Lu, Z; Cardoso, JS; Cornebise, J;

Publication
LABELS/DLMIA@MICCAI

Abstract

2016

Visual-Inertial Based Autonomous Navigation

Authors
Martins, FD; Teixeira, LF; Nobrega, R;

Publication
ROBOT 2015: SECOND IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, VOL 2

Abstract
This paper presents an autonomous navigation and position estimation framework which enables an Unmanned Aerial Vehicle (UAV) to safely navigate in indoor environments. The system uses the on-board Inertial Measurement Unit (IMU) and the front camera of an AR.Drone platform, together with a laptop computer where all the data is processed. The system is composed of the following modules: navigation, door detection and position estimation. For navigation, the system relies on the detection of the vanishing point using the Hough transform for wall detection and avoidance. Door detection relies not only on the detection of the contours but also on the recesses of each door, using the latter as the main detector and the former as an additional validation for higher precision. For position estimation, the system relies on pre-coded information about the floor in which the drone is navigating, and on the velocity of the drone provided by its IMU. Several flight experiments show that the drone is able to safely navigate in corridors while detecting evident doors and estimating its position. The developed navigation and door detection methods are reliable and enable a UAV to fly without the need for human intervention.
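The vanishing-point step the abstract describes can be illustrated with a minimal sketch: given corridor lines already detected by a Hough transform in (rho, theta) form, pairwise line intersections cluster around the vanishing point, and a robust average of them estimates it. This is an illustrative reconstruction, not the paper's implementation; the function names are assumptions.

```python
import numpy as np

def intersect(l1, l2):
    """Intersect two lines in Hough (rho, theta) form:
    x*cos(theta) + y*sin(theta) = rho. Returns None if near-parallel."""
    (r1, t1), (r2, t2) = l1, l2
    A = np.array([[np.cos(t1), np.sin(t1)],
                  [np.cos(t2), np.sin(t2)]])
    if abs(np.linalg.det(A)) < 1e-9:
        return None
    return np.linalg.solve(A, np.array([r1, r2]))

def vanishing_point(lines):
    """Estimate the vanishing point as the median of all pairwise
    intersections of the detected lines (robust to a few outliers)."""
    pts = []
    for i, l1 in enumerate(lines):
        for l2 in lines[i + 1:]:
            p = intersect(l1, l2)
            if p is not None:
                pts.append(p)
    return np.median(np.array(pts), axis=0)

# Synthetic corridor lines, all passing through image point (100, 50):
# for a point (x, y), rho = x*cos(theta) + y*sin(theta).
thetas = [0.3, 0.8, 1.2]
lines = [(100 * np.cos(t) + 50 * np.sin(t), t) for t in thetas]
print(vanishing_point(lines))  # ≈ [100. 50.]
```

In a real pipeline the (rho, theta) pairs would come from an edge detector followed by `cv2.HoughLines`; the median makes the estimate tolerant to spurious lines that do not converge at the corridor's end.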

2016

User interface design guidelines for smartphone applications for people with Parkinson's disease

Authors
Nunes, F; Silva, PA; Cevada, J; Barros, AC; Teixeira, L;

Publication
UNIVERSAL ACCESS IN THE INFORMATION SOCIETY

Abstract
Parkinson's disease (PD) is often responsible for difficulties in interacting with smartphones; however, research has not yet addressed these issues and how they challenge people with Parkinson's (PwP). This paper investigates the symptoms and characteristics of PD that may influence interaction with smartphones, and contributes design knowledge in this direction. The research was based on a literature review of PD symptoms, eight semi-structured interviews with healthcare professionals and observations of PwP, and usability experiments with 39 PwP. Contributions include a list of PD symptoms that may influence interaction with smartphones, a set of experimental results that evaluated the performance of four gestures (tap, swipe, multiple-tap, and drag), and 12 user interface design guidelines for creating smartphone user interfaces for PwP. Findings contribute to the work of researchers and practitioners alike engaged in designing user interfaces for PwP or in the broader area of inclusive design.

2016

Bio-inspired Boosting for Moving Objects Segmentation

Authors
Martins, I; Carvalho, P; Corte Real, L; Alba Castro, JL;

Publication
IMAGE ANALYSIS AND RECOGNITION (ICIAR 2016)

Abstract
Developing robust and universal methods for unsupervised segmentation of moving objects in video sequences has proved to be a hard and challenging task. State-of-the-art methods show good performance in a wide range of situations, but systematically fail when facing more challenging scenarios. Lately, a number of image processing modules inspired by biological models of the human visual system have been explored in different areas of application. This paper proposes a bio-inspired boosting method to address the problem of unsupervised segmentation of moving objects in video, which shows the ability to overcome some of the limitations of widely used state-of-the-art methods. An exhaustive set of experiments was conducted, and a detailed analysis of the results using different metrics revealed that the boosting is most significant in challenging scenarios where state-of-the-art methods tend to fail.

2016

Cognition inspired format for the expression of computer vision metadata

Authors
Castro, H; Monteiro, J; Pereira, A; Silva, D; Coelho, G; Carvalho, P;

Publication
MULTIMEDIA TOOLS AND APPLICATIONS

Abstract
Over the last decade, noticeable progress has occurred in automated computer interpretation of visual information. Computers running artificial intelligence algorithms are increasingly capable of extracting perceptual and semantic information from images and registering it as metadata. There is also a growing body of manually produced image annotation data. All of this data is of great importance for scientific purposes as well as for commercial applications. Optimizing the usefulness of this manually or automatically produced information implies its precise and adequate expression at its different logical levels, making it easily accessible, manipulable and shareable. It also implies the development of associated manipulation tools. However, the expression and manipulation of computer vision results has received less attention than the actual extraction of such results, and has hence experienced smaller advances. Existing metadata tools are poorly structured in logical terms, as they intermix the declaration of visual detections with that of the observed entities, events and surrounding context. This poor structuring renders such tools rigid, limited and cumbersome to use. Moreover, they are unprepared to deal with more advanced situations, such as the coherent expression of the information extracted from, or annotated onto, multi-view video resources. The work presented here comprises the specification of an advanced XML-based syntax for the expression and processing of computer vision relevant metadata. This proposal takes inspiration from the natural cognition process for the adequate expression of the information, with a particular focus on scenarios with varying numbers of sensory devices, notably multi-view video.

2016

Video Based Group Tracking and Management

Authors
Pereira, A; Familiar, A; Moreira, B; Terroso, T; Carvalho, P; Corte Real, L;

Publication
IMAGE ANALYSIS AND RECOGNITION (ICIAR 2016)

Abstract
Tracking objects in video is a very challenging research topic, particularly when people in groups are tracked, with partial and full occlusions and group dynamics being common difficulties. Hence, it is necessary to deal with group tracking, formation and separation, while assuring the overall consistency of the individuals. This paper proposes enhancements to a group management and tracking algorithm that receives information about the persons in the scene, detects the existing groups and keeps track of the persons that belong to them. Since input information for group management algorithms is typically provided by a tracking algorithm and is affected by noise, mechanisms for handling such noisy input tracking information were also successfully included. Experiments demonstrated that the described algorithm outperformed state-of-the-art approaches.
