Publications

Publications by Andry Maykol Pinto

2015

Streaming Image Sequences for Vision-Based Mobile Robots

Authors
Pinto, AM; Moreira, AP; Costa, PG;

Publication
CONTROLO'2014 - PROCEEDINGS OF THE 11TH PORTUGUESE CONFERENCE ON AUTOMATIC CONTROL

Abstract
Vision-based mobile robots have severe limitations related to the computational capabilities required to process their algorithms. Vision algorithms processed onboard, without resorting to specialized computing devices, cannot meet the real-time constraints imposed by such systems. This paper describes a scheme for streaming image sequences for use by artificial vision techniques. A mobile robot with this architecture can stream image sequences over the network infrastructure to a device with higher computing power. The robot therefore maintains real-time performance with reduced energy consumption, which increases its autonomy. Experiments conducted without specialized computers proved that the proposed architecture can stream image sequences with a resolution of 640x480 at 25 frames per second.
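As an illustration of the kind of transport involved, the sketch below packs a raw 640x480 grayscale frame into a length-prefixed byte payload of the sort a robot could push over a TCP socket to a more capable machine. This is not the paper's implementation; the function names and framing format are hypothetical.

```python
import struct
import numpy as np

def pack_frame(frame):
    """Serialize one grayscale frame: a fixed header carrying
    (height, width) followed by the raw pixel bytes."""
    h, w = frame.shape
    return struct.pack("!II", h, w) + frame.astype(np.uint8).tobytes()

def unpack_frame(payload):
    """Inverse of pack_frame, run on the receiving (high-power) side."""
    h, w = struct.unpack("!II", payload[:8])
    return np.frombuffer(payload[8:], dtype=np.uint8).reshape(h, w)

# 640x480 grayscale at 25 fps is ~7.7 MB/s (~61 Mbit/s) uncompressed,
# which fits within a common 100 Mbit/s network link.
frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
restored = unpack_frame(pack_frame(frame))
```

In practice the payload would be written to a socket and compressed (e.g., JPEG) to lower the bandwidth further; the round trip above only shows that the framing is lossless.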

2014

A Flow-based Motion Perception Technique for an Autonomous Robot System

Authors
Pinto, AM; Moreira, AP; Correia, MV; Costa, PG;

Publication
JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS

Abstract
Visual motion perception from a moving observer is the case most often encountered in real-life situations. It is a complex and challenging problem, yet one that can enable new applications. This article presents an innovative autonomous robotic system designed for active surveillance, together with a dense optical flow technique. Several optical flow techniques have been proposed for motion perception; however, most are too computationally demanding for autonomous mobile systems. The proposed HybridTree method identifies the intrinsic nature of the motion by performing two consecutive operations: expectation and sensing. During the expectation phase, descriptive properties of the image are retrieved using a tree-based scheme. In the sensing operation, the properties of image regions are used by a hybrid and hierarchical optical flow structure to estimate the flow field. The experiments prove that the proposed method extracts reliable visual motion information in a short period of time and is well suited to applications without specialized computing devices. The HybridTree therefore differs from other techniques in that it introduces a new perspective on motion perception: high-level information about the image sequence is integrated into the estimation of the optical flow. In addition, it meets most robotic and surveillance demands, and computing the resulting flow field is less demanding than with other state-of-the-art methods.
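The HybridTree method itself is not reproduced here, but the local least-squares estimation underlying many dense optical flow techniques can be sketched as a minimal Lucas-Kanade solver. This is illustrative code under assumed parameters, not the authors' algorithm.

```python
import numpy as np

def lucas_kanade_flow(prev, curr, win=5):
    """Dense Lucas-Kanade: at each pixel, solve the 2x2 least-squares
    system built from spatial and temporal gradients in a local window."""
    prev, curr = prev.astype(float), curr.astype(float)
    Iy, Ix = np.gradient(prev)                 # spatial gradients
    It = curr - prev                           # temporal gradient
    h, w = prev.shape
    flow = np.zeros((h, w, 2))                 # (u, v) per pixel
    for y in range(win, h - win):
        for x in range(win, w - win):
            ix = Ix[y - win:y + win + 1, x - win:x + win + 1].ravel()
            iy = Iy[y - win:y + win + 1, x - win:x + win + 1].ravel()
            it = It[y - win:y + win + 1, x - win:x + win + 1].ravel()
            A = np.stack([ix, iy], axis=1)
            ATA = A.T @ A
            if np.linalg.cond(ATA) < 1e4:      # skip flat, ill-conditioned regions
                flow[y, x] = -np.linalg.solve(ATA, A.T @ it)
    return flow

yy, xx = np.mgrid[:40, :40]
prev = 100 * np.sin(0.3 * xx) * np.cos(0.2 * yy)   # textured synthetic scene
curr = np.roll(prev, 1, axis=1)                    # whole scene moves 1 px right
flow = lucas_kanade_flow(prev, curr)
mask = np.any(flow != 0, axis=2)
```

On the synthetic pair above the recovered flow field should concentrate around (1, 0), the known horizontal shift.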

2014

Enhancing dynamic videos for surveillance and robotic applications: The robust bilateral and temporal filter

Authors
Pinto, AM; Costa, PG; Correia, MV; Moreira, AP;

Publication
SIGNAL PROCESSING-IMAGE COMMUNICATION

Abstract
Over the last few decades, surveillance applications have been an extremely useful tool for preventing dangerous situations and identifying abnormal activities. However, the majority of surveillance videos are subject to various kinds of noise that corrupt structured patterns and fine edges. This makes image processing tasks such as object detection, motion segmentation, tracking, and the identification and recognition of humans even more difficult. This paper proposes a novel filtering technique named robust bilateral and temporal (RBLT), which resorts to the spatial and temporal evolution of sequences to conduct the filtering process while preserving relevant image information. A pixel value is estimated using a robust combination of the spatial characteristics of the pixel's neighborhood and its own temporal evolution. Thus, robust statistics concepts and the temporal correlation between consecutive images are incorporated together, resulting in a reliable and configurable filter formulation that makes it possible to reconstruct highly dynamic and degraded image sequences. The filtering is evaluated using qualitative judgments and several assessment metrics under different Gaussian and salt-and-pepper noise conditions. Extensive experiments considering videos obtained by stationary and non-stationary cameras prove that the proposed technique achieves good perceptual quality when filtering sequences corrupted by a strong noise component.
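The RBLT formulation is not given in the abstract, but the underlying idea of combining a spatial bilateral kernel with the pixel's temporal neighborhood can be sketched as below. This is a toy stand-in, not the paper's filter; the parameter names and the choice of Gaussian weights are assumptions.

```python
import numpy as np

def bilateral_temporal(frames, t, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Each output pixel is a weight-normalised average over a spatial
    window taken from frame t AND from the previous frame: spatial
    distance and intensity difference both lower a neighbour's weight."""
    prev_t = max(t - 1, 0)
    h, w = frames[t].shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            y0, y1 = max(y - radius, 0), min(y + radius + 1, h)
            x0, x1 = max(x - radius, 0), min(x + radius + 1, w)
            centre = float(frames[t][y, x])
            acc = norm = 0.0
            for k in (prev_t, t):                  # temporal neighbourhood
                patch = frames[k][y0:y1, x0:x1].astype(float)
                yy, xx = np.mgrid[y0:y1, x0:x1]
                w_s = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
                w_r = np.exp(-(patch - centre) ** 2 / (2 * sigma_r ** 2))
                acc += (w_s * w_r * patch).sum()
                norm += (w_s * w_r).sum()
            out[y, x] = acc / norm
    return out

rng = np.random.default_rng(0)
clean = np.full((20, 20), 100.0)
frames = [clean + rng.normal(0, 10, clean.shape) for _ in range(3)]
filtered = bilateral_temporal(frames, t=1)
```

On a constant scene with additive Gaussian noise, the spatio-temporal averaging should pull the filtered frame visibly closer to the clean signal than the noisy input; the robust weighting of RBLT goes further by down-weighting outliers rather than just dissimilar intensities.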

2013

Object recognition using laser range finder and machine learning techniques

Authors
Pinto, AM; Rocha, LF; Paulo Moreira, AP;

Publication
ROBOTICS AND COMPUTER-INTEGRATED MANUFACTURING

Abstract
In recent years, computer vision has been widely used in industrial environments, allowing robots to perform important tasks such as quality control, inspection, and recognition. Vision systems are typically used to determine the position and orientation of objects in the workstation, enabling them to be transported and assembled by a robotic cell (e.g., an industrial manipulator). These systems commonly resort to CCD (Charge-Coupled Device) cameras, either fixed over a particular work area or attached directly to the robotic arm (eye-in-hand vision system). Although this is a valid approach, the performance of such vision systems is directly influenced by the lighting of the industrial environment. Taking all this into consideration, a new approach is proposed for eye-in-hand systems, in which the camera is replaced by a 2D Laser Range Finder (LRF). The LRF is attached to a robotic manipulator that executes a pre-defined path to produce grayscale images of the workstation. With this technique, interference from environment lighting is minimized, resulting in a more reliable and robust computer vision system. After the grayscale image is created, this work focuses on the recognition and classification of different objects using inherent features (based on the invariant moments of Hu) with the most well-known machine learning models: k-Nearest Neighbors (kNN), Neural Networks (NNs), and Support Vector Machines (SVMs). To achieve good performance with each classification model, a wrapper method is used to select a good subset of features, together with an assessment technique, K-fold cross-validation, to adjust the parameters of the classifiers. The performance of the models is also compared, achieving generalized accuracies of 83.5% for kNN, 95.5% for the NN, and 98.9% for the SVM.
These high performances are related to the feature selection algorithm, based on the simulated annealing heuristic, and to the model assessment (k-fold cross-validation). This makes it possible to identify the most important features in the recognition process and to adjust the best parameters for the machine learning models, increasing the classification rate for the work objects present in the robot's environment.
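The feature pipeline described above can be illustrated with a minimal sketch: the first two Hu invariant moments as features and a 1-NN classifier. This is illustrative only; the paper's wrapper-based feature selection, cross-validation, and SVM/NN models are not reproduced.

```python
import numpy as np

def hu_moments(img):
    """First two Hu invariant moments of a binary image
    (invariant to translation, scale and rotation)."""
    img = img.astype(float)
    m00 = img.sum()
    yy, xx = np.mgrid[:img.shape[0], :img.shape[1]]
    cx = (xx * img).sum() / m00
    cy = (yy * img).sum() / m00
    def eta(p, q):                     # normalised central moment
        mu = (((xx - cx) ** p) * ((yy - cy) ** q) * img).sum()
        return mu / m00 ** (1 + (p + q) / 2)
    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return np.array([h1, h2])

def knn_predict(train_feats, train_labels, feat, k=1):
    """Label of the majority among the k nearest training features."""
    d = np.linalg.norm(train_feats - feat, axis=1)
    nearest = np.asarray(train_labels)[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

sq = np.zeros((32, 32)); sq[5:15, 5:15] = 1          # 10x10 square
rect = np.zeros((32, 32)); rect[4:8, 2:26] = 1       # 4x24 rectangle
train = np.stack([hu_moments(sq), hu_moments(rect)])

query = np.zeros((32, 32)); query[18:26, 20:28] = 1  # translated, rescaled square
pred = knn_predict(train, ["square", "rectangle"], hu_moments(query))
```

Because the Hu moments are invariant to translation and scale, the displaced, smaller square still lands nearer the square's feature vector than the elongated rectangle's.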

2013

Revisiting Lucas-Kanade and Horn-Schunck

Authors
Pinto, AMG; Moreira, AP; Costa, PG; Correia, MV;

Publication
JCEI - Journal of Computer Engineering and Informatics

Abstract

2014

Unsupervised flow-based motion analysis for an autonomous moving system

Authors
Pinto, AM; Correia, MV; Paulo Moreira, AP; Costa, PG;

Publication
IMAGE AND VISION COMPUTING

Abstract
This article discusses motion analysis based on dense optical flow fields for a new generation of robotic moving systems with real-time constraints. It focuses on a surveillance scenario in which a specially designed autonomous mobile robot uses a monocular camera to perceive motion in the environment. Computational resources and processing time are two of the most critical aspects in robotics; therefore, two non-parametric techniques are proposed, namely the Hybrid Hierarchical Optical Flow Segmentation and the Hybrid Density-Based Optical Flow Segmentation. Both methods extract the moving objects by performing two consecutive operations: refining and collecting. During the refining phase, the flow field is decomposed into a set of clusters based on descriptive motion properties. These properties are used in the collecting stage by a hierarchical or density-based scheme to merge the clusters that represent different motion models. In addition, a model selection method is introduced: this novel method analyzes the flow field and estimates the number of distinct moving objects using a Bayesian formulation. The research evaluates the performance achieved by the methods in a realistic surveillance situation. The experiments conducted proved that the proposed methods extract reliable motion information in real time without using specialized computers. Moreover, the resulting segmentation is less computationally demanding than other recent methods, making them suitable for most robotic or surveillance applications.
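As a toy illustration of the collecting stage, the snippet below greedily groups flow vectors whose motion lies within a small radius of a seed vector. This is a much-simplified, hypothetical stand-in for the paper's density-based segmentation, not its actual algorithm.

```python
import numpy as np

def cluster_flow(flow, eps=0.5, min_pts=5):
    """Greedy grouping of flow vectors: pick an unlabelled seed, gather
    every unlabelled pixel whose (u, v) vector lies within eps of it,
    and accept the group as a cluster if it has at least min_pts members."""
    vecs = flow.reshape(-1, 2)
    labels = -np.ones(len(vecs), dtype=int)     # -1 = unassigned
    n_clusters = 0
    for i in range(len(vecs)):
        if labels[i] != -1:
            continue
        member = np.linalg.norm(vecs - vecs[i], axis=1) < eps
        member &= labels == -1
        if member.sum() >= min_pts:
            labels[member] = n_clusters
            n_clusters += 1
    return labels.reshape(flow.shape[:2]), n_clusters

flow = np.zeros((20, 20, 2))      # static background: motion model (0, 0)
flow[8:13, 8:13] = (3.0, 0.0)     # one object moving 3 px/frame to the right
labels, n = cluster_flow(flow)
```

On the synthetic field above the two motion models (static background and a rightward-moving patch) separate into two clusters; estimating that number automatically is what the paper's Bayesian model selection addresses.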
