Publications

Publications by Andry Maykol Pinto

2017

Visual motion perception for mobile robots through dense optical flow fields

Authors
Pinto, AM; Costa, PG; Correia, MV; Matos, AC; Moreira, AP;

Publication
ROBOTICS AND AUTONOMOUS SYSTEMS

Abstract
Recent advances in visual motion detection and interpretation have made possible the rise of new robotic systems for autonomous and active surveillance. In this line of research, the current work discusses motion perception by proposing a novel technique that analyzes dense flow fields and distinguishes several regions with distinct motion models. The method, called Wise Optical Flow Clustering (WOFC), extracts the moving objects by performing two consecutive operations: evaluating and resetting. Motion properties of the flow field are retrieved and described in the evaluation phase, which provides high-level information about the spatial segmentation of the flow field. During the resetting operation, these properties are combined and used to feed a guided segmentation approach. The WOFC requires information about the number of motion models and, therefore, this paper introduces a model selection method based on a Bayesian approach that balances the model's fitness and complexity. It combines the correlation of a histogram-based analysis with the decay ratio of the normalized entropy criterion. This approach interprets the flow field and gives an estimate of the number of moving objects. The experiments conducted in a realistic environment have proved that the WOFC presents several advantages that meet the requirements of common robotic and surveillance applications: it is computationally efficient and provides pixel-wise segmentation, in contrast to other state-of-the-art methods.
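The core idea of segmenting a dense flow field into regions with distinct motion models can be illustrated, in a much simplified form, by clustering per-pixel flow vectors with a plain k-means loop. The function name and the k-means formulation below are illustrative assumptions, not the paper's WOFC, which couples richer motion descriptors with a guided evaluate/reset segmentation and a Bayesian model selection.

```python
import numpy as np

def cluster_flow_field(flow, k, iters=20):
    """Cluster a dense optical-flow field (H x W x 2) into k motion models.

    Toy k-means over per-pixel flow vectors; returns a label map that
    assigns each pixel to one motion model. Illustrative only.
    """
    h, w, _ = flow.shape
    vecs = flow.reshape(-1, 2).astype(float)
    # Deterministic init: the first k distinct flow vectors.
    centers = np.unique(vecs, axis=0)[:k].copy()
    for _ in range(iters):
        # Distance of every flow vector to every cluster center.
        dists = np.linalg.norm(vecs[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(len(centers)):
            if np.any(labels == j):
                centers[j] = vecs[labels == j].mean(axis=0)
    return labels.reshape(h, w)
```

For a synthetic field whose left half moves right and whose right half moves left, two clean clusters emerge, matching the two motion models.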

2016

WirelessSyncroVision: Wireless synchronization for industrial stereoscopic systems

Authors
Pinto, AM; Moreira, AP; Costa, PG;

Publication
INTERNATIONAL JOURNAL OF ADVANCED MANUFACTURING TECHNOLOGY

Abstract
The research proposes a novel technological solution for marker-based human motion capture called WirelessSyncroVision (WSV). The WSV is formed by two main modules: the visual node (WSV-V), which is based on a stereoscopic vision system, and the marker node (WSV-M), which is constituted by a 6-DOF active marker. The solution synchronizes the acquisition of images in remote multi-camera setups with the ON period of the active marker. This increases the robustness of the stereoscopic system to illumination changes, which is extremely relevant for programming industrial robotic arms using a human demonstrator, i.e., programming by demonstration (PbD). In addition, the research presents a robust method named Adaptive and Robust Synchronization (ARS), designed for the temporal alignment of remote devices over a wireless network. The algorithm models the phase difference as a function of time, measuring the parameters that must be known to predict the synchronization instant between the active marker and the remote cameras. Results demonstrate that the ARS strikes a balance between real-time capability and the estimation performance of the phase difference. Therefore, this research proposes an elegant solution to synchronize image acquisition systems in real time that is easy to implement with low operational costs; however, the major advantage of the WSV is its high level of flexibility, since it can be extended to other applications besides PbD, for instance, motion capture, motion analysis, and remote sensing systems.
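The idea of modelling the phase difference as a function of time can be sketched as a least-squares fit of measured clock offsets against time, which then predicts the offset at a future instant. The linear model and all names below are illustrative assumptions; the paper's ARS method involves more than this fit.

```python
import numpy as np

def fit_phase_drift(times, offsets):
    """Least-squares fit of offset(t) = phi0 + drift * t.

    times, offsets: 1-D arrays of measurement instants and the measured
    phase difference between two remote clocks at those instants.
    """
    A = np.vstack([np.ones_like(times), times]).T
    (phi0, drift), *_ = np.linalg.lstsq(A, offsets, rcond=None)
    return phi0, drift

def predict_offset(phi0, drift, t):
    """Predicted phase difference at a future instant t."""
    return phi0 + drift * t
```

Once the drift is known, the synchronization instant can be scheduled so that image acquisition coincides with the marker's ON period.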

2016

A Mosaicking Approach for Visual Mapping of Large-Scale Environments

Authors
Pinto, AM; Pinto, H; Matos, AC;

Publication
2016 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC 2016)

Abstract
Nowadays, the technological and scientific research related to underwater perception is focused on developing more cost-effective tools to support activities related to the inspection, search and rescue of wreckages and site exploration: devices with higher autonomy, endurance and capabilities. Currently, specific tasks are already carried out by remotely operated vehicles (ROVs) and autonomous underwater vehicles (AUVs) that can be equipped with multiple sensors, including optical cameras, which are extremely valuable for perceiving marine environments; however, the current perceptual capability of these vehicles is still limited. In this context, the paper presents a novel mosaicking method that composes the sea-floor from a set of visual observations. This method is called RObust and Large-scale MOSaicking (ROLAMOS) and it enables an efficient frame-to-frame motion estimation with outlier removal and consistency checking, a robust registration of monocular images and, finally, a mosaic management methodology that makes it possible to map large visual areas at high resolution. The experiments conducted with realistic images have proven that ROLAMOS is suitable for mapping large-scale sea-floor scenarios because the visual information is registered while managing the computational resources available onboard, which makes it appropriate for applications that do not have specialized computers. Furthermore, this is a major advantage for automatic mosaic creation in robotic applications that require the location of objects or even structures with high detail and precision.
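Frame-to-frame motion estimation with outlier removal can be sketched with a RANSAC loop over matched keypoints, assuming, for simplicity, a pure-translation motion model. This is a deliberate simplification and a hypothetical stand-in; ROLAMOS performs full registration of monocular images, not just translation.

```python
import numpy as np

def ransac_translation(src, dst, iters=100, tol=1.0, seed=0):
    """Estimate a 2-D translation between matched keypoints with RANSAC.

    src, dst: (N, 2) arrays of matched coordinates in consecutive frames.
    Returns the translation and a boolean inlier mask.
    """
    rng = np.random.default_rng(seed)
    best_t, best_in = np.zeros(2), np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                        # hypothesis from one match
        resid = np.linalg.norm(dst - (src + t), axis=1)
        inliers = resid < tol
        if inliers.sum() > best_in.sum():
            best_t, best_in = t, inliers
    # Refit on the consensus set for a less noisy estimate.
    best_t = (dst[best_in] - src[best_in]).mean(axis=0)
    return best_t, best_in
```

Accepted inlier matches would then feed the registration and mosaic management stages, while rejected matches are discarded as outliers.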

2018

Comparative Study of Visual Odometry and SLAM Techniques

Authors
Gaspar, AR; Nunes, A; Pinto, A; Matos, A;

Publication
Advances in Intelligent Systems and Computing

Abstract
The use of visual odometry and SLAM methods in autonomous vehicles has been growing. Optical sensors provide valuable information about the scene that enhances the navigation of autonomous vehicles. Although several visual techniques are already available in the literature, their performance can be significantly affected by the scene captured by the optical sensor. In this context, this paper presents a comparative analysis of three monocular visual odometry methods and three stereo SLAM techniques. The advantages, particularities and performance of each technique are discussed to provide information that is relevant for the development of new research and novel robotic applications.

2018

Urban@CRAS dataset: Benchmarking of visual odometry and SLAM techniques

Authors
Gaspar, AR; Nunes, A; Pinto, AM; Matos, A;

Publication
ROBOTICS AND AUTONOMOUS SYSTEMS

Abstract
Public datasets are becoming extremely important for the scientific and industrial community to accelerate the development of new approaches and to guarantee identical testing conditions for comparing methods proposed by different researchers. This research presents the Urban@CRAS dataset, which captures several scenarios of one iconic region of Porto, Portugal. These scenarios present a multiplicity of conditions and urban situations, including vehicle-to-vehicle and vehicle-to-human interactions, cross-sides, turn-arounds, roundabouts and different traffic conditions. Data from these scenarios are timestamped, calibrated and acquired at 10 to 200 Hz through a set of heterogeneous sensors installed on the roof of a car. These sensors include a 3D LIDAR, high-resolution color cameras, a high-precision IMU and a GPS navigation system. In addition, positioning information obtained from a real-time kinematic satellite navigation system (with 0.05 m error) is included as ground truth. Moreover, a benchmarking process for some typical visual odometry and SLAM methods is also included in this research, where qualitative and quantitative performance indicators are used to discuss the advantages and particularities of each implementation. Thus, this research fosters new advances in the perception and navigation approaches of autonomous robots (and autonomous driving).
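One common quantitative indicator in such benchmarks is the absolute trajectory error of an estimated trajectory against the RTK ground truth. A minimal sketch, assuming the trajectories are already time-aligned and expressed in the same frame (full evaluations usually apply a rigid alignment step first):

```python
import numpy as np

def ate_rmse(estimated, ground_truth):
    """Root-mean-square absolute trajectory error.

    estimated, ground_truth: (N, 2) or (N, 3) arrays of corresponding
    positions. Returns the RMSE of per-pose position errors.
    """
    err = np.linalg.norm(estimated - ground_truth, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))
```

A trajectory offset from the ground truth by a constant 0.1 m, for example, yields an ATE RMSE of 0.1 m.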

2018

A Safety Monitoring Model for a Faulty Mobile Robot

Authors
Leite, A; Pinto, A; Matos, A;

Publication
ROBOTICS

Abstract
The continued development of mobile robots (MR) must be accompanied by an increase in robotic safety measures. Not only must MR be capable of detecting and diagnosing faults, but they should also be capable of understanding when the dangers of a mission, to themselves and the surrounding environment, warrant the abandonment of their endeavors. An analysis of fault detection and diagnosis techniques helps shed light on the challenges of the robotics field, while also showing a lack of research in autonomous decision-making tools. This paper proposes a new skill-based architecture for mobile robots, together with a novel risk assessment and decision-making model to overcome the difficulties currently felt in autonomous robot design.
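A toy version of a risk-based abort decision can be sketched as follows. The fault representation, score combination and threshold are all illustrative assumptions, not the paper's model, which rests on a considerably richer skill-based architecture.

```python
def mission_decision(faults, abort_threshold=0.6):
    """Toy risk assessment: each fault is (probability, severity) in [0, 1].

    Combines the faults into a single risk score and decides whether the
    robot should continue or abandon the mission. Purely illustrative.
    """
    risk = 1.0
    for p, s in faults:
        risk *= 1.0 - p * s   # chance of avoiding each fault's impact
    risk = 1.0 - risk         # overall probability of mission damage
    return ("abort" if risk >= abort_threshold else "continue"), risk
```

With no detected faults the robot continues; a likely, severe fault pushes the score above the threshold and the mission is abandoned.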
