
Publications by Carlos Miguel Costa

2016

2D Cloud Template Matching - A comparison between Iterative Closest Point and Perfect Match

Authors
Sobreira, H; Rocha, L; Costa, C; Lima, J; Costa, P; Paulo Moreira, AP;

Publication
2016 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC 2016)

Abstract
Self-localization of mobile robots in the environment is one of the most fundamental problems in the robotics field. It is a complex and challenging problem due to the high requirements of autonomous mobile vehicles, particularly with regard to algorithm accuracy, robustness and computational efficiency. In this paper we present a comparison of two of the most widely used map-matching algorithms, the Iterative Closest Point and the Perfect Match. This category of algorithms is normally applied in localization based on natural landmarks. They were compared using an extensive collection of metrics, such as accuracy, computational efficiency, convergence speed, maximum admissible initialization error and robustness to outliers in the robots' sensor data. The tests were performed in both simulated and real-world environments.
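
To make the map-matching idea above concrete, the following is a minimal, illustrative 2D point-to-point ICP sketch in Python (NumPy/SciPy); it is not the implementation evaluated in the paper, and the function name and parameters are placeholders.

```python
# Minimal 2D point-to-point ICP sketch (illustrative only, not the
# implementation compared in the paper).
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(source, target, iterations=30, tolerance=1e-6):
    """Align `source` (N x 2) to `target` (M x 2); returns R, t and the aligned points."""
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(2), np.zeros(2)
    prev_error = np.inf
    for _ in range(iterations):
        # 1. Data association: nearest neighbour in the target cloud.
        dists, idx = tree.query(src)
        matched = target[idx]
        # 2. Closed-form rigid transform (SVD of the cross-covariance).
        src_c, tgt_c = src.mean(axis=0), matched.mean(axis=0)
        H = (src - src_c).T @ (matched - tgt_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # avoid reflections
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = tgt_c - R @ src_c
        # 3. Apply and accumulate the transform.
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        error = dists.mean()
        if abs(prev_error - error) < tolerance:   # convergence check
            break
        prev_error = error
    return R_total, t_total, src
```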

2015

3 DoF/6 DoF Localization System for Low Computing Power Mobile Robot Platforms

Authors
Costa, CM; Sobreira, HM; Sousa, AJ; Veiga, G;

Publication
Cutting Edge Research in Technologies

Abstract

2016

Recognition of Banknotes in Multiple Perspectives Using Selective Feature Matching and Shape Analysis

Authors
Costa, CM; Veiga, G; Sousa, A;

Publication
2016 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC 2016)

Abstract
Reliable banknote recognition is critical for detecting counterfeit banknotes in ATMs and for helping visually impaired people. To solve this problem, a computer vision system was implemented that can recognize multiple banknotes in different perspective views and scales, even when they are within cluttered environments in which the lighting conditions may vary considerably. The system is also able to recognize banknotes that are partially visible, folded, wrinkled or even worn by usage. To accomplish this task, the system relies on computer vision algorithms, such as image preprocessing, feature detection, description and matching. To improve the confidence of the banknote recognition, the feature matching results are used to compute the contour of the banknotes using a homography, which is later validated using shape analysis algorithms. The system successfully recognized all Euro banknotes in 80 test images, even when there were several overlapping banknotes in the same test image.
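
As a rough illustration of the feature matching, homography estimation and shape validation steps mentioned in the abstract, the sketch below uses OpenCV; the detector (ORB), the thresholds and the simple convexity/area check are assumptions, not the paper's actual choices.

```python
# Illustrative feature-matching + homography + shape-check pipeline sketch.
import cv2
import numpy as np

def match_banknote(template_gray, scene_gray, min_inliers=15):
    orb = cv2.ORB_create(nfeatures=2000)
    kp_t, des_t = orb.detectAndCompute(template_gray, None)
    kp_s, des_s = orb.detectAndCompute(scene_gray, None)
    if des_t is None or des_s is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    # Lowe's ratio test to keep only distinctive matches.
    good = []
    for pair in matcher.knnMatch(des_t, des_s, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])
    if len(good) < min_inliers:
        return None
    src = np.float32([kp_t[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_s[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None or int(mask.sum()) < min_inliers:
        return None
    # Project the template outline into the scene and validate its shape:
    # a valid banknote contour should be a convex quadrilateral of reasonable
    # area (a crude stand-in for the paper's shape analysis step).
    h, w = template_gray.shape[:2]
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    contour = cv2.perspectiveTransform(corners, H)
    if not cv2.isContourConvex(np.int32(contour)) or cv2.contourArea(contour) < 1000:
        return None
    return contour  # banknote outline in scene coordinates
```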

2017

Evaluation of Stanford NER for extraction of assembly information from instruction manuals

Authors
Costa, CM; Veiga, G; Sousa, A; Nunes, S;

Publication
2017 IEEE International Conference on Autonomous Robot Systems and Competitions, ICARSC 2017, Coimbra, Portugal, April 26-28, 2017

Abstract
Teaching industrial robots by demonstration can significantly decrease the repurposing costs of assembly lines worldwide. To achieve this goal, the robot needs to detect and track each component with high accuracy. To speed up the initial object recognition phase, the learning system can gather information from assembly manuals in order to identify which parts and tools are required for assembling a new product (avoiding an exhaustive search in a large model database) and, if possible, also extract the assembly order and spatial relations between them. This paper presents a detailed analysis of the fine-tuning of the Stanford Named Entity Recognizer for this text tagging task. Starting from the recommended configuration, 91 tests were performed targeting the main features / parameters. Each test changed only a single parameter in relation to the recommended configuration, and its goal was to assess the impact of the new configuration on the precision, recall and F1 metrics. This analysis allowed the Stanford NER system to be fine-tuned, achieving a precision of 89.91%, recall of 83.51% and F1 of 84.69%. These results were obtained with our new manually annotated dataset containing text with assembly operations for alternators, gearboxes and engines, written in a language discourse that ranges from professional to informal. The dataset can also be used to evaluate other information extraction and computer vision systems, since most assembly operations have pictures and diagrams showing the necessary product parts, their assembly order and relative spatial disposition. © 2017 IEEE.
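
For reference, here is a minimal sketch of the entity-level precision, recall and F1 computation used to compare NER configurations such as the ones above; Stanford NER reports these metrics itself, so this standalone version is only illustrative, and the example entities are hypothetical.

```python
# Entity-level precision / recall / F1 for a single document (illustrative).
def prf(gold_entities, predicted_entities):
    """Each argument is a set of (start, end, label) tuples."""
    gold, pred = set(gold_entities), set(predicted_entities)
    tp = len(gold & pred)                              # exact-match true positives
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical example: two gold entities, one correct and one spurious prediction.
gold = {(0, 6, "TOOL"), (12, 22, "PART")}
pred = {(0, 6, "TOOL"), (30, 35, "PART")}
print(prf(gold, pred))   # (0.5, 0.5, 0.5)
```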

2017

Beam for the steel fabrication industry robotic systems

Authors
Rocha, LF; Tavares, P; Malaca, P; Costa, C; Silva, J; Veiga, G;

Publication
ISARC 2017 - Proceedings of the 34th International Symposium on Automation and Robotics in Construction

Abstract
In this paper, we present a comparison between the older DSTV file format and the newer version of the IFC standard, paying special attention to their impact on the robotization of welding and cutting processes in the steel structure fabrication industry. In the last decade, this industry has seen a significant increase in the demand for automation, imposed by a market focused on productivity enhancement through automation. Because of this paradigm change, the information structure and workflow provided by the DSTV format needed to be revised, namely the part related to the planning and management of steel fabrication processes. Therefore, with this work we highlight the importance of the increased digitalization of information that the newer version of the IFC standard provides, by showing how this information can be used to develop advanced robotic cells. More specifically, we focus on the automatic generation of robot welding and cutting trajectories, and on automatic part assembly planning during component fabrication. Despite these advantages, as this information is normally described with reference to a perfect CAD model of the metallic structure, the resultant robot trajectories will have some dimensional error when fitted to the real physical component. Hence, we also present some automatic approaches based on a laser scanner and simple heuristics to overcome these limitations.
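
As a small illustration of how the richer IFC model can be queried programmatically when preparing such robotic cells, the sketch below uses the open-source ifcopenshell library; the library choice and the file name are assumptions for illustration, not tools named in the paper.

```python
# Minimal sketch: listing structural members from an IFC model.
import ifcopenshell

model = ifcopenshell.open("structure.ifc")        # hypothetical input file
for beam in model.by_type("IfcBeam"):
    # Each element carries a stable GlobalId plus name/type information that a
    # DSTV part file does not bundle with the overall assembly context.
    print(beam.GlobalId, beam.Name)
for plate in model.by_type("IfcPlate"):
    print(plate.GlobalId, plate.Name)
```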

2017

Pose Invariant Object Recognition Using a Bag of Words Approach

Authors
Costa, CM; Sousa, A; Veiga, G;

Publication
ROBOT 2017: Third Iberian Robotics Conference - Volume 2, Seville, Spain, November 22-24, 2017.

Abstract
Pose-invariant object detection and classification plays a critical role in robust image recognition systems and can be applied in a multitude of applications, ranging from simple monitoring to advanced tracking. This paper analyzes the usage of the Bag of Words model for recognizing objects in different scales, orientations and perspective views within cluttered environments. The recognition system relies on image analysis techniques, such as feature detection, description and clustering, along with machine learning classifiers. For pinpointing the location of the target object, a multiscale sliding window approach is proposed, followed by dynamic thresholding segmentation. The recognition system was tested with several configurations of feature detectors, descriptors and classifiers and achieved an accuracy of 87% when recognizing cars from an annotated dataset with 177 training images and 177 testing images. © Springer International Publishing AG 2018.
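
To illustrate the Bag of (Visual) Words pipeline described above, here is a compact sketch that builds a visual vocabulary and trains a classifier on word histograms; the ORB detector, vocabulary size and SVM classifier are assumptions, since the paper evaluates several detector/descriptor/classifier combinations.

```python
# Illustrative Bag of Visual Words training sketch (OpenCV + scikit-learn).
import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.svm import SVC

def orb_descriptors(images):
    orb = cv2.ORB_create()
    per_image = []
    for img in images:
        _, des = orb.detectAndCompute(img, None)
        per_image.append(des if des is not None else np.empty((0, 32), np.uint8))
    return per_image

def bow_histograms(per_image_descriptors, kmeans):
    hists = []
    for des in per_image_descriptors:
        hist = np.zeros(kmeans.n_clusters, dtype=np.float32)
        if len(des):
            for word in kmeans.predict(des.astype(np.float32)):
                hist[word] += 1
            hist /= hist.sum()            # normalise so image size does not matter
        hists.append(hist)
    return np.array(hists)

def train_bow_classifier(train_images, labels, vocabulary_size=200):
    per_image = orb_descriptors(train_images)
    all_des = np.vstack([d for d in per_image if len(d)]).astype(np.float32)
    # Cluster local descriptors into a visual vocabulary, then train on histograms.
    kmeans = MiniBatchKMeans(n_clusters=vocabulary_size, n_init=3).fit(all_des)
    clf = SVC(kernel="rbf").fit(bow_histograms(per_image, kmeans), labels)
    return kmeans, clf
```

At detection time, the same histogram encoding would be applied to each window of a multiscale sliding window scan before classification, as described in the abstract.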
