
Publications by Miguel Riem Oliveira

2016

A Hybrid Top-Down Bottom-Up Approach for the Detection of Cuboid Shaped Objects

Authors
Arrais, R; Oliveira, M; Toscano, C; Veiga, G;

Publication
IMAGE ANALYSIS AND RECOGNITION (ICIAR 2016)

Abstract
While bottom-up approaches to object recognition are simple to design and implement, they do not yield the same performance as top-down approaches. On the other hand, it is not trivial to obtain a moderate number of plausible hypotheses to be efficiently verified by top-down approaches. To address these shortcomings, we propose a hybrid top-down bottom-up approach to object recognition in which a bottom-up procedure that generates a set of hypotheses from the data is combined with a top-down process for evaluating those hypotheses. We use the recognition of rectangular cuboid-shaped objects from 3D point cloud data as a benchmark problem for our research. Results obtained using this approach demonstrate promising recognition performance.
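The hybrid generate-and-verify idea can be illustrated with a minimal sketch. This is not the paper's algorithm: `bottom_up_hypotheses` and `top_down_score` are hypothetical stand-ins, the hypotheses are simple axis-aligned boxes rather than general cuboids, and the verification score is just mean point-to-face distance.

```python
import numpy as np

def bottom_up_hypotheses(points):
    """Bottom-up step (hypothetical helper): propose candidate boxes.
    Here, the axis-aligned bounding box plus an inflated variant."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    return [(lo, hi), (lo - 0.1, hi + 0.1)]

def top_down_score(points, box):
    """Top-down step: verify a hypothesis by the mean distance from
    each point to the nearest face of the candidate box (lower is better)."""
    lo, hi = box
    per_axis = np.minimum(points - lo, hi - points)  # distance to the two faces, per axis
    return float(np.abs(per_axis).min(axis=1).mean())

# Toy cloud: random points, with every third point flattened onto the x = 0 face.
rng = np.random.default_rng(0)
pts = rng.random((200, 3))
pts[::3, 0] = 0.0

hypotheses = bottom_up_hypotheses(pts)
best = min(hypotheses, key=lambda b: top_down_score(pts, b))  # the tight box scores better
```

The point of the structure is that the cheap bottom-up stage keeps the hypothesis set small, so the more expensive top-down verification only runs a handful of times.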

2015

Interactive Open-Ended Learning for 3D Object Recognition: An Approach and Experiments

Authors
Kasaei, SH; Oliveira, M; Lim, GH; Lopes, LS; Tome, AM;

Publication
JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS

Abstract
3D object detection and recognition are increasingly used for manipulation and navigation tasks in service robots. The process involves segmenting the objects present in a scene, estimating a feature descriptor for each object view and, finally, recognizing the object view by comparing it to the known object categories. This paper presents an efficient approach capable of learning and recognizing object categories in an interactive and open-ended manner. In this paper, "open-ended" implies that the set of object categories to be learned is not known in advance. The training instances are extracted from the on-line experiences of a robot, and thus become gradually available over time, rather than at the beginning of the learning process. This paper focuses on two open research questions: (1) How to automatically detect, conceptualize and recognize objects in 3D scenes in an open-ended manner? (2) How to acquire and use high-level knowledge obtained from the interaction with human users, namely when they provide category labels, in order to improve the system performance? The approach starts with a pre-processing step to remove irrelevant data and prepare a suitable point cloud for the subsequent processing. Clustering is then applied to detect object candidates, and object views are described based on a 3D shape descriptor called spin-image. Finally, a nearest-neighbor classification rule is used to predict the categories of the detected objects. A leave-one-out cross-validation algorithm is used to compute precision and recall, in a classical off-line evaluation setting, for different system parameters. Also, an on-line evaluation protocol is used to assess the performance of the system in an open-ended setting. Results show that the proposed system is able to interact with human users, learning new object categories continuously over time.
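The open-ended aspect, where categories arrive over time rather than up front, can be sketched with a tiny nearest-neighbor memory. This is only a schematic stand-in: the descriptors below are plain vectors rather than spin-images, and `OpenEndedRecognizer`, `teach`, and `recognize` are hypothetical names, not the paper's API.

```python
import numpy as np

class OpenEndedRecognizer:
    """Minimal sketch of open-ended category learning: the category set is
    not fixed in advance; a teacher adds labeled object views over time and
    recognition is nearest-neighbor over all stored descriptors."""

    def __init__(self):
        self.memory = []  # list of (descriptor, category label) pairs

    def teach(self, descriptor, label):
        """Store a labeled view; a new label implicitly creates a new category."""
        self.memory.append((np.asarray(descriptor, float), label))

    def recognize(self, descriptor):
        """Predict the category of the nearest stored view (Euclidean distance)."""
        d = np.asarray(descriptor, float)
        dists = [np.linalg.norm(d - m) for m, _ in self.memory]
        return self.memory[int(np.argmin(dists))][1]

rec = OpenEndedRecognizer()
rec.teach([1.0, 0.0], "mug")
rec.teach([0.0, 1.0], "box")
rec.teach([0.9, 0.1], "mug")
label = rec.recognize([0.8, 0.2])  # nearest stored view is a "mug" example
```

Because memory grows as instances arrive, precision/recall can be tracked over time, which is what the on-line evaluation protocol in the abstract measures.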

2016

Incremental texture mapping for autonomous driving

Authors
Oliveira, M; Santos, V; Sappa, AD; Dias, P; Paulo Moreira, AP;

Publication
ROBOTICS AND AUTONOMOUS SYSTEMS

Abstract
Autonomous vehicles have a large number of on-board sensors, not only for providing coverage all around the vehicle, but also to ensure multi-modality in the observation of the scene. Because of this, it is not trivial to come up with a single, unique representation that feeds from the data given by all these sensors. We propose an algorithm which is capable of mapping texture collected from vision based sensors onto a geometric description of the scenario constructed from data provided by 3D sensors. The algorithm uses a constrained Delaunay triangulation to produce a mesh which is updated using a specially devised sequence of operations. These enforce a partial configuration of the mesh that avoids bad quality textures and ensures that there are no gaps in the texture. Results show that this algorithm is capable of producing fine quality textures.
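The core pipeline, triangulate the geometry and then choose a texture source per triangle, can be sketched as follows. Note the hedges: `scipy.spatial.Delaunay` is unconstrained, whereas the paper uses a constrained triangulation with special update operations, and `pick_texture` is a hypothetical nearest-camera rule standing in for the paper's texture-quality criteria.

```python
import numpy as np
from scipy.spatial import Delaunay

# Toy geometry: five points on the ground plane, triangulated in 2D.
pts2d = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [0.5, 0.5]])
tri = Delaunay(pts2d)  # unconstrained; the paper uses a *constrained* variant

def pick_texture(triangle_center, cameras):
    """Assign each triangle the nearest camera (hypothetical stand-in
    for the paper's criteria that avoid bad-quality textures)."""
    d = np.linalg.norm(cameras - triangle_center, axis=1)
    return int(np.argmin(d))

cameras = np.array([[0., 0.], [1., 1.]])          # two camera positions (2D toy)
centers = pts2d[tri.simplices].mean(axis=1)       # centroid of each triangle
assignment = [pick_texture(c, cameras) for c in centers]
```

Since every triangle receives exactly one texture source, the mesh is covered with no gaps, which is the property the paper's update operations are designed to preserve as new sensor data arrives.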

2016

Incremental scenario representations for autonomous driving using geometric polygonal primitives

Authors
Oliveira, M; Santos, V; Sappa, AD; Dias, P; Paulo Moreira, AP;

Publication
ROBOTICS AND AUTONOMOUS SYSTEMS

Abstract
When an autonomous vehicle is traveling through some scenario it receives a continuous stream of sensor data. This sensor data arrives in an asynchronous fashion and often contains overlapping or redundant information. Thus, it is not trivial to create, and update over time, a representation of the environment observed by the vehicle. This paper presents a novel methodology to compute an incremental 3D representation of a scenario from 3D range measurements. We propose to use macro scale polygonal primitives to model the scenario. This means that the representation of the scene is given as a list of large scale polygons that describe the geometric structure of the environment. Furthermore, we propose mechanisms designed to update the geometric polygonal primitives over time whenever fresh sensor data is collected. Results show that the approach is capable of producing accurate descriptions of the scene, and that it is computationally very efficient when compared to other reconstruction techniques.
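A polygonal primitive is essentially a planar patch fitted to range data, so the idea can be illustrated with a least-squares plane fit. The update step below, naively refitting on the merged point set, is a hypothetical stand-in for the paper's incremental merging mechanisms, and `fit_plane`/`update_primitive` are illustrative names.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a 3D point patch: returns (normal, centroid).
    The normal is the direction of least variance from an SVD of the centered data."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    return vt[-1], c

def update_primitive(points_old, points_new):
    """Naive incremental update (hypothetical): refit on the merged set.
    The paper instead updates the polygons directly as fresh data arrives."""
    merged = np.vstack([points_old, points_new])
    return fit_plane(merged), merged

# Toy data: two batches of noisy range samples from the ground plane z = 0.
rng = np.random.default_rng(1)
old = np.c_[rng.random((50, 2)), 1e-3 * rng.standard_normal(50)]
new = np.c_[rng.random((30, 2)), 1e-3 * rng.standard_normal(30)]
(normal, centroid), merged = update_primitive(old, new)  # normal stays close to the z axis
```

Representing a wall or road segment as one such primitive instead of thousands of raw points is where the computational efficiency claimed in the abstract comes from.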

2015

Multimodal inverse perspective mapping

Authors
Oliveira, M; Santos, V; Sappa, AD;

Publication
Information Fusion

Abstract
Over the past years, inverse perspective mapping has been successfully applied to several problems in the field of Intelligent Transportation Systems. In brief, the method consists of mapping images to a new coordinate system where perspective effects are removed. The removal of perspective associated effects facilitates road and obstacle detection and also assists in free space estimation. There is, however, a significant limitation in the inverse perspective mapping: the presence of obstacles on the road disrupts the effectiveness of the mapping. The current paper proposes a robust solution based on the use of multimodal sensor fusion. Data from a laser range finder is fused with images from the cameras, so that the mapping is not computed in the regions where obstacles are present. As shown in the results, this considerably improves the effectiveness of the algorithm and reduces computation time when compared with the classical inverse perspective mapping. Furthermore, the proposed approach is also able to cope with several cameras with different lenses or image resolutions, as well as dynamic viewpoints.
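Classical inverse perspective mapping is a ground-plane homography, and the multimodal twist in the abstract amounts to masking out obstacle pixels before mapping. A minimal sketch, where the homography matrix `H` is a made-up diagonal stand-in for a real calibrated camera-to-ground mapping and the obstacle mask stands in for laser range finder output:

```python
import numpy as np

def apply_homography(H, uv):
    """Map pixel coordinates (u, v) to ground-plane coordinates via a
    projective transform: homogenize, multiply by H, de-homogenize."""
    p = np.c_[uv, np.ones(len(uv))] @ H.T
    return p[:, :2] / p[:, 2:3]

# Hypothetical ground-plane homography (a real one comes from camera calibration).
H = np.array([[0.01, 0.0, 0.0],
              [0.0, 0.01, 0.0],
              [0.0, 0.0, 1.0]])

uv = np.array([[100., 200.], [300., 400.]])
obstacle = np.array([False, True])  # laser flags the second pixel as lying on an obstacle

# Multimodal fusion in a nutshell: map only the pixels not flagged as obstacles.
ground = apply_homography(H, uv[~obstacle])  # -> [[1.0, 2.0]]
```

Skipping the flagged pixels is also why the fused version is faster than classical inverse perspective mapping: obstacle regions are simply never warped.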

2016

Robotics: Using a Competition Mindset as a Tool for Learning ROS

Authors
Costa, V; Cunha, T; Oliveira, M; Sobreira, H; Sousa, A;

Publication
ROBOT 2015: SECOND IBERIAN ROBOTICS CONFERENCE: ADVANCES IN ROBOTICS, VOL 1

Abstract
This article presents a course that explores the potential of learning ROS using a collaborative game world. The competitive mindset and its origins are explored, and an analysis of a collaborative game is presented in detail, showing how some key design features lead participants to overcome the proposed challenges through cooperation and collaboration. The data analysis is supported by observation of two different game simulations: the first, where all competitors played solo, and the second, where the players were divided into groups of three. Lastly, the authors reflect on the potential of this course as a tool for learning ROS.
