
Details

  • Name

    Miguel Riem Oliveira
  • Role

    External Research Collaborator
  • Since

    1st March 2015
Publications

2018

Towards lifelong assistive robotics: A tight coupling between object perception and manipulation

Authors
Hamidreza Kasaei, SH; Oliveira, M; Lim, GH; Lopes, LS; Tome, AM;

Publication
NEUROCOMPUTING

Abstract
This paper presents an artificial cognitive system tightly integrating object perception and manipulation for assistive robotics. This is necessary for assistive robots, not only to perform manipulation tasks in a reasonable amount of time and in an appropriate manner, but also to robustly adapt to new environments by handling new objects. In particular, this system includes perception capabilities that allow robots to incrementally learn object categories from the set of accumulated experiences and reason about how to perform complex tasks. To achieve these goals, it is critical to detect, track and recognize objects in the environment as well as to conceptualize experiences and learn novel object categories in an open-ended manner, based on human-robot interaction. Interaction capabilities were developed to enable human users to teach new object categories and instruct the robot to perform complex tasks. A naive Bayes learning approach with a Bag-of-Words object representation is used to acquire and refine object category models. Perceptual memory is used to store object experiences, the feature dictionary and object category models. Working memory is employed to support communication purposes between the different modules of the architecture. A reactive planning approach is used to carry out complex tasks. To examine the performance of the proposed architecture, a quantitative evaluation and a qualitative analysis are carried out. Experimental results show that the proposed system is able to interact with human users, learn new object categories over time, and perform complex tasks.
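
The abstract describes acquiring and refining object category models with a naive Bayes learner over Bag-of-Words object representations, taught incrementally through human-robot interaction. The sketch below illustrates that general idea under simplifying assumptions; the class name, the `teach`/`classify` methods and the smoothing parameter are illustrative and are not taken from the paper's architecture.

```python
import numpy as np

class OpenEndedNaiveBayes:
    """Open-ended naive Bayes over Bag-of-Words object histograms (illustrative sketch)."""

    def __init__(self, dictionary_size, alpha=1.0):
        self.dictionary_size = dictionary_size   # size of the visual-word dictionary
        self.alpha = alpha                       # Laplace smoothing constant (assumed)
        self.word_counts = {}                    # category -> accumulated word counts
        self.instance_counts = {}                # category -> number of taught instances

    def teach(self, category, bow_histogram):
        """Add one labelled object view, e.g. provided through human-robot interaction."""
        if category not in self.word_counts:
            # unseen label: a new category is created on the fly (open-ended learning)
            self.word_counts[category] = np.zeros(self.dictionary_size)
            self.instance_counts[category] = 0
        self.word_counts[category] += bow_histogram
        self.instance_counts[category] += 1

    def classify(self, bow_histogram):
        """Return the most likely known category for an unlabelled object view."""
        total = sum(self.instance_counts.values())
        best, best_score = None, -np.inf
        for cat, counts in self.word_counts.items():
            log_prior = np.log(self.instance_counts[cat] / total)
            word_probs = (counts + self.alpha) / (counts.sum() + self.alpha * self.dictionary_size)
            score = log_prior + np.dot(bow_histogram, np.log(word_probs))
            if score > best_score:
                best, best_score = cat, score
        return best
```

New categories can be introduced at any time simply by teaching an unseen label, which is what makes this style of learning open-ended.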

2017

Special Issue on Autonomous Driving and Driver Assistance Systems

Authors
Santos, V; Sappa, AD; Oliveira, M;

Publication
ROBOTICS AND AUTONOMOUS SYSTEMS

Abstract

2016

A Hybrid Top-Down Bottom-Up Approach for the Detection of Cuboid Shaped Objects

Authors
Arrais, R; Oliveira, M; Toscano, C; Veiga, G;

Publication
IMAGE ANALYSIS AND RECOGNITION (ICIAR 2016)

Abstract
While bottom-up approaches to object recognition are simple to design and implement, they do not yield the same performance as top-down approaches. On the other hand, it is not trivial to obtain a moderate number of plausible hypotheses to be efficiently verified by top-down approaches. To address these shortcomings, we propose a hybrid top-down bottom-up approach to object recognition where a bottom-up procedure that generates a set of hypotheses based on data is combined with a top-down process for evaluating those hypotheses. We use the recognition of rectangular cuboid-shaped objects from 3D point cloud data as a benchmark problem for our research. Results obtained using this approach demonstrate promising recognition performance.
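
The abstract describes a generate-and-verify pipeline: a bottom-up step proposes cuboid hypotheses from the data and a top-down step evaluates them. The following sketch mimics that structure under strong simplifications (axis-aligned boxes, DBSCAN clustering for proposals, a surface-inlier ratio for verification); none of these choices come from the ICIAR 2016 paper.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def detect_axis_aligned_cuboids(points, eps=0.05, min_samples=30,
                                surface_tol=0.01, accept_ratio=0.6):
    """Generate-and-verify sketch for cuboid detection in an (N, 3) point cloud.

    Bottom-up: cluster the points and take each cluster's axis-aligned bounding
    box as a cuboid hypothesis. Top-down: verify the hypothesis by checking how
    many of its points lie close to one of the six box faces, as points sampled
    from a hollow cuboid would. All thresholds are assumed values.
    """
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit(points).labels_
    detections = []
    for label in set(labels) - {-1}:                       # -1 marks DBSCAN noise
        cluster = points[labels == label]
        lo, hi = cluster.min(axis=0), cluster.max(axis=0)  # hypothesis: bounding box
        # distance of each point to its nearest box face, per axis, then overall
        face_dist = np.minimum(cluster - lo, hi - cluster).min(axis=1)
        ratio = np.mean(face_dist < surface_tol)           # fraction of "surface" points
        if ratio >= accept_ratio:                          # top-down acceptance test
            detections.append((lo, hi, ratio))
    return detections
```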

2016

Incremental texture mapping for autonomous driving

Authors
Oliveira, M; Santos, V; Sappa, AD; Dias, P; Moreira, AP;

Publication
ROBOTICS AND AUTONOMOUS SYSTEMS

Abstract
Autonomous vehicles have a large number of on-board sensors, not only for providing coverage all around the vehicle, but also to ensure multi-modality in the observation of the scene. Because of this, it is not trivial to come up with a single, unique representation that feeds from the data given by all these sensors. We propose an algorithm which is capable of mapping texture collected from vision based sensors onto a geometric description of the scenario constructed from data provided by 3D sensors. The algorithm uses a constrained Delaunay triangulation to produce a mesh which is updated using a specially devised sequence of operations. These enforce a partial configuration of the mesh that avoids bad quality textures and ensures that there are no gaps in the texture. Results show that this algorithm is capable of producing fine quality textures.
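
The abstract describes projecting texture collected by vision sensors onto a mesh built from 3D data via a constrained Delaunay triangulation. The sketch below only illustrates the projection side: it uses SciPy's unconstrained Delaunay triangulation as a stand-in and a pinhole camera with assumed intrinsics K and pose (R, t) to assign texture coordinates; the paper's incremental mesh-update operations are not reproduced.

```python
import numpy as np
from scipy.spatial import Delaunay

def texture_mesh_from_camera(points_3d, K, R, t, image_size):
    """Triangulate scene points and compute per-vertex texture coordinates by
    projecting the vertices into a pinhole camera with intrinsics K and pose (R, t).

    Stand-in sketch: SciPy's unconstrained Delaunay on the (x, y) coordinates
    replaces the constrained, incrementally updated triangulation of the paper.
    """
    tri = Delaunay(points_3d[:, :2])                 # 2D triangulation of the geometry
    cam = R @ points_3d.T + t.reshape(3, 1)          # vertices in the camera frame
    pix = K @ cam                                    # pinhole projection
    uv = (pix[:2] / pix[2]).T                        # pixel coordinates (u, v) per vertex
    w, h = image_size
    visible = (cam[2] > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < w) \
                           & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    # keep only triangles whose three vertices project inside the image, so no
    # face is textured with pixels from outside the camera's field of view
    faces = tri.simplices[np.all(visible[tri.simplices], axis=1)]
    return faces, uv / np.array([w, h])              # textured faces + normalised UVs
```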

2016

Incremental scenario representations for autonomous driving using geometric polygonal primitives

Authors
Oliveira, M; Santos, V; Sappa, AD; Dias, P; Moreira, AP;

Publication
ROBOTICS AND AUTONOMOUS SYSTEMS

Abstract
When an autonomous vehicle is traveling through some scenario, it receives a continuous stream of sensor data. This sensor data arrives in an asynchronous fashion and often contains overlapping or redundant information. Thus, it is not trivial to create, and update over time, a representation of the environment observed by the vehicle. This paper presents a novel methodology to compute an incremental 3D representation of a scenario from 3D range measurements. We propose to use macro scale polygonal primitives to model the scenario. This means that the representation of the scene is given as a list of large scale polygons that describe the geometric structure of the environment. Furthermore, we propose mechanisms designed to update the geometric polygonal primitives over time whenever fresh sensor data is collected. Results show that the approach is capable of producing accurate descriptions of the scene, and that it is computationally very efficient when compared to other reconstruction techniques.
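
The abstract describes modelling the scene with macro-scale polygonal primitives that are updated whenever fresh range data arrives. A minimal sketch of that idea is given below, assuming planar primitives fitted with RANSAC and a coplanarity test to decide whether a new measurement extends an existing primitive or starts a new one; the thresholds and the merge rule are illustrative, not the paper's.

```python
import numpy as np

def ransac_plane(points, iters=200, dist_tol=0.05, rng=None):
    """Fit one planar primitive (n, d), |n| = 1, with n.x + d ~ 0 for inliers.
    Plain RANSAC, standing in for the primitive-extraction stage."""
    rng = rng or np.random.default_rng()
    best = (None, None, np.array([], dtype=int))
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue                                     # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        d = -np.dot(n, p0)
        inliers = np.flatnonzero(np.abs(points @ n + d) < dist_tol)
        if len(inliers) > len(best[2]):
            best = (n, d, inliers)
    return best

def update_primitives(primitives, new_scan, angle_tol=0.97, offset_tol=0.1):
    """Incremental update sketch: merge the plane fitted to a fresh scan into an
    existing, nearly coplanar primitive, otherwise start a new primitive."""
    n, d, inliers = ransac_plane(new_scan)
    if n is None:
        return primitives
    pts = new_scan[inliers]
    for prim in primitives:
        # align normal orientation before comparing, since RANSAC normals may flip
        n_a, d_a = (n, d) if np.dot(prim["n"], n) >= 0 else (-n, -d)
        if np.dot(prim["n"], n_a) > angle_tol and abs(prim["d"] - d_a) < offset_tol:
            prim["points"] = np.vstack([prim["points"], pts])   # grow the primitive
            return primitives
    primitives.append({"n": n, "d": d, "points": pts})          # new primitive
    return primitives
```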