
Publications by Miguel Riem Oliveira

2015

A Probabilistic Approach for Color Correction in Image Mosaicking Applications

Authors
Oliveira, M; Sappa, AD; Santos, V;

Publication
IEEE TRANSACTIONS ON IMAGE PROCESSING

Abstract
Image mosaicking applications require both geometrical and photometrical registrations between the images that compose the mosaic. This paper proposes a probabilistic color correction algorithm for correcting the photometrical disparities. First, the image to be color corrected is segmented into several regions using mean shift. Then, connected regions are extracted using a region fusion algorithm. Local joint image histograms of each region are modeled as collections of truncated Gaussians using a maximum likelihood estimation procedure. Then, local color palette mapping functions are computed using these sets of Gaussians. The color correction is performed by applying those functions to all the regions of the image. An extensive comparison with ten other state-of-the-art color correction algorithms is presented, using two different image pair data sets. Results show that the proposed approach obtains the best average scores in both data sets and evaluation metrics and is also the most robust to failures.
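As a rough illustration only (not the authors' implementation), the sketch below estimates a per-region look-up table from the joint histogram of an aligned image pair and applies it to each segmented region; it substitutes a plain conditional mean for the paper's truncated-Gaussian maximum-likelihood fit, and all function names are hypothetical.

```python
# Minimal sketch: per-region color mapping estimated from the joint histogram
# of an overlapping image pair. Assumes 8-bit single-channel arrays `src` and
# `ref` aligned over the overlap, and a label image `regions` from any
# segmentation (e.g. mean shift). Not the paper's exact algorithm.
import numpy as np

def region_mapping(src, ref, mask):
    """Estimate a 256-entry look-up table for one region from the joint histogram."""
    joint, _, _ = np.histogram2d(src[mask], ref[mask], bins=256,
                                 range=[[0, 256], [0, 256]])
    lut = np.arange(256, dtype=np.float64)      # identity fallback for empty bins
    counts = joint.sum(axis=1)
    valid = counts > 0
    # Conditional mean of the reference value given the source value; the paper
    # instead fits collections of truncated Gaussians by maximum likelihood.
    lut[valid] = (joint[valid] * np.arange(256)).sum(axis=1) / counts[valid]
    return lut

def correct(src, ref, regions, overlap_mask):
    """Apply a per-region mapping to the whole image to be color corrected."""
    out = src.astype(np.float64).copy()
    for r in np.unique(regions):
        m = (regions == r)
        lut = region_mapping(src, ref, m & overlap_mask)
        out[m] = lut[src[m]]
    return np.clip(out, 0, 255).astype(np.uint8)
```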

2015

An Adaptive Object Perception System based on Environment Exploration and Bayesian Learning

Authors
Hamidreza Kasaei, SH; Oliveira, M; Lim, GH; Lopes, LS; Tome, AM;

Publication
2015 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC)

Abstract
Cognitive robotics looks at human cognition as a source of inspiration for automatic perception capabilities that will allow robots to learn and reason about how to behave in response to complex goals. For instance, humans learn to recognize object categories ceaselessly over time. This ability to refine knowledge from the set of accumulated experiences facilitates the adaptation to new environments. Inspired by such abilities, this paper proposes an efficient approach towards 3D object category learning and recognition in an interactive and open-ended manner. To achieve this goal, this paper focuses on two state-of-the-art questions: (i) how to use unsupervised object exploration to construct a dictionary of visual words for representing objects in a highly compact and distinctive way; (ii) how to incrementally learn probabilistic models of object categories to achieve adaptability. To examine the performance of the proposed approach, a quantitative evaluation and a qualitative analysis are used. The experimental results showed the fulfilling performance of this approach on different types of objects. The proposed system is able to interact with human users and learn new object categories over time.
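As a purely illustrative sketch of the open-ended learning loop described above, the fragment below keeps one Bag-of-Words count vector per category and updates it incrementally, classifying with a naive Bayes rule; the class and method names are assumptions rather than the authors' code.

```python
# Illustrative sketch only: open-ended category learning with a Bag-of-Words
# object representation and incrementally updated naive Bayes category models.
import numpy as np

class OpenEndedLearner:
    def __init__(self, dictionary_size):
        self.V = dictionary_size
        self.word_counts = {}   # category -> visual-word count vector
        self.n_views = {}       # category -> number of teaching experiences

    def teach(self, category, word_histogram):
        """Incrementally update the model of `category` with one labelled view."""
        if category not in self.word_counts:
            self.word_counts[category] = np.zeros(self.V)
            self.n_views[category] = 0
        self.word_counts[category] += word_histogram
        self.n_views[category] += 1

    def recognize(self, word_histogram):
        """Return the most probable known category (naive Bayes, Laplace smoothing)."""
        total_views = sum(self.n_views.values())
        best, best_lp = None, -np.inf
        for c, counts in self.word_counts.items():
            prior = np.log(self.n_views[c] / total_views)
            likelihood = np.log((counts + 1) / (counts.sum() + self.V))
            lp = prior + (word_histogram * likelihood).sum()
            if lp > best_lp:
                best, best_lp = c, lp
        return best
```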

2014

Grounding language in perception for scene conceptualization in autonomous robots

Authors
Dubba, KSR; De Oliveira, MR; Lim, GH; Kasaei, H; Lopes, LS; Tome, A; Cohn, AG;

Publication
AAAI Spring Symposium - Technical Report

Abstract
In order to behave autonomously, it is desirable for robots to have the ability to use human supervision and learn from different input sources (perception, gestures, verbal and textual descriptions, etc.). In many machine learning tasks, the supervision is directed specifically towards machines and hence consists of straightforward, clearly annotated examples. But this is not always very practical, and recently it was found that the most preferred interface to robots is natural language. Also, the supervision might only be available in a rather indirect form, which may be vague and incomplete. This is frequently the case when humans teach other humans, since they may assume a particular context and existing world knowledge. We explore this idea here in the setting of conceptualizing objects and scene layouts. Initially the robot undergoes training from a human in recognizing some objects in the world and, armed with this acquired knowledge, it sets out in the world to explore and learn higher-level concepts such as static scene layouts and environment activities. Here it has to exploit its learned knowledge and ground language in perception in order to use inputs from different sources that might have overlapping as well as novel information. When exploring, we assume that the robot is given visual input, without explicit type labels for objects, and also that it has access to more or less generic linguistic descriptions of the scene layout. Thus our task here is to learn the spatial structure of a scene layout and, simultaneously, visual models of objects it was not trained on. In this paper, we present a cognitive architecture and learning framework for robot learning through natural human supervision and using multiple input sources by grounding language in perception.
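The following fragment is a hypothetical, highly simplified illustration of the grounding step: nouns from a generic scene description are matched to unlabeled perceived object clusters using whatever object models the robot already has, and the remaining nouns become candidates for new visual models. All names and the matching strategy are assumptions, not the paper's method.

```python
# Purely illustrative sketch: grounding nouns from a scene description in
# unlabeled perceived object clusters, known models first, new nouns last.
def ground_description(description_nouns, perceived_clusters, known_models):
    """description_nouns: e.g. ["monitor", "keyboard", "mouse"];
    perceived_clusters: cluster_id -> feature vector;
    known_models: noun -> callable returning a confidence for a feature vector."""
    groundings, unknown = {}, []
    remaining = dict(perceived_clusters)
    for noun in description_nouns:
        if noun in known_models and remaining:
            # Assign the cluster the known model is most confident about.
            best = max(remaining, key=lambda c: known_models[noun](remaining[c]))
            groundings[noun] = best
            remaining.pop(best)
        else:
            unknown.append(noun)   # candidate for a new visual object model
    return groundings, unknown
```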

2016

3D object perception and perceptual learning in the RACE project

Authors
Oliveira, M; Lopes, LS; Lim, GH; Hamidreza Kasaei, SH; Tome, AM; Chauhan, A;

Publication
ROBOTICS AND AUTONOMOUS SYSTEMS

Abstract
This paper describes a 3D object perception and perceptual learning system developed for a complex artificial cognitive agent working in a restaurant scenario. This system, developed within the scope of the European project RACE, integrates detection, tracking, learning and recognition of tabletop objects. Interaction capabilities were also developed to enable a human user to take the role of instructor and teach new object categories. Thus, the system learns in an incremental and open-ended way from user-mediated experiences. Based on the analysis of memory requirements for storing both semantic and perceptual data, a dual memory approach, comprising a semantic memory and a perceptual memory, was adopted. The perceptual memory is the central data structure of the described perception and learning system. The goal of this paper is twofold: on one hand, we provide a thorough description of the developed system, starting with motivations, cognitive considerations and architecture design, then providing details on the developed modules, and finally presenting a detailed evaluation of the system; on the other hand, we emphasize the crucial importance of the Point Cloud Library (PCL) for developing such a system.
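As a minimal, hypothetical sketch of the dual memory idea (not the project's actual data structures), the fragment below separates a perceptual memory, holding object views and category models, from a semantic memory holding symbolic facts.

```python
# Illustrative only: field names and structure are assumptions.
from dataclasses import dataclass, field

@dataclass
class PerceptualMemory:
    object_views: dict = field(default_factory=dict)     # track_id -> list of feature histograms
    category_models: dict = field(default_factory=dict)  # category -> statistical model

    def store_view(self, track_id, features):
        self.object_views.setdefault(track_id, []).append(features)

@dataclass
class SemanticMemory:
    facts: list = field(default_factory=list)            # e.g. ("on", "mug_1", "table_1")

    def assert_fact(self, predicate, *args):
        self.facts.append((predicate,) + tuple(args))
```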

2018

Towards lifelong assistive robotics: A tight coupling between object perception and manipulation

Authors
Hamidreza Kasaei, SH; Oliveira, M; Lim, GH; Lopes, LS; Tome, AM;

Publication
NEUROCOMPUTING

Abstract
This paper presents an artificial cognitive system tightly integrating object perception and manipulation for assistive robotics. This is necessary for assistive robots, not only to perform manipulation tasks in a reasonable amount of time and in an appropriate manner, but also to robustly adapt to new environments by handling new objects. In particular, this system includes perception capabilities that allow robots to incrementally learn object categories from the set of accumulated experiences and reason about how to perform complex tasks. To achieve these goals, it is critical to detect, track and recognize objects in the environment as well as to conceptualize experiences and learn novel object categories in an open-ended manner, based on human-robot interaction. Interaction capabilities were developed to enable human users to teach new object categories and instruct the robot to perform complex tasks. A naive Bayes learning approach with a Bag-of-Words object representation is used to acquire and refine object category models. Perceptual memory is used to store object experiences, the feature dictionary and object category models. Working memory is employed to support communication between the different modules of the architecture. A reactive planning approach is used to carry out complex tasks. To examine the performance of the proposed architecture, a quantitative evaluation and a qualitative analysis are carried out. Experimental results show that the proposed system is able to interact with human users, learn new object categories over time, as well as perform complex tasks.
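The working memory described above acts as a communication hub between modules. The fragment below is an illustrative blackboard-style sketch of that role only; the keys, callbacks and example data are invented for the example.

```python
# Hypothetical blackboard-style working memory for inter-module communication.
from collections import defaultdict

class WorkingMemory:
    def __init__(self):
        self.items = {}                       # key -> latest value written by any module
        self.subscribers = defaultdict(list)  # key -> callbacks to notify on change

    def subscribe(self, key, callback):
        self.subscribers[key].append(callback)

    def write(self, key, value):
        self.items[key] = value
        for cb in self.subscribers[key]:
            cb(value)

# Example: the object tracker publishes a new tabletop object and the
# recognition module is notified to classify it.
wm = WorkingMemory()
wm.subscribe("new_object", lambda obj: print("recognize:", obj))
wm.write("new_object", {"track_id": 7, "features": [0.1, 0.3, 0.6]})
```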

2011

Unsupervised Local Color Correction for Coarsely Registered Images

Authors
Oliveira, M; Sappa, AD; Santos, V;

Publication
2011 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR)

Abstract
The current paper proposes a new parametric local color correction technique. First, several color transfer functions are computed from the output of the mean shift color segmentation algorithm. Then, color influence maps are calculated. Finally, the contribution of every color transfer function is merged using the weights from the color influence maps. The proposed approach is compared with both global and local color correction approaches. Results show that our method outperforms the technique ranked first in a recent performance evaluation on this topic. Moreover, the proposed approach is computed in about one tenth of the time.
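As a rough, hypothetical illustration of the final merging step (not the paper's exact formulation), the sketch below blends several per-region color transfer functions using soft color-similarity weights that stand in for the color influence maps; the function and parameter names are assumptions.

```python
# Illustrative sketch: merge per-region transfer functions with soft weights.
import numpy as np

def blend_local_corrections(src, region_means, luts, sigma=25.0):
    """src: HxW uint8 channel; region_means: (R,) mean color per region;
    luts: (R, 256) per-region transfer functions; returns corrected channel."""
    src_f = src.astype(np.float64)
    # Influence of region r at each pixel, based on color similarity.
    weights = np.stack([np.exp(-((src_f - m) ** 2) / (2 * sigma ** 2))
                        for m in region_means])              # (R, H, W)
    weights /= weights.sum(axis=0, keepdims=True) + 1e-12
    # Apply every transfer function, then merge with the influence weights.
    corrected = np.stack([lut[src] for lut in luts])          # (R, H, W)
    out = (weights * corrected).sum(axis=0)
    return np.clip(out, 0, 255).astype(np.uint8)
```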
