
Publications by Miguel Riem Oliveira

2011

Unsupervised Local Color Correction for Coarsely Registered Images

Authors
Oliveira, M; Sappa, AD; Santos, V;

Publication
2011 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR)

Abstract
The current paper proposes a new parametric local color correction technique. First, several color transfer functions are computed from the output of the mean shift color segmentation algorithm. Second, color influence maps are calculated. Finally, the contributions of all the color transfer functions are merged using the weights from the color influence maps. The proposed approach is compared with both global and local color correction approaches. Results show that our method outperforms the technique ranked first in a recent performance evaluation on this topic. Moreover, the proposed approach is computed in about one tenth of the time.
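The final merging step described in the abstract can be sketched as a per-pixel weighted sum. This is a hypothetical illustration, not the paper's implementation: it assumes each segmented region contributes a simple linear transfer (gain and offset) and a normalized influence map, and works on a single channel for brevity.

```python
import numpy as np

def merge_color_transfers(image, transfers, influence_maps):
    """image: HxW float array (one channel, for simplicity).
    transfers: list of (gain, offset) pairs, one per region.
    influence_maps: list of HxW weight maps, summing to 1 per pixel."""
    corrected = np.zeros_like(image)
    # Each region's transfer function is applied everywhere, then weighted
    # by that region's influence map, so transitions between regions blend.
    for (gain, offset), w in zip(transfers, influence_maps):
        corrected += w * (gain * image + offset)
    return corrected

# Toy example: two regions, a brightening and a darkening transfer.
img = np.full((2, 2), 0.5)
w1 = np.array([[1.0, 1.0], [0.0, 0.0]])  # top row belongs to region 1
w2 = 1.0 - w1                             # bottom row belongs to region 2
out = merge_color_transfers(img, [(1.2, 0.0), (0.8, 0.1)], [w1, w2])
```

In the paper the transfer functions and influence maps come from mean shift segmentation; here they are hard-coded purely to show the merging arithmetic.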

2012

Color Correction for Onboard Multi-camera Systems using 3D Gaussian Mixture Models

Authors
Oliveira, M; Sappa, AD; Santos, V;

Publication
2012 IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV)

Abstract
The current paper proposes a novel color correction approach for onboard multi-camera systems. It works by segmenting the given images into several regions. A probabilistic segmentation framework, using 3D Gaussian Mixture Models, is proposed. Regions are used to compute local color correction functions, which are then combined to obtain the final corrected image. An image data set of road scenarios is used to establish a performance comparison of the proposed method with seven other well-known color correction algorithms. Results show that the proposed approach is the highest-scoring color correction method. Also, the proposed single-step 3D color space probabilistic segmentation reduces processing time over similar approaches.
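The core of the probabilistic segmentation step is assigning each pixel, viewed as a point in 3D color space, to the mixture component with the highest posterior probability. The sketch below is a hypothetical illustration of that assignment with fixed Gaussian parameters; the paper fits the mixture from data, which is omitted here.

```python
import numpy as np

def gaussian_pdf(x, mean, cov):
    """Multivariate normal density for 3D color vectors (rows of x)."""
    d = x - mean
    inv = np.linalg.inv(cov)
    norm = np.sqrt((2 * np.pi) ** 3 * np.linalg.det(cov))
    expo = -0.5 * np.einsum('...i,ij,...j->...', d, inv, d)
    return np.exp(expo) / norm

def segment(pixels, means, covs, weights):
    """Label each pixel with the mixture component of highest posterior."""
    probs = np.stack([w * gaussian_pdf(pixels, m, c)
                      for m, c, w in zip(means, covs, weights)], axis=-1)
    return probs.argmax(axis=-1)

# Toy example: a reddish and a bluish pixel, two color clusters.
pixels = np.array([[0.9, 0.1, 0.1], [0.1, 0.1, 0.9]])
means = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])]
covs = [np.eye(3) * 0.05] * 2
labels = segment(pixels, means, covs, [0.5, 0.5])
```

Working directly in the 3D color space, as the abstract notes, makes segmentation a single step per pixel rather than a separate spatial pass.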

2010

ATLASCAR - Technologies for a computer assisted driving system on board a common automobile

Authors
Santos, V; Almeida, J; Avila, E; Gameiro, D; Oliveira, M; Pascoal, R; Sabino, R; Stein, P;

Publication
IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC

Abstract
The future of intelligent vehicles will rely on robust information to provide proper feedback to the vehicle itself, to enable several kinds of active safety, but above all to inform the driver by calling his or her attention to potential instantaneous or mid-term risks associated with driving. Before true vehicle autonomy, safety and driver assistance are the priority. Sophisticated sensorial and perceptive mechanisms must be made available, first to assist the driver and, at a later stage, to support greater autonomy. These mechanisms rely on sensors and algorithms that are mostly available nowadays, but many of them are still unsuited for critical situations. This paper presents a project in which engineering and scientific solutions have been devised to set up a full-featured, real-scale platform for the next generation of ITS vehicles concerned with the immediate issues of navigation and challenges on the road. The car is now ready and running, and data gathering has just begun. ©2010 IEEE.

2012

3D-2D Laser Range Finder Calibration Using a Conic Based Geometry Shape

Authors
Almeida, M; Dias, P; Oliveira, M; Santos, V;

Publication
IMAGE ANALYSIS AND RECOGNITION, PT I

Abstract
The AtlasCar is a prototype that is being developed at the University of Aveiro to research advanced driver assistance systems. The car is equipped with several sensors: 3D and 2D laser scanners, a stereo camera, inertial sensors and GPS. Combining all these sensor data into useful representations is essential; therefore, calibration is one of the first problems to tackle. This paper focuses on 3D/2D laser calibration. The proposed method uses a 3D Laser Range Finder (LRF) to produce a reference 3D point cloud containing a known calibration object. Manual input from the user and knowledge of the object geometry are used to register the 3D point cloud with the 2D lasers. Experimental results with simulated and real data demonstrate the effectiveness of the proposed calibration method.

2012

Color Correction Using 3D Gaussian Mixture Models

Authors
Oliveira, M; Sappa, AD; Santos, V;

Publication
IMAGE ANALYSIS AND RECOGNITION, PT I

Abstract
The current paper proposes a novel color correction approach based on a probabilistic segmentation framework by using 3D Gaussian Mixture Models. Regions are used to compute local color correction functions, which are then combined to obtain the final corrected image. The proposed approach is evaluated using both a recently published metric and two large data sets composed of seventy images. The evaluation is performed by comparing our algorithm with eight well known color correction algorithms. Results show that the proposed approach is the highest scoring color correction method. Also, the proposed single step 3D color space probabilistic segmentation reduces processing time over similar approaches.

2014

Multimodal inverse perspective mapping

Authors
Oliveira, M; Santos, V; Sappa, AD;

Publication
Information Fusion

Abstract
Over the past years, inverse perspective mapping has been successfully applied to several problems in the field of Intelligent Transportation Systems. In brief, the method consists of mapping images to a new coordinate system where perspective effects are removed. The removal of perspective associated effects facilitates road and obstacle detection and also assists in free space estimation. There is, however, a significant limitation in the inverse perspective mapping: the presence of obstacles on the road disrupts the effectiveness of the mapping. The current paper proposes a robust solution based on the use of multimodal sensor fusion. Data from a laser range finder is fused with images from the cameras, so that the mapping is not computed in the regions where obstacles are present. As shown in the results, this considerably improves the effectiveness of the algorithm and reduces computation time when compared with the classical inverse perspective mapping. Furthermore, the proposed approach is also able to cope with several cameras with different lenses or image resolutions, as well as dynamic viewpoints. © 2014 Elsevier B.V.
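The abstract's key idea, mapping image points to a ground-plane coordinate system while skipping regions flagged as obstacles, can be sketched as follows. This is a hypothetical illustration, not the paper's method: it assumes a known image-to-ground homography and a precomputed obstacle mask (in the paper, the mask would come from the fused laser range finder data).

```python
import numpy as np

def inverse_perspective_map(coords, H, obstacle_mask):
    """coords: Nx2 pixel coordinates; H: 3x3 image-to-ground homography;
    obstacle_mask: length-N booleans (True where an obstacle is flagged).
    Returns ground-plane coordinates, NaN where the mapping is skipped."""
    homog = np.hstack([coords, np.ones((len(coords), 1))])
    mapped = (H @ homog.T).T
    ground = mapped[:, :2] / mapped[:, 2:3]  # dehomogenize
    # Obstacles violate the flat-road assumption behind the homography,
    # so those pixels are excluded rather than mapped incorrectly.
    ground[obstacle_mask] = np.nan
    return ground

H = np.eye(3)  # identity homography, purely for the toy example
pts = np.array([[10.0, 20.0], [30.0, 40.0]])
mask = np.array([False, True])
g = inverse_perspective_map(pts, H, mask)
```

Masking before mapping is also what yields the reported speedup: obstacle pixels are simply never processed.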
