About

I received a Ph.D. in Electrical and Computer Engineering from the University of Porto in 1994.

I am currently an Associate Professor at the Department of Electrical and Computer Engineering, Faculty of Engineering of the University of Porto (FEUP), where I teach in the areas of communication systems and signal processing.

I have been a researcher at INESC TEC since 1985, and my research interests include image and video processing and computer vision.

Details

  • Name

    Luís Corte Real
  • Role

    Senior Researcher
  • Since

    1 June 1985
  • Nationality

    Portuguese
  • Contacts

    +351 222 094 299
    luis.corte-real@inesctec.pt
Publications

2023

From a Visual Scene to a Virtual Representation: A Cross-Domain Review

Authors
Pereira, A; Carvalho, P; Pereira, N; Viana, P; Corte-Real, L;

Publication
IEEE ACCESS

Abstract
The widespread use of smartphones and other low-cost equipment as recording devices, the massive growth in bandwidth, and the ever-growing demand for new applications with enhanced capabilities, made visual data a must in several scenarios, including surveillance, sports, retail, entertainment, and intelligent vehicles. Despite significant advances in analyzing and extracting data from images and video, there is a lack of solutions able to analyze and semantically describe the information in the visual scene so that it can be efficiently used and repurposed. Scientific contributions have focused on individual aspects or addressing specific problems and application areas, and no cross-domain solution is available to implement a complete system that enables information passing between cross-cutting algorithms. This paper analyses the problem from an end-to-end perspective, i.e., from the visual scene analysis to the representation of information in a virtual environment, including how the extracted data can be described and stored. A simple processing pipeline is introduced to set up a structure for discussing challenges and opportunities in different steps of the entire process, allowing to identify current gaps in the literature. The work reviews various technologies specifically from the perspective of their applicability to an end-to-end pipeline for scene analysis and synthesis, along with an extensive analysis of datasets for relevant tasks.

2023

Synthesizing Human Activity for Data Generation

Authors
Romero, A; Carvalho, P; Corte-Real, L; Pereira, A;

Publication
JOURNAL OF IMAGING

Abstract
The problem of gathering sufficiently representative data, such as those about human actions, shapes, and facial expressions, is costly and time-consuming and also requires training robust models. This has led to the creation of techniques such as transfer learning or data augmentation. However, these are often insufficient. To address this, we propose a semi-automated mechanism that allows the generation and editing of visual scenes with synthetic humans performing various actions, with features such as background modification and manual adjustments of the 3D avatars to allow users to create data with greater variability. We also propose an evaluation methodology for assessing the results obtained using our method, which is two-fold: (i) the usage of an action classifier on the output data resulting from the mechanism and (ii) the generation of masks of the avatars and the actors to compare them through segmentation. The avatars were robust to occlusion, and their actions were recognizable and accurate to their respective input actors. The results also showed that even though the action classifier concentrates on the pose and movement of the synthetic humans, it strongly depends on contextual information to precisely recognize the actions. Generating the avatars for complex activities also proved problematic for action recognition and the clean and precise formation of the masks.
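The mask-based part of the evaluation methodology described above boils down to comparing the segmentation mask of each synthetic avatar against that of the original actor. A minimal sketch of such a comparison, using a plain intersection-over-union (IoU) score on binary masks (an illustrative stand-in, not the paper's actual evaluation pipeline):

```python
import numpy as np

def mask_iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection-over-Union between two binary segmentation masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return float(np.logical_and(a, b).sum() / union)

# Toy example: avatar mask vs. actor mask on a 4x4 frame
avatar = np.array([[0, 1, 1, 0],
                   [0, 1, 1, 0],
                   [0, 1, 1, 0],
                   [0, 0, 0, 0]])
actor  = np.array([[0, 1, 1, 0],
                   [0, 1, 1, 0],
                   [0, 0, 1, 0],
                   [0, 0, 0, 0]])
print(mask_iou(avatar, actor))  # 5/6, i.e. about 0.833
```

A higher IoU indicates that the generated avatar occupies the same image region as the actor it replaces.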

2022

Boosting color similarity decisions using the CIEDE2000_PF Metric

Authors
Pereira, A; Carvalho, P; Corte Real, L;

Publication
SIGNAL IMAGE AND VIDEO PROCESSING

Abstract
Color comparison is a key aspect in many areas of application, including industrial applications, and different metrics have been proposed. In many applications, this comparison is required to be closely related to human perception of color differences, thus adding complexity to the process. To tackle this, different approaches were proposed through the years, culminating in the CIEDE2000 formulation. In our previous work, we showed that simple color properties could be used to reduce the computational time of a color similarity decision process that employed this metric, which is recognized as having high computational complexity. In this paper, we show mathematically and experimentally that these findings can be adapted and extended to the recently proposed CIEDE2000_PF metric, which has been recommended by the CIE for industrial applications. Moreover, we propose new efficient models that not only achieve lower error rates, but also outperform the results obtained for the CIEDE2000 metric.

2020

Efficient CIEDE2000-Based Color Similarity Decision for Computer Vision

Authors
Pereira, A; Carvalho, P; Coelho, G; Corte Real, L;

Publication
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY

Abstract
Color and color differences are critical aspects in many image processing and computer vision applications. A paradigmatic example is object segmentation, where color distances can greatly influence the performance of the algorithms. Metrics for color difference have been proposed in the literature, including the definition of standards such as CIEDE2000, which quantifies the change in visual perception of two given colors. This standard has been recommended for industrial computer vision applications, but the benefits of its application have been impaired by the complexity of the formula. This paper proposes a new strategy that improves the usability of the CIEDE2000 metric when a maximum acceptable distance can be imposed. We argue that, for applications where a maximum value, above which colors are considered to be different, can be established, then it is possible to reduce the amount of calculations of the metric, by preemptively analyzing the color features. This methodology encompasses the benefits of the metric while overcoming its computational limitations, thus broadening the range of applications of CIEDE2000 in both the computer vision algorithms and computational resource requirements.
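The general idea behind the strategy above, deciding "different" from cheap color features before ever evaluating the full CIEDE2000 formula, can be sketched as follows. This is only an illustration of the principle, not the models proposed in the paper: it uses the fact that the lightness term of CIEDE2000 gives a lower bound ΔE00 ≥ |ΔL| / S_L (with k_L = 1), and that S_L never exceeds roughly 1.747 for L in [0, 100]:

```python
# Illustrative early-rejection test for a CIEDE2000 similarity decision.
# S_L = 1 + 0.015*(Lbar - 50)^2 / sqrt(20 + (Lbar - 50)^2) peaks at about
# 1.747 for Lbar in [0, 100], so Delta_E00 >= |dL| / 1.75 always holds.
S_L_MAX = 1.75  # slightly loose upper bound on S_L

def definitely_different(L1: float, L2: float, tau: float) -> bool:
    """Cheap pre-test: True means the full CIEDE2000 distance is
    guaranteed to exceed tau, so the expensive formula can be skipped."""
    return abs(L1 - L2) / S_L_MAX > tau

print(definitely_different(20.0, 90.0, tau=5.0))  # True: skip full formula
print(definitely_different(50.0, 52.0, tau=5.0))  # False: compute CIEDE2000
```

When the pre-test fires, the pair of colors is classified as different at no extra cost; only the remaining pairs pay the full price of the CIEDE2000 computation.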

2020

Texture collinearity foreground segmentation for night videos

Authors
Martins, I; Carvalho, P; Corte Real, L; Alba Castro, JL;

Publication
COMPUTER VISION AND IMAGE UNDERSTANDING

Abstract
One of the most difficult scenarios for unsupervised segmentation of moving objects is found in nighttime videos where the main challenges are the poor illumination conditions resulting in low-visibility of objects, very strong lights, surface-reflected light, a great variance of light intensity, sudden illumination changes, hard shadows, camouflaged objects, and noise. This paper proposes a novel method, coined COLBMOG (COLlinearity Boosted MOG), devised specifically for the foreground segmentation in nighttime videos, that shows the ability to overcome some of the limitations of state-of-the-art methods and still perform well in daytime scenarios. It is a texture-based classification method, using local texture modeling, complemented by a color-based classification method. The local texture at the pixel neighborhood is modeled as an N-dimensional vector. For a given pixel, the classification is based on the collinearity between this feature in the input frame and the reference background frame. For this purpose, a multimodal temporal model of the collinearity between texture vectors of background pixels is maintained. COLBMOG was objectively evaluated using the ChangeDetection.net (CDnet) 2014, Night Videos category, benchmark. COLBMOG ranks first among all the unsupervised methods. A detailed analysis of the results revealed the superior performance of the proposed method compared to the best performing state-of-the-art methods in this category, particularly evident in the presence of the most complex situations where all the algorithms tend to fail.
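The collinearity test at the core of the method can be illustrated with a plain cosine-similarity comparison between texture vectors. This is a minimal sketch assuming simple N-dimensional feature vectors, not the full multimodal temporal model maintained by COLBMOG:

```python
import numpy as np

def collinearity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine of the angle between two texture vectors: 1.0 means
    perfectly collinear (same texture up to a scale factor)."""
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    if denom == 0:
        return 0.0
    return float(np.dot(u, v) / denom)

# A scaled version of the background texture stays collinear (e.g. a
# global illumination drop), while a genuinely different texture does not.
background = np.array([3.0, 1.0, 4.0, 1.0])
darker     = 0.5 * background           # same texture, lower intensity
foreground = np.array([1.0, 4.0, 0.0, 2.0])

print(collinearity(background, darker))      # about 1.0: still background
print(collinearity(background, foreground))  # well below 1.0: foreground
```

The appeal of a collinearity measure in this setting is that it is invariant to uniform intensity scaling, which helps under the strong, sudden illumination changes typical of nighttime footage.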

Supervised Theses

2022

Synthesizing Human Activity for Data Generation

Author
Ana Ysabella Rodrigues Romero

Institution
UP-FEUP

2022

Image Processing for Football Game Analysis

Author
Francisco Gonçalves Sousa

Institution
UP-FEUP

2022

Identification and extraction of floor planes for 3D representation

Author
Carlos Miguel Guerra Soeiro

Institution
UP-FEUP

2022

Video-Based Tracking for 3D Scene Analysis

Author
Américo José Rodrigues Pereira

Institution
UP-FEUP

2022

Segmentation and Extraction of Human Characteristics for 3D Video Synthesis

Author
André Filipe Cardoso Madureira

Institution
UP-FEUP