2009
Authors
Sousa, R; Mora, B; Cardoso, JS;
Publication
EIGHTH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS, PROCEEDINGS
Abstract
In this work we consider the problem of binary classification where the classifier may abstain instead of classifying an observation, leaving critical items for human evaluation. This article motivates and presents a novel method to learn the reject region on complex data. Observations are replicated and a single binary classifier then determines the decision plane. The proposed method extends a method available in the literature for the classification of ordinal data. Our method is compared with standard techniques on synthetic and real datasets, emphasizing the advantages of the proposed approach.
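As background for the "standard techniques" the comparison refers to, here is a minimal sketch of a confidence-threshold (Chow-style) reject rule applied to the scores of a binary classifier. The function name and the threshold value are illustrative assumptions, not taken from the paper, whose contribution is instead to learn the reject region with a single replicated-data classifier:

```python
import numpy as np

def classify_with_reject(scores, t=0.5):
    """Chow-style reject rule: abstain when the classifier's
    confidence score lies inside (-t, t); otherwise output the
    sign of the score.  Returns +1, -1, or 0 (reject)."""
    scores = np.asarray(scores, dtype=float)
    out = np.sign(scores)
    out[np.abs(scores) < t] = 0  # low-confidence items go to a human
    return out
```

Raising `t` widens the reject region, trading coverage for accuracy on the items the classifier does label.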
2009
Authors
Oliveira, HP; Cardoso, JS;
Publication
VISAPP 2009: PROCEEDINGS OF THE FOURTH INTERNATIONAL CONFERENCE ON COMPUTER VISION THEORY AND APPLICATIONS, VOL 2
Abstract
Media content adaptation is the act of transforming media files to fit device capabilities, usually for mobile devices that require special handling because of their limited computational power, small screen size and constrained keyboard functionality. Image retargeting is one such adaptation, transforming an image into another of a different size. Tools that allow an author to produce imagery once and automatically retarget it for a variety of display devices are therefore of great interest. The performance of these algorithms is directly related to the preservation of the most important regions and features of the image. In this work, we introduce an algorithm for automatically retargeting images. We explore and extend an algorithm recently proposed in the literature. The central contribution is the introduction of stable paths for image resizing, improving both the computational performance and the overall quality of the resulting image. The experimental results confirm the potential of the proposed algorithm.
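The retargeting family this work extends removes low-energy paths from the image. The following is a minimal sketch of the underlying dynamic-programming search for an 8-connected vertical path of least total energy; it illustrates the path-removal idea only, not the stable-path refinement that is the paper's contribution:

```python
import numpy as np

def min_vertical_seam(energy):
    """Dynamic programming over an energy map: each cell accumulates
    its own energy plus the cheapest of its three upper neighbours,
    then the minimal path is recovered by backtracking.
    Returns one column index per row."""
    M = energy.astype(float).copy()
    h, w = M.shape
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            M[i, j] += M[i - 1, lo:hi].min()
    # backtrack from the cheapest bottom cell
    seam = [int(np.argmin(M[-1]))]
    for i in range(h - 2, -1, -1):
        j = seam[-1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam.append(lo + int(np.argmin(M[i, lo:hi])))
    return seam[::-1]
```

Removing the returned seam from every row shrinks the image by one column while avoiding high-energy (important) regions.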
2009
Authors
Cardoso, JD; Capela, A; Rebelo, A; Guedes, C; da Costa, JP;
Publication
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
Abstract
The preservation of musical works produced in the past requires their digitization and transformation into a machine-readable format. The processing of handwritten musical scores by computers remains far from ideal. One of the fundamental stages of this task is staff-line detection. We investigate a general-purpose, knowledge-free method for the automatic detection of music staff lines based on a stable-path approach. Lines affected by curvature, discontinuities, and inclination are robustly detected. Experimental results show that the proposed technique consistently outperforms well-established algorithms.
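A path between the image margins is "stable" when it is a mutual best: the best path starting from a left-margin node ends at a right-margin node whose own best path leads back to the start. A toy illustration of that stability criterion, abstracted away from the actual shortest-path computation on the pixel graph (the endpoint maps here are hypothetical inputs):

```python
def stable_pairs(forward_best, backward_best):
    """Keep only mutually-best endpoint pairs.
    forward_best maps each left-margin node to the endpoint of its
    best left-to-right path; backward_best does the reverse.  A pair
    (s, t) is stable when each is the other's best endpoint."""
    return [(s, t) for s, t in forward_best.items()
            if backward_best.get(t) == s]
```

In the staff-line setting, each stable pair corresponds to one detected line, which is why the method tolerates curvature and discontinuities: stability is a global property of the whole path.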
2009
Authors
Afonso, AP; Cardoso, JS; Cardoso, MJ; Cota, MP;
Publication
Actas da 4a Conferencia Iberica de Sistemas e Tecnologias de Informacao, CISTI 2009
Abstract
2009
Authors
Cardoso, JS; Carvalho, P; Teixeira, LF; Corte Real, L;
Publication
COMPUTER VISION AND IMAGE UNDERSTANDING
Abstract
The primary goal of research on image segmentation is to produce better segmentation algorithms. In spite of almost 50 years of research and development in this field, the general problem of splitting an image into meaningful regions remains unsolved. New and emerging techniques are constantly being applied, with reduced success. The design of each new segmentation algorithm requires careful attention to judging the effectiveness of the technique. This paper demonstrates how the proposed methodology is well suited to perform a quantitative comparison between image segmentation algorithms using a ground-truth segmentation. It consists of a general framework already partially proposed in the literature, but dispersed over several works. The framework is based on the principle of eliminating the minimum number of elements such that a specified condition is met. This rule translates directly into a global optimization procedure, and the intersection graph between two partitions emerges as the natural tool to solve it. The objective of this paper is to summarize, aggregate and extend the dispersed work. The principle is clarified, presented stripped of unnecessary supports, and extended to sequences of images. Our study shows that the proposed framework for segmentation performance evaluation is simple, general and mathematically sound.
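The "eliminate the minimum number of elements until a condition holds" principle can be illustrated with a simple instance: counting the fewest pixels that must be discarded so that every region of one partition lies inside a single region of the other. This is a deliberate simplification of the paper's intersection-graph formulation, with the condition (refinement) chosen for illustration:

```python
import numpy as np
from collections import Counter

def min_removals_to_refine(P, Q):
    """Fewest pixels to discard so that partition P refines partition Q.
    For each P-region, the pixels falling outside its dominant
    (largest-overlap) Q-region must be removed; the overlap counts
    are exactly the edge weights of the intersection graph."""
    P, Q = np.asarray(P).ravel(), np.asarray(Q).ravel()
    removals = 0
    for label in np.unique(P):
        overlap = Counter(Q[P == label])   # intersection sizes with Q-regions
        removals += sum(overlap.values()) - max(overlap.values())
    return removals
```

Normalising the count by the image size yields a discrepancy score in [0, 1) that can be compared across algorithms and across images.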
2009
Authors
Teixeira, LF; Corte Real, L;
Publication
PATTERN RECOGNITION LETTERS
Abstract
Object detection and tracking is an essential preliminary task in event analysis systems (e.g. visual surveillance). Typically, objects are extracted and tagged, forming representative tracks of their activity. Tagging is usually performed by probabilistic data association; however, in systems capturing disjoint areas it is often not possible to establish such associations, as data may have been collected at different times or in different locations. In this case, appearance matching is a valuable aid. We propose using bag-of-visterms, i.e. a histogram of quantized local feature descriptors, to represent and match tracked objects. This method has proven effective for object matching and classification in image retrieval applications, where descriptors can be extracted a priori. An important difference in event analysis systems is that the relevant information is typically restricted to the foreground. Descriptors can, therefore, be extracted faster, approaching real-time requirements. Also, unlike in image retrieval, objects can change over time, so their model needs to be updated continuously. Incremental or adaptive learning is used to tackle this problem. Using independent tracks of 30 different persons, we show that the bag-of-visterms representation effectively discriminates visual object tracks and that it presents high resilience to incorrect object segmentation. Additionally, this methodology allows the construction of scalable object models that can be used to match tracks across independent views.
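A minimal sketch of the bag-of-visterms construction: each local descriptor is quantized to its nearest codebook centre and the track is summarised as a normalised histogram of those "visterms". Real systems learn the codebook with k-means and may add tf-idf weighting and the incremental updates the abstract mentions; those steps are omitted here:

```python
import numpy as np

def bag_of_visterms(descriptors, codebook):
    """Quantise local feature descriptors (n x d) against a codebook
    of visual words (k x d) and return the normalised k-bin histogram."""
    # squared Euclidean distance of every descriptor to every centre
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)              # nearest centre per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

Two tracks can then be matched by comparing their histograms with any standard distance (e.g. chi-square or cosine), which is what makes the representation robust to partial mis-segmentation: a few wrongly quantized descriptors perturb only a few bins.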