Publications

Publications by CSE

2015

Summarization of changes in dynamic text collections using Latent Dirichlet Allocation model

Authors
Kar, M; Nunes, S; Ribeiro, C;

Publication
INFORMATION PROCESSING & MANAGEMENT

Abstract
In the area of Information Retrieval, the task of automatic text summarization usually assumes a static underlying collection of documents, disregarding the temporal dimension of each document. However, in real-world settings, collections and individual documents rarely stay unchanged over time. The World Wide Web is a prime example of a collection where information changes both frequently and significantly over time, with documents being added, modified or simply deleted at different times. In this context, previous work addressing the summarization of web documents has simply discarded the dynamic nature of the web, considering only the latest published version of each individual document. This paper proposes and addresses a new challenge - the automatic summarization of changes in dynamic text collections. In standard text summarization, retrieval techniques present a summary to the user by capturing the major points expressed in the most recent version of an entire document in a condensed form. In this new task, the goal is to obtain a summary that describes the most significant changes made to a document during a given period. In other words, the idea is to have a summary of the revisions made to a document over a specific period of time. This paper proposes different approaches to generate summaries using extractive summarization techniques. First, individual terms are scored and then this information is used to rank and select sentences to produce the final summary. A system based on the Latent Dirichlet Allocation (LDA) model is used to find the hidden topic structures of changes. The purpose of using the LDA model is to identify separate topics where the changed terms from each topic are likely to carry at least one significant change. The different approaches are then compared with the previous work in this area. A collection of articles from Wikipedia, including their revision history, is used to evaluate the proposed system. For each article, a temporal interval and a reference summary from the article's content are selected manually. The articles and intervals in which a significant event occurred are carefully selected. The summaries produced by each of the approaches are evaluated against the manual summaries using ROUGE metrics. It is observed that the approach using the LDA model outperforms all the other approaches. Statistical tests reveal that the differences in ROUGE scores for the LDA-based approach are statistically significant at the 99% level over the baseline.
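
As a rough illustration of the extractive pipeline described above (score changed terms with an LDA topic model, then rank and select sentences), the following Python sketch uses gensim's LdaModel on the terms that differ between two revisions. It is not the authors' implementation; the preprocessing, the term-scoring rule and all parameter values are assumptions.

```python
# Hypothetical sketch: score changed terms with LDA topics, then rank sentences.
# Not the authors' code; corpus handling and weights are illustrative assumptions.
from gensim import corpora, models

def summarize_changes(old_sentences, new_sentences, num_topics=5, summary_len=3):
    # Terms that appear in the new revision but not in the old one ("changed" terms).
    old_terms = {w for s in old_sentences for w in s.lower().split()}
    changed = [[w for w in s.lower().split() if w not in old_terms]
               for s in new_sentences]

    # Fit an LDA model on the changed-term "documents" to expose topic structure.
    dictionary = corpora.Dictionary(changed)
    corpus = [dictionary.doc2bow(doc) for doc in changed]
    lda = models.LdaModel(corpus, num_topics=num_topics, id2word=dictionary)

    # Score each term by its strongest topic weight, then score each sentence as
    # the sum of its changed-term scores (a simple extractive ranking rule).
    topic_term = lda.get_topics()              # shape: num_topics x vocab size
    term_score = topic_term.max(axis=0)
    def sentence_score(doc_bow):
        return sum(term_score[tid] * cnt for tid, cnt in doc_bow)

    ranked = sorted(range(len(new_sentences)),
                    key=lambda i: sentence_score(corpus[i]), reverse=True)
    return [new_sentences[i] for i in ranked[:summary_len]]

# Usage sketch: summarize what changed between two revisions of an article.
old_rev = ["The city hosts an annual festival.", "The mayor was elected in 2010."]
new_rev = ["The city hosts an annual festival.", "A new mayor was elected in 2014.",
           "The festival was cancelled due to flooding."]
print(summarize_changes(old_rev, new_rev, num_topics=2, summary_len=1))
```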

2015

Integrated modeling of road environments for driving simulation

Authors
Campos, C; Leitao, JM; Coelho, AF;

Publication
GRAPP 2015 - 10th International Conference on Computer Graphics Theory and Applications; VISIGRAPP, Proceedings

Abstract
Virtual environments for driving simulations aimed at scientific purposes require three-dimensional road models that must conform to detailed standards of specification and realism. The creation of road models with this level of quality requires the prior definition of the road networks and the road paths. Each road path is usually obtained through the dedicated work of roadway design specialists, resulting in a long, time-consuming process. Driving simulation for scientific purposes also requires a semantic description of all elements within the environment in order to support the parameterization of actors during the simulation and the production of simulation reports. This paper presents a methodology to automatically generate road environments suitable for driving simulation experiments. This methodology integrates every step required for modelling road environments, from the determination of interchange nodes to the generation of the geometric and semantic models. The human supervisor can interact with the model generation process at any stage, in order to meet the specific requirements of the experimental work. The proposed methodology reduces the workload involved in the initial specification of the road network and significantly reduces the reliance on specialists for preparing the road paths of all roadways. The generated semantic description allows the procedural placement of actors in the simulated environment. The models are suitable for conducting scientific work in a driving simulator.
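
The following minimal sketch is not the paper's system; it only illustrates the idea of a road model that couples geometry with a semantic description so that actors can be placed procedurally during a simulation. All class names, fields and values are assumptions.

```python
# Illustrative sketch (not the paper's generator): road segments that carry both
# geometry and semantic attributes, enabling procedural actor placement.
from dataclasses import dataclass, field

@dataclass
class RoadSegment:
    start: tuple              # (x, y) of the start interchange node
    end: tuple                # (x, y) of the end interchange node
    lanes: int = 2            # semantic attribute used during simulation
    speed_limit: float = 90.0

    def point_at(self, t: float) -> tuple:
        # Linear interpolation along the segment; real road paths would use the
        # arc/clothoid geometry produced by the road generation step.
        x = self.start[0] + t * (self.end[0] - self.start[0])
        y = self.start[1] + t * (self.end[1] - self.start[1])
        return (x, y)

@dataclass
class RoadNetwork:
    segments: list = field(default_factory=list)

    def place_actor(self, segment_index: int, offset: float) -> dict:
        # Use the semantic description to parameterize an actor placement.
        seg = self.segments[segment_index]
        return {"position": seg.point_at(offset),
                "speed_limit": seg.speed_limit,
                "lanes": seg.lanes}

# Usage: two interchange nodes joined by one road, one actor placed mid-way.
net = RoadNetwork([RoadSegment((0.0, 0.0), (1000.0, 0.0), lanes=3)])
print(net.place_actor(0, 0.5))
```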

2015

An automatic method for determining the anatomical relevant space for fast volumetric cardiac imaging

Authors
Ortega, A; Pedrosa, J; Heyde, B; Tong, L; D'Hooge, J;

Publication
2015 IEEE International Ultrasonics Symposium, IUS 2015

Abstract
Fast volumetric cardiac imaging requires reducing the number of transmit events within a single volume. One way of achieving this is by limiting the field-of-view (FOV) of the recording to the anatomically relevant domain only (e.g. the myocardium when investigating cardiac mechanics). Although fully automatic solutions for myocardial segmentation exist, translating that information into a fast ultrasound scan sequence is not trivial. The aim of this study was therefore to develop a methodology to automatically define the FOV from a volumetric dataset in the context of anatomical scanning. To this end, a method is proposed in which the anatomically relevant space is automatically identified as follows. First, the left ventricular myocardium is localized in the volumetric ultrasound recording using a fully automatic real-time segmentation framework (i.e. BEAS). Then, the extracted meshes are used to define a binary mask identifying myocardial voxels only. Next, using these binary images, the percentage of pixels along a given image line that belong to the myocardium is calculated. Finally, a spatially continuous FOV that covers a percentage 'T' of the myocardium is found by means of ring-shaped template matching, yielding the opening angle and 'thickness' for a conical scan. This approach was tested on 27 volumetric ultrasound datasets, using T = 85%. The mean initial opening angle for a conical scan was 19.67±8.53°, while the mean 'thickness' of the cone was 19.01±3.35°. As a result, a reduction of 48.99% in the number of transmit events was achieved, corresponding to a frame rate gain factor of 1.96. In conclusion, anatomical scanning in combination with new scanning sequence techniques can increase frame rate significantly while preserving information on the relevant structures for functional imaging.
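
As an illustration of the line-coverage idea and the frame-rate arithmetic above, the sketch below selects scan lines until a fraction T of the myocardial samples is covered and derives the corresponding gain (keeping about 51% of the transmit events gives roughly 1 / 0.51 ≈ 1.96). It replaces the ring-shaped template matching with a simple greedy selection and is not the authors' pipeline; the data layout and all names are assumptions.

```python
# Simplified sketch (not the authors' method): choose scan lines covering T of
# the myocardium and compute the resulting frame-rate gain.
import numpy as np

def covering_lines(mask, T=0.85):
    # mask: (lines, samples) boolean array, True where a voxel is myocardium.
    per_line = mask.sum(axis=1)            # myocardial samples per scan line
    total = per_line.sum()
    order = np.argsort(per_line)[::-1]     # lines richest in myocardium first

    covered, kept = 0, []
    for line in order:                     # grow the selection until T is reached
        kept.append(int(line))
        covered += per_line[line]
        if covered >= T * total:
            break
    return sorted(kept)

def frame_rate_gain(n_lines_total, n_lines_kept):
    # Fewer transmit events per volume means a proportionally higher volume
    # rate: keeping ~51% of the events gives a gain of about 1 / 0.51 = 1.96.
    return n_lines_total / n_lines_kept
```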

2015

Transparent Acceleration of Program Execution Using Reconfigurable Hardware

Authors
Paulino, N; Ferreira, JC; Bispo, J; Cardoso, JMP;

Publication
2015 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE)

Abstract
The acceleration of applications running on a general purpose processor (GPP), by mapping parts of their execution to reconfigurable hardware, is an approach that does not require the program's source code and still ensures program portability across different target reconfigurable fabrics. However, the problem is very challenging, as suitable sequences of GPP instructions need to be translated/mapped to hardware, possibly at runtime. Thus, all mapping steps, from compiler analysis and optimizations to hardware generation, need to be both efficient and fast. This paper introduces some of the most representative approaches for binary acceleration using reconfigurable hardware, and presents our binary acceleration approach and the latest results. Our approach extends a GPP with a Reconfigurable Processing Unit (RPU), both sharing the data memory. Repeating sequences of GPP instructions are migrated to an RPU composed of functional units and interconnect resources, able to exploit instruction-level parallelism, e.g., via loop pipelining. Although we envision a fully dynamic system, currently the RPU resources are selected and organized offline using execution trace information. We present implementation prototypes of the system on a Spartan-6 FPGA with a MicroBlaze as the GPP and the very encouraging results achieved with a number of benchmarks.
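
As a loose illustration of how repeating sequences of GPP instructions might be identified offline from an execution trace, the sketch below counts candidate loop bodies delimited by taken backward branches. It is not the authors' toolchain; the trace format, the detection rule and the repetition threshold are assumptions.

```python
# Hypothetical sketch: find repeating instruction sequences (candidate loop
# bodies) in an execution trace. Not the paper's detection method.
from collections import Counter

def find_repeating_sequences(trace, min_repetitions=100):
    # trace: instruction addresses in execution order (assumed format).
    counts = Counter()
    start = 0
    for i in range(1, len(trace)):
        if trace[i] < trace[i - 1]:
            # The previous instruction was a taken backward branch, so
            # trace[start:i] is one execution of a candidate loop body.
            counts[tuple(trace[start:i])] += 1
            start = i
    # Keep only sequences that repeat often enough to pay off in hardware.
    return [seq for seq, n in counts.items() if n >= min_repetitions]

# Usage: a tight 3-instruction loop executed 1000 times yields one candidate.
trace = [0x100, 0x104, 0x108] * 1000
print(len(find_repeating_sequences(trace, min_repetitions=100)))  # -> 1
```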

2015

Documenting Software With Adaptive Software Artifacts

Authors
Correia, FF;

Publication

Abstract

2015

Service Response Time Measurement Model of Service Level Agreements in Cloud Environment

Authors
Costa, CM; Maia Leite, CRM; Sousa, AL;

Publication
2015 IEEE INTERNATIONAL CONFERENCE ON SMART CITY/SOCIALCOM/SUSTAINCOM (SMARTCITY)

Abstract
In cloud environments, resources should be acquired and released automatically and quickly at runtime. Ensuring the desired QoS is therefore a great challenge for the cloud service provider, and it becomes even greater when large amounts of data have to be manipulated in this environment. Accordingly, performance is an important requirement for most customers when they migrate their applications to the cloud. In this paper, we propose a model for measuring the estimated Service Response Time of different request types on large databases available in a cloud environment. This work allows the cloud service provider and its customers to establish an appropriate SLA with respect to the expected performance of the services available in the cloud. Finally, the model was evaluated on the Amazon EC2 cloud infrastructure, and a TPC-DS-like benchmark was used to generate a database of structured data, considering that some cloud computing platforms support SQL queries directly or indirectly. This makes the proposed solution relevant for these kinds of problems.
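
As a simple illustration of measuring service response time per request type and checking it against an SLA threshold, the sketch below times a generic query runner and reports whether each request type meets its limit. It is not the paper's measurement model; the query runner, workload and thresholds are assumptions.

```python
# Illustrative sketch (not the paper's model): time requests per type and
# compare the mean response time against an SLA limit.
import statistics
import time

def measure_response_time(run_query, query, repetitions=10):
    samples = []
    for _ in range(repetitions):
        start = time.perf_counter()
        run_query(query)                   # e.g. a SQL query over TPC-DS-like data
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples), statistics.stdev(samples)

def check_sla(run_query, workload, sla_seconds):
    # workload: {request_type: query}; sla_seconds: {request_type: limit in s}.
    report = {}
    for kind, query in workload.items():
        mean, stdev = measure_response_time(run_query, query)
        report[kind] = {"mean_s": mean, "stdev_s": stdev,
                        "meets_sla": mean <= sla_seconds[kind]}
    return report
```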
