Publications

2014

WindS@UP: The e-Science Platform for WindScanner.eu

Authors
Gomes, F; Lopes, JC; Palma, JL; Ribeiro, LF;

Publication
SCIENCE OF MAKING TORQUE FROM WIND 2014 (TORQUE 2014)

Abstract
The Wind Scanner e-Science platform architecture and the underlying premises are discussed. It is a collaborative platform that will provide a repository for experimental data and metadata. Additional data processing capabilities will be incorporated, thus enabling in-situ data processing. Every resource in the platform is identified by a Uniform Resource Identifier (URI), enabling unequivocal identification of the field campaign data sets and of the metadata associated with each data set or experiment. This feature will allow the validation of field experiment results and conclusions, as all managed resources will be linked. A centralised node (Hub) will aggregate the contributions of 6 to 8 local nodes from EC countries and will manage the access of three types of users: data curator, data provider and researcher. This architecture was designed to ensure consistent and efficient research data access and preservation, and the exploitation of new research opportunities provided by this "Collaborative Data Infrastructure". The prototype platform, WindS@UP, enables the use of the platform by humans via a Web interface or by machines using an internal API (Application Programming Interface). Future work will improve the vocabulary ("application profile") used to describe the resources managed by the platform.
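The linking of URI-identified resources described above can be sketched as a small registry. All names here (`Resource`, `Registry`, the example URIs) are hypothetical illustrations; the platform's actual internal API is not described in the abstract.

```python
# Hypothetical sketch of URI-based resource linking in a central hub.
class Resource:
    """A platform resource identified by a URI, with links to related resources."""
    def __init__(self, uri, kind, metadata=None):
        self.uri = uri
        self.kind = kind            # e.g. "campaign", "dataset", "metadata"
        self.metadata = metadata or {}
        self.links = []             # URIs of linked resources


class Registry:
    """Central node (Hub) aggregating resources contributed by local nodes."""
    def __init__(self):
        self._resources = {}

    def register(self, resource):
        self._resources[resource.uri] = resource

    def link(self, from_uri, to_uri):
        # Linking resources is what allows later validation of results:
        # every data set can be traced back to its campaign.
        self._resources[from_uri].links.append(to_uri)

    def resolve(self, uri):
        return self._resources[uri]


registry = Registry()
registry.register(Resource("campaigns/example-campaign", "campaign"))
registry.register(Resource("datasets/example-scan-001", "dataset",
                           {"instrument": "WindScanner lidar"}))
registry.link("campaigns/example-campaign", "datasets/example-scan-001")

linked = [registry.resolve(u).kind
          for u in registry.resolve("campaigns/example-campaign").links]
```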

2014

Late Breaking Papers of the 23rd International Conference on Inductive Logic Programming, Rio de Janeiro, Brazil, August 28-30, 2013

Authors
Zaverucha, G; Costa, VS; Paes, AM;

Publication
ILP (Late Breaking Papers)

Abstract

2014

RNA-Seq Gene Profiling - A Systematic Empirical Comparison

Authors
Fonseca, NA; Marioni, J; Brazma, A;

Publication
PLOS ONE

Abstract
Accurately quantifying gene expression levels is a key goal of experiments using RNA-sequencing to assay the transcriptome. This typically requires aligning the short reads generated to the genome or transcriptome before quantifying expression of pre-defined sets of genes. Differences in the alignment/quantification tools can have a major effect upon the expression levels found, with important consequences for biological interpretation. Here we address two main issues: do different analysis pipelines affect the gene expression levels inferred from RNA-seq data? And, how close are the expression levels inferred to the "true" expression levels? We evaluate fifty gene profiling pipelines in experimental and simulated data sets with different characteristics (e.g., read length and sequencing depth). In the absence of knowledge of the 'ground truth' in real RNA-seq data sets, we used simulated data to assess the differences between the "true" expression levels and those reconstructed by the analysis pipelines. Even though this approach does not take into account all known biases present in RNA-seq data, it still allows estimation of the accuracy of the gene expression values inferred by different analysis pipelines. The results show that i) overall there is a high correlation between the expression levels inferred by the best pipelines and the true quantification values; ii) the error in the estimated gene expression values can vary considerably across genes; and iii) a small set of genes have expression estimates with consistently high error (across data sets and methods). Finally, although the mapping software is important, the quantification method makes a greater difference to the results.

2014

Opportunistic application-level fault detection through adaptive redundant multithreading

Authors
Hukerikar, S; Diniz, PC; Lucas, RF; Teranishi, K;

Publication
Proceedings of the 2014 International Conference on High Performance Computing and Simulation, HPCS 2014

Abstract
As the scale and complexity of future High Performance Computing systems continue to grow, the rising frequency of faults and errors and their impact on HPC applications will make it increasingly difficult to accomplish useful computation. Traditional means of fault detection and correction are either hardware based or use software-based redundancy. Redundancy-based approaches usually entail complete replication of the program state or of the computation and therefore incur substantial application performance overhead. The wide-scale use of full redundancy in future exascale-class systems is therefore not a viable solution for error detection and correction. In this paper we present an application-level fault detection approach based on adaptive redundant multithreading. Through a language-level directive, the programmer can define structured code blocks. When these blocks are executed by multiple threads and their outputs compared, we can detect errors in the specific parts of the program state that will ultimately determine the correctness of the application outcome. The compiler outlines such code blocks, and a runtime system decides whether their execution by redundant threads should be enabled or disabled by continuously observing and learning about the fault tolerance state of the system. By providing flexible building blocks for application-specific fault detection, our approach makes possible more reasonable performance overheads than full redundancy. Our results show that the overheads to application performance are in the range of 4% to 70%, because the runtime system is continuously aware of the rate and source of system faults, rather than the overhead in excess of 100% incurred by complete replication. © 2014 IEEE.
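The core redundant-multithreading idea can be sketched as follows: a designated code block is executed by several threads and an error is flagged when their outputs disagree. The directive, compiler outlining, and learning runtime of the paper are replaced here by a plain function; all names are invented for illustration.

```python
# Minimal sketch of redundant multithreading for fault detection:
# run a code block in n threads and compare the results.
import threading

def run_redundantly(block, args, n_threads=2):
    """Execute block(*args) in n_threads threads; flag a fault on disagreement."""
    results = [None] * n_threads

    def worker(i):
        results[i] = block(*args)

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # A transient fault in one thread would make its output differ.
    fault_detected = any(r != results[0] for r in results[1:])
    return results[0], fault_detected

value, fault = run_redundantly(lambda xs: sum(x * x for x in xs), ([1, 2, 3],))
```

In the paper's scheme, a runtime would additionally decide per-block whether redundant execution is worth its cost, given the observed fault rate; that adaptive layer is omitted here.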

2014

Adaptive learning in agents behaviour: A framework for electricity markets simulation

Authors
Pinto, T; Vale, Z; Sousa, TM; Praca, I; Santos, G; Morais, H;

Publication
INTEGRATED COMPUTER-AIDED ENGINEERING

Abstract
Electricity markets are complex environments, involving a large number of different entities playing in a dynamic scene to obtain the best advantages and profits. MASCEM (Multi-Agent System for Competitive Electricity Markets) is a multiagent electricity market simulator that models market players and simulates their operation in the market. Market players are entities with specific characteristics and objectives, making their own decisions and interacting with other players. This paper presents a methodology to provide decision support to electricity market negotiating players. This model allows integrating different strategic approaches for electricity market negotiations and choosing the most appropriate one at each moment, for each different negotiation context. The methodology is integrated into ALBidS (Adaptive Learning strategic Bidding System), a multiagent system that provides decision support to MASCEM's negotiating agents so that they can properly achieve their goals. ALBidS uses artificial intelligence methodologies and data analysis algorithms to provide effective adaptive learning capabilities to such negotiating entities. The main contribution is a methodology that combines several distinct strategies to build action proposals, so that the best one can be chosen at each moment, depending on the context and simulation circumstances. The selection process includes reinforcement learning algorithms, a mechanism for the analysis of negotiating contexts, a mechanism for managing the efficiency/effectiveness balance of the system, and a mechanism for the definition of competitor players' profiles.
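The strategy-selection idea at the heart of this approach can be sketched as a reinforcement-learning choice among competing bidding strategies: here, a simple epsilon-greedy rule over average observed rewards. The strategy names and reward values are invented for illustration; the actual ALBidS algorithms are considerably richer (context analysis, player profiling, efficiency/effectiveness management).

```python
# Toy epsilon-greedy selector: learn which of several bidding strategies
# yields the best average reward, while still exploring occasionally.
import random

class StrategySelector:
    def __init__(self, strategies, epsilon=0.1, seed=0):
        self.strategies = strategies
        self.epsilon = epsilon
        self.totals = {s: 0.0 for s in strategies}   # cumulative reward
        self.counts = {s: 0 for s in strategies}     # times chosen
        self.rng = random.Random(seed)

    def choose(self):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.strategies)  # explore
        # Exploit: pick the best average; unexplored strategies go first.
        return max(self.strategies,
                   key=lambda s: (self.totals[s] / self.counts[s]
                                  if self.counts[s] else float("inf")))

    def update(self, strategy, reward):
        self.totals[strategy] += reward
        self.counts[strategy] += 1

selector = StrategySelector(["aggressive", "conservative", "forecast-based"])
rewards = {"aggressive": 1.0, "conservative": 3.0, "forecast-based": 2.0}
for _ in range(200):
    s = selector.choose()
    selector.update(s, rewards[s])
best = max(selector.counts, key=selector.counts.get)
```

After 200 simulated negotiations the selector has settled on the strategy with the highest average reward, while the exploration term keeps it able to react if a context change makes another strategy better.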

2014

A rich vehicle routing problem dealing with perishable food: a case study

Authors
Amorim, P; Parragh, SN; Sperandio, F; Almada Lobo, B;

Publication
TOP

Abstract
This paper presents a successful application of operations research techniques in guiding the decision-making process to achieve superior operational efficiency in core activities. We focus on a rich vehicle routing problem faced by a Portuguese food distribution company on a daily basis. This problem can be described as a heterogeneous-fleet, site-dependent vehicle routing problem with multiple time windows. We use the adaptive large neighbourhood search framework, which has proven effective in solving a variety of vehicle routing problems. Our plans are compared against those of the company, and the potential cost savings of the proposed decision support tool are shown. The algorithm converges quickly, giving the planner considerably more time to focus on value-added tasks rather than manually correcting the routing schedule. Moreover, in contrast to the adaptation time the planner would need, the tool is quite flexible in following market changes, such as the introduction of new customers or new products.
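The adaptive large neighbourhood search skeleton that the paper builds on can be sketched as a destroy-and-repair loop: repeatedly remove part of a solution and reinsert it cheaply, keeping improvements. The routing details of the actual problem (time windows, heterogeneous fleet, site dependence, operator weight adaptation) are omitted; this is a toy single-route instance with invented distances.

```python
# Toy large-neighbourhood-search loop on one route: destroy (remove a
# random customer) and repair (cheapest reinsertion), accepting moves
# that do not worsen the cost. Depot is node 0 at both ends of the route.
import random

def route_cost(route, dist):
    return sum(dist[a][b] for a, b in zip(route, route[1:]))

def lns(route, dist, iters=500, seed=1):
    rng = random.Random(seed)
    current = list(route)
    best, best_cost = list(route), route_cost(route, dist)
    for _ in range(iters):
        cand = list(current)
        # Destroy: remove one random customer (never a depot endpoint).
        removed = cand.pop(rng.randrange(1, len(cand) - 1))
        # Repair: greedily reinsert at the cheapest position.
        pos = min(range(1, len(cand)),
                  key=lambda i: route_cost(cand[:i] + [removed] + cand[i:], dist))
        cand = cand[:pos] + [removed] + cand[pos:]
        if route_cost(cand, dist) <= route_cost(current, dist):
            current = cand
        if route_cost(current, dist) < best_cost:
            best, best_cost = list(current), route_cost(current, dist)
    return best, best_cost

# Customers placed on a line: dist[i][j] = |i - j|; start from a bad route.
dist = [[abs(i - j) for j in range(5)] for i in range(5)]
best, cost = lns([0, 3, 1, 4, 2, 0], dist)
```

The full ALNS framework additionally maintains several destroy/repair operators and adaptively reweights them by past success, which is what lets it handle the rich side constraints of the real problem.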
