2015
Authors
Jorge, T; Maia, F; Matos, M; Pereira, J; Oliveira, R;
Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Abstract
Designing and implementing distributed systems is a hard endeavor, both at an abstract level when designing the system, and at a concrete level when implementing, debugging and evaluating it. This stems not only from the inherent complexity of writing and reasoning about distributed software, but also from the lack of tools for testing and evaluating it under realistic conditions. Moreover, the gap between the protocols' specifications found in research papers and their implementations in real code is huge, leading to inconsistencies that often result in the implementation no longer following the specification. As an example, the specification of the popular Chord DHT comprises a few dozen lines, while its Java implementation, OpenChord, is close to twenty thousand lines, excluding libraries. This makes it hard and error-prone to change the implementation to reflect changes in the specification, regardless of the programmers' skill. Besides, critical behavior due to the unpredictable interleaving of operations and to network uncertainty can only be observed in a realistic setting, limiting the usefulness of simulation tools. We believe that being able to write an algorithm implementation very close to its specification, and to evaluate it in a real environment, is a big step towards building better distributed systems. Our approach leverages the MINHA platform to offer a set of built-in primitives that allows one to program very close to pseudo-code. This high-level implementation can interact with existing, off-the-shelf middleware and can be gradually replaced by a production-ready Java implementation. In this paper, we present the system design and showcase it using a well-known algorithm from the literature. © IFIP International Federation for Information Processing 2015.
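To illustrate how compact the Chord specification cited above is, the following is a minimal Python sketch of the stabilize/notify rules from the original Chord paper, written close to the published pseudo-code. The class and function names (Node, between, stabilize, notify) are illustrative assumptions; this is not MINHA or OpenChord code.

```python
# Minimal sketch of Chord's stabilize/notify pseudo-code (Stoica et al.),
# written close to the paper's specification.  Names are illustrative only.

M = 8                       # identifier space of 2**M ids
RING = 2 ** M

def between(x, a, b):
    """True if x lies in the circular interval (a, b) on the ring."""
    if a < b:
        return a < x < b
    return x > a or x < b   # the interval wraps around zero

class Node:
    def __init__(self, ident):
        self.id = ident % RING
        self.successor = self
        self.predecessor = None

    def stabilize(self):
        # Ask the successor for its predecessor and adopt it if it is closer.
        x = self.successor.predecessor
        if x is not None and between(x.id, self.id, self.successor.id):
            self.successor = x
        self.successor.notify(self)

    def notify(self, n):
        # n thinks it might be our predecessor.
        if self.predecessor is None or between(n.id, self.predecessor.id, self.id):
            self.predecessor = n

# Tiny three-node ring that converges after a few stabilization rounds.
a, b, c = Node(10), Node(80), Node(200)
a.successor, b.successor, c.successor = b, c, a
for _ in range(3):
    for n in (a, b, c):
        n.stabilize()
print([(n.id, n.successor.id) for n in (a, b, c)])
```

The whole maintenance logic fits in a few dozen lines, which is the gap to production code that the paper's pseudo-code-level primitives aim to preserve.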
2015
Authors
Solteiro Pires, EJS; de Moura Oliveira, PBD; Tenreiro Machado, JAT;
Publication
2015 IEEE 9TH INTERNATIONAL WORKSHOP ON MULTIDIMENSIONAL (ND) SYSTEMS (NDS)
Abstract
Multidimensional systems, or n-D systems, are systems having several independent variables. Several topics concerning n-D systems (n > 1), in particular stability, have attracted the interest of many researchers, mainly because extending the stability theory of 1-D systems to systems with higher dimensions is not straightforward. In this paper, two meta-heuristic algorithms are adopted to complement the study of system stability based on the characteristic polynomials over the variables' boundaries. The two meta-heuristics, chosen for their popularity, are the genetic algorithm and particle swarm optimization. Practical results of both meta-heuristics are compared and the better algorithm is highlighted. The results demonstrate that meta-heuristics can be applied in studying multidimensional system stability.
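A minimal sketch of the kind of boundary search described above: a particle swarm minimizes |B(z1, z2)| over the distinguished boundary |z1| = |z2| = 1; a minimum close to zero would indicate a characteristic-polynomial root on the boundary. The example polynomial B and all PSO parameters are assumptions for illustration, not taken from the paper.

```python
# PSO sketch: search the boundary |z1| = |z2| = 1 for zeros of an assumed
# 2-D characteristic polynomial B(z1, z2).
import cmath
import random

def B(z1, z2):
    # Example 2-D characteristic polynomial (assumed for illustration).
    return 1.0 + 0.5 * z1 + 0.4 * z2 + 0.2 * z1 * z2

def objective(theta):
    t1, t2 = theta
    return abs(B(cmath.exp(1j * t1), cmath.exp(1j * t2)))

def pso(n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    dim, lo, hi = 2, 0.0, 2 * cmath.pi
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = (pos[i][d] + vel[i][d]) % (2 * cmath.pi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

angles, min_mag = pso()
print("min |B| on the boundary:", min_mag)  # clearly above zero: no boundary zero found
```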
2015
Authors
Santos, AS; Madureira, AM; Varela, MLR; Putnik, GD; Kays, HME; Karim, ANM;
Publication
PROCEEDINGS OF THE 2015 10TH IBERIAN CONFERENCE ON INFORMATION SYSTEMS AND TECHNOLOGIES (CISTI 2015)
Abstract
Global competition and the customers' demand for customized products with shorter due dates marked the introduction of the Extended Enterprise. In this Extended Manufacturing Environment (EME), lean, virtual, networked and distributed enterprises collaborate to respond to market demands. In this paper we study the influence of the batch size on the Flexible Flow Shop makespan minimization problem (FFc || Cmax) for two multi-site approaches, the FSBE (Flow Shop Based Factories) and the PMBF (Parallel-Machines Based Factories). The computational study demonstrates how the performance of the PMBF model decreases with the increase of the batch size and determines the batch sizes at which the performance of the two models is similar.
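As a minimal sketch of why the batch size matters for the makespan, the snippet below computes the makespan of a permutation flow shop when identical units are released in batches of different sizes; larger batches reduce the overlap between consecutive machines and the makespan grows. The unit processing times and the batch-splitting rule are assumptions for illustration, not the FSBE or PMBF models evaluated in the paper.

```python
# Sketch: flow shop makespan as a function of batch size (illustrative data).

def flow_shop_makespan(jobs):
    """jobs: list of per-machine processing-time tuples, in sequence order."""
    n_machines = len(jobs[0])
    completion = [0.0] * n_machines
    for job in jobs:
        for m in range(n_machines):
            prev = completion[m - 1] if m > 0 else 0.0
            # C[j][m] = max(C[j-1][m], C[j][m-1]) + p[j][m]
            completion[m] = max(completion[m], prev) + job[m]
    return completion[-1]

def split_into_batches(unit_times, total_units, batch_size):
    """Turn total_units identical items into batch jobs of batch_size units."""
    jobs, remaining = [], total_units
    while remaining > 0:
        units = min(batch_size, remaining)
        jobs.append(tuple(t * units for t in unit_times))
        remaining -= units
    return jobs

unit_times = (2.0, 3.0, 1.5)   # per-unit time on each of 3 machines (assumed)
for batch_size in (1, 5, 10, 20):
    jobs = split_into_batches(unit_times, total_units=20, batch_size=batch_size)
    print(batch_size, flow_shop_makespan(jobs))
```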
2015
Authors
Xiao, XH; Peng, MF; Cardoso, JS; Tang, RJ; Zhou, YL;
Publication
APPLIED PHYSICS A-MATERIALS SCIENCE & PROCESSING
Abstract
Micro-solder joint (MSJ) lifetime prediction methodology and failure analysis (FA) assess reliability through fatigue models, combining theoretical calculations, numerical simulation and experimental methods. Due to the shortened time solder joints spend at high temperature, high-frequency sampling errors that cannot be tolerated in production, including round-off error, may exist in the various models. Combining intermetallic compound (IMC) growth theory with the FA technology used for the magnetic head in actual production, this work puts forward a new growth model to predict the life expectancy of the magnetic head's solder joints. The impact on mechanical performance during the aging process of the IMC generated by the interface reaction between the slider (as the magnetic head is usually called) and the bonding pad is also analyzed. Through further research on the FA of solder ball bonding, the AuSn4 growth model, which affects the solder joint's mechanical properties the least, is chosen to show that the IMC methodology is suitable for forecasting the solder lifetime. Under the working condition of 60 °C, the diffusion constant is 0.015354 and the solder lifetime t is 14.46 years.
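For context on this kind of lifetime estimate, the sketch below uses the classical parabolic (diffusion-driven) IMC growth law x(t) = x0 + k·sqrt(t), which is commonly applied to layers such as AuSn4, and inverts it to obtain the time to reach a critical thickness. The initial thickness, critical thickness, units and the value of k are assumptions for illustration only; they are not the constants or the 14.46-year result derived in the paper.

```python
# Sketch of a parabolic IMC growth model and the lifetime it implies.
import math

def imc_thickness(t_hours, x0_um, k):
    """IMC thickness after t_hours, assuming parabolic growth x = x0 + k*sqrt(t)."""
    return x0_um + k * math.sqrt(t_hours)

def lifetime_hours(x0_um, x_max_um, k):
    """Time until the IMC layer reaches the critical thickness x_max_um."""
    return ((x_max_um - x0_um) / k) ** 2

# Assumed example values (not taken from the paper).
x0, x_max, k = 0.2, 2.0, 0.015   # um, um, um / sqrt(hour)
hours = lifetime_hours(x0, x_max, k)
print(f"estimated lifetime: {hours:.0f} h = {hours / (24 * 365):.2f} years")
```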
2015
Authors
Silva, JMC; Carvalho, P; Lima, SR;
Publication
2015 IEEE SYMPOSIUM ON COMPUTERS AND COMMUNICATION (ISCC)
Abstract
Understanding the network workload through the characterization of network flows is essential for assisting network management tasks and can benefit largely from traffic sampling, as long as an accurate snapshot of network behavior is captured. This paper is devoted to evaluating the real applicability of using sampling to support flow analysis. Considering both classical and emerging sampling techniques, a comparative performance study is carried out to assess the accuracy of estimating flow parameters through sampling. After identifying the main building blocks of sampling-based measurements, a sampling framework has been implemented to provide a versatile and fair platform for carrying out the testing and comparison process. Through an encompassing coverage of representative sampling techniques, the present study aims to provide useful insights regarding the use of sampling in traffic flow analysis.
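A minimal sketch of one of the classical techniques in this family: systematic count-based (1-in-N) packet sampling, with per-flow byte counts estimated by scaling the sampled volumes by N. The synthetic trace, its flow keys and the choice of N are assumptions for illustration; the paper's framework compares a much broader set of techniques.

```python
# Sketch: 1-in-N systematic packet sampling and inverse-probability scaling
# to estimate per-flow traffic volumes.
from collections import defaultdict

def systematic_sample(packets, n):
    """Keep every n-th packet (count-based systematic sampling)."""
    return [pkt for i, pkt in enumerate(packets) if i % n == 0]

def estimate_flow_bytes(sampled_packets, n):
    """Scale sampled byte counts by n to estimate the original flow volumes."""
    estimate = defaultdict(int)
    for flow_key, size in sampled_packets:
        estimate[flow_key] += size * n
    return dict(estimate)

# Tiny synthetic trace: (flow key abbreviated to a label, packet size in bytes).
trace = [("flowA", 1500)] * 80 + [("flowB", 60)] * 20
N = 4
sampled = systematic_sample(trace, N)
print("true  :", {"flowA": 80 * 1500, "flowB": 20 * 60})
print("estim.:", estimate_flow_bytes(sampled, N))
```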
2015
Authors
Teixeira, JF; Couto, M;
Publication
PROGRESS IN ARTIFICIAL INTELLIGENCE
Abstract
Text Mining has opened a vast array of possibilities concerning automatic information retrieval from large amounts of text documents. A variety of themes and types of documents can be easily analyzed. More complex features, such as those used in Forensic Linguistics, can gather deeper understanding from the documents, making it possible to perform difficult tasks such as author identification. In this work we explore the capabilities of simpler Text Mining approaches for author identification of unstructured documents, in particular the ability to distinguish poetic works from two of Fernando Pessoa's heteronyms: Álvaro de Campos and Ricardo Reis. Several processing options were tested and accuracies of 97% were reached, which encourages further developments.
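A minimal sketch of a simple text-mining pipeline for this kind of two-author attribution task: TF-IDF word n-gram features and a linear SVM, evaluated with cross-validation. The corpus layout (one text file per poem, one directory per heteronym) and every parameter choice are assumptions for illustration, not the exact setup evaluated in the paper.

```python
# Sketch: author attribution between two classes with TF-IDF + linear SVM.
from pathlib import Path

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def load_corpus(root):
    """Hypothetical layout: root/<author>/<poem>.txt, one directory per author."""
    texts, labels = [], []
    for author_dir in Path(root).iterdir():          # e.g. campos/ and reis/
        for poem in author_dir.glob("*.txt"):
            texts.append(poem.read_text(encoding="utf-8"))
            labels.append(author_dir.name)
    return texts, labels

texts, labels = load_corpus("poems/")                # hypothetical corpus path
pipeline = make_pipeline(
    TfidfVectorizer(analyzer="word", ngram_range=(1, 2), sublinear_tf=True),
    LinearSVC(),
)
scores = cross_val_score(pipeline, texts, labels, cv=5)
print("cross-validated accuracy:", scores.mean())
```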