2014
Authors
Spek, S; Gonçalves, V; Rits, O; Altman, Z; Destré, C;
Publication
IEEE Wireless Communications and Networking Conference, WCNC
Abstract
Developments in autonomic network management offer many promises, but their economic benefits are hard to assess. This paper proposes a method to calculate the OPEX gains of a typical network on the basis of a scenario for which a management framework and autonomous mechanisms have been developed. It makes use of a novel approach, a Toy model, taking into account expert opinions as well as simulation results. The new approach allows assessment of the OPEX impact of individual mechanisms as well as of the overall impact, which, in the present scenario, results in an expected OPEX saving of 11 to 13 per cent. © 2014 IEEE.
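One way such per-mechanism savings could be aggregated into an overall figure is to combine them multiplicatively, treating each mechanism as reducing the OPEX that remains after the others. This is a purely hypothetical sketch: the paper's Toy model weighs expert opinions against simulation results, and the function and sample figures below are illustrative assumptions, not the paper's method or data.

```python
def combined_opex_saving(mechanism_savings):
    """Combine per-mechanism OPEX reductions (fractions in [0, 1])
    under an independence assumption: each mechanism shaves its
    fraction off the OPEX left over by the previous ones."""
    remaining = 1.0
    for saving in mechanism_savings:
        remaining *= (1.0 - saving)
    return 1.0 - remaining

# Three illustrative mechanisms saving 5%, 4% and 3% individually
# combine to slightly less than their 12% sum.
overall = combined_opex_saving([0.05, 0.04, 0.03])
```

Note that the combined figure is below the naive sum of the individual savings, which is one reason per-mechanism and overall impacts are assessed separately.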
2014
Authors
Roque, LAC; Fontes, DBMM; Fontes, FACC;
Publication
JOURNAL OF COMBINATORIAL OPTIMIZATION
Abstract
This work proposes a hybrid genetic algorithm (GA) to address the unit commitment (UC) problem. In the UC problem, the goal is to schedule a subset of a given group of electrical power generating units and also to determine their production output in order to meet energy demands at minimum cost. In addition, the solution must satisfy a set of technological and operational constraints. The algorithm developed is a hybrid biased random key genetic algorithm (HBRKGA). It uses random keys to encode the solutions and introduces bias both in the parent selection procedure and in the crossover strategy. To intensify the search close to good solutions, the GA is hybridized with local search. Tests have been performed on benchmark large-scale power systems. The computational results demonstrate that the HBRKGA is effective and efficient. In addition, it is also shown that it improves the solutions obtained by current state-of-the-art methodologies.
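The two biased ingredients the abstract names can be sketched compactly: random keys encode a solution as a real vector, and the crossover copies each gene from the elite parent with a fixed probability. This is a minimal illustration of the general biased random-key GA machinery; the `decode` function is a placeholder, since the paper's unit-commitment decoder must also set production levels subject to operational constraints.

```python
import random

def biased_crossover(elite, non_elite, rho=0.7):
    """Parameterized uniform crossover of a biased random-key GA:
    each gene is inherited from the elite parent with probability rho."""
    return [e if random.random() < rho else n
            for e, n in zip(elite, non_elite)]

def decode(keys):
    """Illustrative decoder: a random-key vector induces a priority
    order over units (indices sorted by ascending key)."""
    return sorted(range(len(keys)), key=lambda i: keys[i])

# A child always mixes genes of its two parents, biased toward the elite one.
child = biased_crossover([0.1, 0.1, 0.1, 0.1], [0.9, 0.9, 0.9, 0.9])
```

A full HBRKGA would embed these operators in a generational loop with an elite set, random mutants, and the local-search step used for hybridization.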
2014
Authors
Abreu, R; Cunha, J; Fernandes, JP; Martins, P; Perez, A; Saraiva, J;
Publication
2014 IEEE INTERNATIONAL CONFERENCE ON SOFTWARE MAINTENANCE AND EVOLUTION (ICSME)
Abstract
Despite being staggeringly error prone, spreadsheets are a highly flexible programming environment that is widely used in industry. In fact, spreadsheets are widely adopted for decision making, and decisions taken upon wrong (spreadsheet-based) assumptions may have serious economic impacts on businesses, among other consequences. This paper proposes a technique to automatically pinpoint potential faults in spreadsheets. It combines a catalog of spreadsheet smells, which provide a first indication of a potential fault, with a generic spectrum-based fault localization strategy in order to improve on these initial results (in terms of accuracy and false positive rate). Our technique has been implemented in a tool which helps users detect faults. To validate the proposed technique, we consider a well-known and well-documented catalog of faulty spreadsheets. Our experiments yield two main results: we were able to distinguish smells that can point to faulty cells from those that cannot; and we provide a technique capable of detecting a significant number of errors: two thirds of the cells labeled as faulty are in fact (documented) errors.
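Spectrum-based fault localization ranks program entities (here, cells) by the similarity between their activity pattern across runs and the observed failures. A minimal sketch using the Ochiai coefficient, a common choice in this family; the abstract does not say which coefficient the tool uses, and the cell names and spectra below are made up:

```python
import math

def ochiai_ranking(spectrum, errors):
    """Rank cells by Ochiai similarity between each cell's involvement
    vector (1 if the cell was involved in a run) and the error vector
    (1 if the run failed). Higher score = more suspicious."""
    n_failing = sum(errors)
    scores = {}
    for cell, hits in spectrum.items():
        n_ef = sum(1 for h, e in zip(hits, errors) if h and e)      # involved, run failed
        n_ep = sum(1 for h, e in zip(hits, errors) if h and not e)  # involved, run passed
        denom = math.sqrt(n_failing * (n_ef + n_ep))
        scores[cell] = n_ef / denom if denom else 0.0
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical spectra over three runs, of which the first and third failed.
spectrum = {"A1": [1, 0, 1], "B2": [1, 1, 0], "C3": [0, 1, 0]}
ranking = ochiai_ranking(spectrum, errors=[1, 0, 1])
```

In the paper's setting, the smell catalog supplies the initial suspects, and a ranking of this kind refines them.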
2014
Authors
Gama, J;
Publication
ICT Innovations 2014 - World of Data, Ohrid, Macedonia, 1-4 October, 2014
Abstract
Machine learning studies automatic methods for the acquisition of domain knowledge with the goal of improving system performance as the result of experience. In the past two decades, machine learning research and practice have focused on batch learning, usually with small data sets. The rationale behind this practice is that examples are generated at random according to some stationary probability distribution. Most learners use a greedy, hill-climbing search in the space of models. They are prone to overfitting, local maxima, etc. Data are scarce and statistical estimates have high variance. A paradigmatic example is the TDIDT algorithm to learn decision trees [14]. As the tree grows, fewer and fewer examples are available to compute the sufficient statistics, so variance increases, leading to model instability. Moreover, the growing process re-uses the same data, exacerbating the overfitting problem. Regularization and pruning mechanisms are therefore mandatory. © Springer International Publishing Switzerland 2015.
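The instability argument can be made concrete: the class proportion a node must estimate has a standard error that grows as the node's example count shrinks, so splits deep in the tree rest on noisier statistics. A minimal sketch under the usual binomial model; the function names are illustrative, not from the text:

```python
import math

def binary_entropy(p):
    """Binary entropy in bits, the impurity measure a TDIDT split optimizes."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def proportion_std_error(p, n):
    """Standard error of an estimated class proportion p at a node
    holding n examples, under a binomial sampling model."""
    return math.sqrt(p * (1.0 - p) / n)

# Deep nodes see fewer examples, so their estimates get noisier:
# n = 1000 -> ~0.016, n = 100 -> 0.05, n = 10 -> ~0.158 (for p = 0.5).
errors = [proportion_std_error(0.5, n) for n in (1000, 100, 10)]
```

The monotone growth of this error down the tree is exactly why pruning and regularization become mandatory in the batch setting.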
2014
Authors
Couceiro, MS; Martins, FML; Rocha, RP; Ferreira, NMF;
Publication
JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS
Abstract
The Darwinian Particle Swarm Optimization (DPSO) is an evolutionary algorithm that extends the Particle Swarm Optimization (PSO) using natural selection, or survival of the fittest, to enhance the ability to escape from local optima. An extension of the DPSO to multi-robot applications has recently been proposed and denoted Robotic Darwinian PSO (RDPSO), which benefits from dynamically partitioning the whole population of robots. The RDPSO thereby decreases the amount of information exchange required among robots and is scalable to large populations of robots. This paper presents a stability analysis of the RDPSO to better understand the relationship between the algorithm parameters and the robots' convergence. Moreover, the analysis of the RDPSO is further extended for real robot constraints (e.g., robot dynamics, obstacles and communication constraints) and experimental assessment with physical robots. The optimal parameters are evaluated in groups of physical robots and in a larger population of simulated mobile robots for different target distributions within larger scenarios. Experimental results show that robots are able to converge regardless of the RDPSO parameters within the defined attraction domain. However, a more conservative parametrization has a significant influence on the convergence time. To further evaluate the proposed approach, the RDPSO is compared with four state-of-the-art swarm robotic alternatives under simulation. It is observed that the RDPSO algorithm converges to the optimal solution faster and more accurately than the other approaches.
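The parameters whose stability is analyzed enter through the canonical PSO update that DPSO and RDPSO build on: an inertia term plus stochastic attraction toward the particle's best and the swarm's best positions. A minimal per-particle sketch; the coefficient names `w`, `c1`, `c2` are the standard PSO ones, not notation taken from the paper:

```python
import random

def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One canonical PSO update for a single particle:
    inertia (w) plus random pulls toward the personal best (c1)
    and the global best (c2). RDPSO layers natural selection and
    robot-team partitioning on top of this core rule."""
    new_vel = [w * v
               + c1 * random.random() * (pb - x)
               + c2 * random.random() * (gb - x)
               for x, v, pb, gb in zip(pos, vel, pbest, gbest)]
    new_pos = [x + v for x, v in zip(pos, new_vel)]
    return new_pos, new_vel
```

The stability question the paper studies is visible even here: with both bests at the current position the attraction terms vanish, and the velocity simply contracts by the inertia factor `w`, so the choice of `w`, `c1`, `c2` governs whether trajectories settle or oscillate.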
2014
Authors
Anugu, N; Garcia, P; Amorim, A; Gordo, P; Eisenhauer, F; Perrin, G; Brandner, W; Straubmeier, C; Perraut, K;
Publication
ADAPTIVE OPTICS SYSTEMS IV
Abstract
The GRAVITY acquisition camera has four 9x9 Shack-Hartmann sensors operating in the near-infrared. It measures the slow variations of the distorted wavefronts of four telescope beams simultaneously by imaging the Galactic Center field. Since each Shack-Hartmann lenslet image contains the crowded Galactic Center stellar field, an extended object, the local shifts of the distorted wavefront have to be estimated with a correlation algorithm. In this paper we report on the accuracy of six existing centroid algorithms for the Galactic Center stellar field. We show that the VLTI tunnel atmospheric turbulence phases are reconstructed with a precision of 100 nm at 2 s integration.
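As a rough illustration of the kind of estimator such comparisons start from, a plain center-of-gravity centroid over a lenslet sub-image can be sketched as follows. This is a generic toy, not necessarily one of the six algorithms the paper evaluates, and it is precisely the kind of estimator that degrades on an extended, crowded field, motivating the correlation approach:

```python
def center_of_gravity(image):
    """Intensity-weighted centroid (x, y) of a 2-D sub-image given as
    a list of rows. For a point source this tracks the spot position;
    for a crowded extended field it is biased by neighboring stars."""
    total = sum(v for row in image for v in row)
    cx = sum(x * v for row in image for x, v in enumerate(row)) / total
    cy = sum(y * v for y, row in enumerate(image) for v in row) / total
    return cx, cy

# A single bright pixel at column 2, row 1 of a tiny 3x4 sub-image:
spot = [[0, 0, 0, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 0]]
cx, cy = center_of_gravity(spot)
```

The per-lenslet shifts recovered by such estimators (or by correlation against a reference frame) are what get integrated into the reconstructed wavefront phases.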