2016
Authors
Martin, OA; Correia, CM; Gendron, E; Rousset, G; Gratadour, D; Vidal, F; Morris, TJ; Basden, AG; Myers, RM; Neichel, B; Fusco, T;
Publication
ADAPTIVE OPTICS SYSTEMS V
Abstract
CANARY is an open-loop tomographic adaptive optics (AO) demonstrator designed for the 4.2 m William Herschel Telescope (WHT) in La Palma. Gearing up to extensive statistical studies of high-redshift galaxies surveyed with Multi-Object Spectrographs (MOS), CANARY was designed to tackle the technical challenges of open-loop AO control with mixed Natural Guide Star (NGS) and Laser Guide Star (LGS) tomography. We have developed a Point Spread Function (PSF) reconstruction algorithm dedicated to MOAO systems that uses system telemetry to estimate the PSF potentially anywhere in the observed field, a prerequisite for deconvolving AO-corrected science observations in Integral Field Spectroscopy (IFS). Additionally, the ability to accurately reconstruct the PSF reflects a broad and fine-grained understanding of the residual error contributors, both atmospheric and opto-mechanical. In this paper we compare the classical PSF reconstruction approach of Véran (1), taken as the on-axis reference using the truth-sensor telemetry, to one tailored to atmospheric tomography that handles the off-axis data only. We have post-processed over 450 on-sky CANARY data sets, for which the reconstructed Strehl Ratio (SR) and Full Width at Half Maximum (FWHM) correlate with the sky values at 92% and 88%, respectively. The reference method achieves 95% and 92.5% by directly exploiting the residual-phase measurements from the CANARY Truth Sensor (TS).
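A minimal sketch of one ingredient of telemetry-based PSF reconstruction, assuming only the extended Maréchal approximation (SR ≈ exp(-σ²_φ)); the residual OPD values below are hypothetical placeholders and this is not the paper's full tomographic reconstructor:

    import numpy as np

    def strehl_from_residual_opd(residual_opd_m, wavelength_m=1.65e-6):
        # Extended Marechal approximation: SR ~= exp(-sigma_phi^2), where
        # sigma_phi is the residual wavefront phase RMS in radians.
        sigma_phi = 2.0 * np.pi * np.std(residual_opd_m) / wavelength_m
        return np.exp(-sigma_phi ** 2)

    # Hypothetical example: a residual OPD map (metres) such as a telemetry-based
    # reconstructor might produce for one data set (~200 nm RMS here).
    rng = np.random.default_rng(0)
    residual_opd = rng.normal(0.0, 200e-9, size=10_000)
    print(f"Estimated H-band Strehl ratio: {strehl_from_residual_opd(residual_opd):.2f}")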
2016
Authors
Battaglia, D; Borchardt, M; Patricio, L;
Publication
PRODUCT-SERVICE SYSTEMS ACROSS LIFE CYCLE
Abstract
This study analyses how drivers of PSS enable supplier companies to adopt integrated solutions in B2B relationships. Two case studies were performed in two large supplier companies that operate in different segments and represent a significant share of the Brazilian market. The findings show that the adopted PSS strategies enable the two companies to operate their customers' systems and to price their offerings according to the established performance. The strategies adopted by the companies provide more rigorous knowledge of the products and services, promote buyer support over the life cycle, and strengthen the relationships with buyers. (C) 2016 The Authors. Published by Elsevier B.V.
2016
Authors
Paterakis, NG; Pappi, IN; Catalao, JPS; Erdinc, O;
Publication
2016 IEEE POWER AND ENERGY SOCIETY GENERAL MEETING (PESGM)
Abstract
In this paper, a novel real-time rolling horizon optimization framework for the optimal operation of a smart household is presented. A home energy management system (HEMS) model based on mixed-integer linear programming (MILP) is developed in order to minimize the energy procurement cost, considering that the household is enrolled in a dynamic pricing tariff scheme. Several assets such as a photovoltaic (PV) installation, an electric vehicle (EV) and controllable appliances are considered. Additionally, the energy from the PV and the EV can be used either to satisfy the household demand or be sold back to the grid. The uncertainty of the PV production is estimated using time-series models and by performing forecasts on a rolling basis. Also, an appropriate distribution is used in order to model the uncertainty related to the EV. Besides, several parameters can be updated in real time in order to reflect changes in demand and to consider the end-user's preferences. The optimization algorithm is executed on a regular basis in order to improve the results against uncertainty.
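A minimal sketch of the kind of MILP a HEMS might solve at each rolling-horizon step, assuming the PuLP library; the tariff, PV forecast, demand and EV figures below are hypothetical placeholders, not values from the paper:

    # Minimal single-step HEMS MILP sketch (assumes PuLP; all data are hypothetical).
    import pulp

    prices = [0.10, 0.25, 0.30, 0.12]           # dynamic tariff per step (EUR/kWh)
    pv_forecast = [0.0, 1.2, 1.5, 0.3]          # forecast PV output (kWh)
    demand = [0.8, 1.0, 1.3, 0.9]               # base household demand (kWh)
    T = range(len(prices))

    prob = pulp.LpProblem("HEMS_step", pulp.LpMinimize)
    grid_buy = pulp.LpVariable.dicts("buy", T, lowBound=0)    # energy bought from grid
    grid_sell = pulp.LpVariable.dicts("sell", T, lowBound=0)  # energy sold back
    ev_on = pulp.LpVariable.dicts("ev_on", T, cat="Binary")   # EV charging on/off
    ev_charge = 2.0                                           # kWh drawn per charging step

    # Objective: procurement cost minus revenue from energy sold back to the grid.
    prob += pulp.lpSum(prices[t] * grid_buy[t] - 0.05 * grid_sell[t] for t in T)

    # Energy balance at each step, plus a minimum EV charging requirement.
    for t in T:
        prob += grid_buy[t] + pv_forecast[t] == demand[t] + ev_charge * ev_on[t] + grid_sell[t]
    prob += pulp.lpSum(ev_on[t] for t in T) >= 2   # EV must charge for at least 2 steps

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print([pulp.value(grid_buy[t]) for t in T], [int(pulp.value(ev_on[t])) for t in T])

In a rolling-horizon scheme, a problem of this shape would be re-built and re-solved each time the forecasts or the end-user's preferences are updated.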
2016
Authors
Oliveira, JN; Miraldo, VC;
Publication
JOURNAL OF LOGICAL AND ALGEBRAIC METHODS IN PROGRAMMING
Abstract
Faced with the need to quantify software (un)reliability in the presence of faults, the semantics of state-based systems is urged to evolve towards quantified (e.g. probabilistic) nondeterminism. When one is approaching such semantics from a categorical perspective, this inevitably calls for some technical elaboration, in a monadic setting. This paper proposes that such an evolution be undertaken without sacrificing the simplicity of the original (qualitative) definitions, by keeping quantification implicit rather than explicit. The approach is a monad lifting strategy whereby, under some conditions, definitions can be preserved provided the semantics moves to another category. The technique is illustrated by showing how to introduce probabilism in an existing software component calculus, by moving to a suitable category of matrices and using linear algebra in the reasoning. The paper also addresses the problem of preserving monadic strength in the move from original to target (Kleisli) categories, a topic which bears relationship to recent studies in categorial physics.
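An illustrative sketch of the underlying idea of quantified nondeterminism, using a finite probability-distribution monad in Python with its unit and Kleisli bind; this is only a generic illustration, not the component calculus or the matrix category developed in the paper:

    from collections import defaultdict

    def unit(x):
        # Dirac distribution: the deterministic (qualitative) case.
        return {x: 1.0}

    def bind(dist, f):
        # Kleisli composition: f maps a value to a distribution over results.
        out = defaultdict(float)
        for x, px in dist.items():
            for y, py in f(x).items():
                out[y] += px * py
        return dict(out)

    # A hypothetical 'faulty' component: succeeds with probability 0.95.
    def faulty_inc(n):
        return {n + 1: 0.95, n: 0.05}

    # Composing the component with itself propagates the failure probabilities.
    print(bind(bind(unit(0), faulty_inc), faulty_inc))
    # -> {2: 0.9025, 1: 0.095, 0: 0.0025}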
2016
Authors
Pathak, AK; Bhardwaj, V; Gangwar, RK; Singh, VK;
Publication
Proceedings of the 2015 International Conference on Microwave and Photonics, ICMAP 2015
Abstract
In this paper we present a surface plasmon resonance (SPR) based fiber sensor for the measurement of refractive index (RI) at different concentrations of glycerol and acetone. The sensing head of the fiber probe was fabricated by depositing aluminum (Al) on the unclad portion of a multi-mode fiber (MMF). The experimental results show that the sensitivity obtained from power measurements with the SPR fiber probe was -106.95 dBm/RIU for glycerol and -408.90 dBm/RIU for acetone. Owing to its small size, good sensitivity and low cost, our SPR-based fiber sensor has many commercial and practical uses. © 2015 IEEE.
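A short sketch of how a power-based sensitivity in dBm/RIU can be extracted from calibration data as the slope of transmitted power versus refractive index; the RI and power values below are hypothetical, not the measurements reported in the paper:

    import numpy as np

    ri = np.array([1.333, 1.340, 1.347, 1.354])          # sample refractive indices
    power_dbm = np.array([-12.1, -12.9, -13.6, -14.4])   # transmitted power (dBm)

    # Sensitivity is the slope of the power-vs-RI calibration line.
    slope, intercept = np.polyfit(ri, power_dbm, 1)
    print(f"Sensitivity: {slope:.2f} dBm/RIU")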
2016
Authors
Alberto Martinez Angeles, CA; Dutra, I; Costa, VS; Buenabad Chavez, J;
Publication
INDUCTIVE LOGIC PROGRAMMING, ILP 2015
Abstract
Markov Logic is an expressive and widely used knowledge representation formalism that combines logic and probabilities, providing a powerful framework for inference and learning tasks. Most Markov Logic implementations perform inference by transforming the logic representation into a set of weighted propositional formulae that encode a Markov network, the ground Markov network. Probabilistic inference is then performed over the grounded network. Constructing, simplifying, and evaluating the network are the main steps of the inference phase. As the size of a Markov network can grow rather quickly, Markov Logic Network (MLN) inference can become very expensive, motivating a rich vein of research on the optimization of MLN performance. We claim that parallelism can play a large role in this task. Namely, we demonstrate that widely available Graphics Processing Units (GPUs) can be used to improve the performance of a state-of-the-art MLN system, Tuffy, with minimal changes. Indeed, comparing the performance of our GPU-based system, TuGPU, to that of the Alchemy, Tuffy and RockIt systems on three widely used applications shows that TuGPU is up to 15x faster than the other systems.
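An illustrative sketch of the quantity an MLN defines over a ground network, P(world) ∝ exp(Σ_i w_i n_i(world)) with n_i the number of satisfied groundings of formula i; this is generic MLN semantics with hypothetical atoms and weights, not the TuGPU implementation:

    import math
    from itertools import product

    # Hypothetical ground atoms and two weighted ground formulas over them.
    atoms = ["Smokes(A)", "Smokes(B)", "Cancer(A)"]
    formulas = [
        (1.5, lambda w: (not w["Smokes(A)"]) or w["Cancer(A)"]),  # Smokes(A) => Cancer(A)
        (1.1, lambda w: w["Smokes(A)"] == w["Smokes(B)"]),        # friends smoke alike
    ]

    def unnormalized_weight(world):
        # exp of the summed weights of the satisfied ground formulas.
        return math.exp(sum(wt for wt, f in formulas if f(world)))

    # Normalize over all 2^3 possible worlds to obtain exact probabilities.
    worlds = [dict(zip(atoms, vals)) for vals in product([False, True], repeat=len(atoms))]
    Z = sum(unnormalized_weight(w) for w in worlds)
    target = {"Smokes(A)": True, "Smokes(B)": True, "Cancer(A)": True}
    print(f"P(world) = {unnormalized_weight(target) / Z:.3f}")

Exact enumeration like this is only feasible for toy networks; real MLN systems ground, simplify and then run approximate inference over the resulting network, which is the stage the GPU parallelism targets.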