Publications

2021

Synthetic dataset to study breaks in the consumer's water consumption patterns

Authors
Santos, MC; Borges, AI; Carneiro, DR; Ferreira, FJ;

Publication
ICoMS 2021: 4th International Conference on Mathematics and Statistics, Paris, France, June 24 - 26, 2021

Abstract
Breaks in water consumption records can represent apparent losses, which are generally associated with volumes of water that are consumed but not billed. Detecting these losses at the appropriate time can have a significant economic impact on a water company's revenues. However, the real datasets available to test and evaluate current break detection methods are not always large enough or do not exhibit abnormal water consumption patterns. This study proposes an approach to generate synthetic water consumption data with structural breaks that follows the statistical properties of real datasets from a hotel and a hospital. The parameters of the probability distributions that best fit the real water consumption data (gamma, Weibull, log-normal, log-logistic, and exponential) are used to generate the new datasets. Two decreasing breaks in the mean were inserted into each new dataset, associated with one selected probability distribution for each case study, over a time horizon of 914 days. Three change point detection methods provided by the R packages strucchange and changepoint were evaluated on these new datasets. Based on the Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) performance indices, the breakpoints method provided by the strucchange package showed the highest performance.
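
As a concrete illustration of the generation scheme described in the abstract, the sketch below draws 914 daily consumption values from a gamma distribution and rescales its scale parameter at two points to create decreasing breaks in the mean. The gamma parameters and break locations are illustrative assumptions, not the fitted values from the study, and detection is sketched with the Python ruptures package as a stand-in for the R strucchange and changepoint packages evaluated in the paper.

```python
import numpy as np
import ruptures as rpt

rng = np.random.default_rng(42)

# Hypothetical best-fit parameters for daily consumption (shape, scale);
# in the study these come from distributions fitted to real hotel and
# hospital data.
shape, scale = 2.0, 50.0
T = 914                       # time horizon in days, as in the paper
bounds = [0, 300, 600, T]     # hypothetical break locations
drops = [1.0, 0.8, 0.6]       # mean scaled down after each break

segments = [
    rng.gamma(shape, scale * drops[k], size=bounds[k + 1] - bounds[k])
    for k in range(len(drops))
]
series = np.concatenate(segments)

# Detection sketch with `ruptures` (a Python analogue of the R packages
# used in the paper), asking for the two inserted breaks.
bkps = rpt.Binseg(model="l2").fit(series).predict(n_bkps=2)
print(bkps)   # estimated break indices, plus the series endpoint T
```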

2021

The Role of Collaboration for Sustainable and Efficient Urban Logistics

Authors
Carvalho, L; de Sousa, JF; de Sousa, JP;

Publication
BOOSTING COLLABORATIVE NETWORKS 4.0: 21ST IFIP WG 5.5 WORKING CONFERENCE ON VIRTUAL ENTERPRISES, PRO-VE 2020

Abstract
The scarcity of resources is one of the main concerns for the present and the future of the environment and society. The "load factor" in logistics transport has great potential for improvement, especially in last-mile deliveries, as the transport of goods is largely fragmented among several small companies using small vehicles. This paper investigates the potential of collaboration to increase efficiency in urban logistics. Based on an overview of the concepts and initiatives regarding vertical and horizontal collaboration, a research agenda is proposed.

2021

Optimal Peak Shaving Control Using Dynamic Demand and Feed-In Limits for Grid-Connected PV Sources With Batteries

Authors
Manojkumar, R; Kumar, C; Ganguly, S; Catalao, JPS;

Publication
IEEE SYSTEMS JOURNAL

Abstract
Peak shaving of utility grid power is an important application that benefits both grid operators and end users. In this article, an optimal rule-based peak shaving control strategy with dynamic demand and feed-in limits is proposed for grid-connected photovoltaic (PV) systems with battery energy storage systems. A method to determine the demand and feed-in limits from day-ahead predictions of the load demand and PV power profiles is developed. Furthermore, an optimal rule-based control strategy that determines day-ahead charge/discharge schedules of the battery for peak shaving of utility grid power is proposed. The rules are formulated such that the peak utility grid demand and feed-in powers are limited to the corresponding demand and feed-in limits of the day, while ensuring that the state of charge (SoC) of the battery at the end of the day equals the SoC at the start of the day. The optimal inputs required to apply the proposed rule-based control strategy are determined using a genetic algorithm that minimizes the peak energy drawn from the utility grid. The proposed control algorithm is tested on various PV power and load demand profiles using MATLAB.
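
The clipping logic behind such a rule-based strategy can be sketched in a few lines. The toy Python dispatch below keeps grid power within a demand limit and a feed-in limit by charging or discharging a battery; all names and parameters are illustrative assumptions, and it omits the paper's genetic algorithm optimization of the limits and the end-of-day SoC restoration constraint.

```python
import numpy as np

def peak_shave(load, pv, p_demand, p_feed, e_cap, p_batt, soc0=0.5, dt=1.0):
    """Toy rule-based dispatch keeping grid power within [-p_feed, p_demand].

    load, pv : day-ahead forecast profiles (kW), equal length
    p_demand : demand limit (kW); p_feed : feed-in limit (kW)
    e_cap    : battery capacity (kWh); p_batt : power rating (kW)
    """
    soc = soc0 * e_cap
    grid, batt = [], []
    for l, p in zip(load, pv):
        net = l - p                       # > 0: importing, < 0: exporting
        if net > p_demand:                # demand peak -> discharge battery
            d = min(net - p_demand, p_batt, soc / dt)
            soc -= d * dt
            net -= d
            batt.append(-d)
        elif net < -p_feed:               # PV surplus peak -> charge battery
            c = min(-p_feed - net, p_batt, (e_cap - soc) / dt)
            soc += c * dt
            net += c
            batt.append(c)
        else:
            batt.append(0.0)              # within limits -> battery idle
        grid.append(net)
    return np.array(grid), np.array(batt), soc
```

The paper's genetic algorithm would, on top of such rules, select the inputs that minimize peak energy drawn from the grid while restoring the battery's SoC by the end of the day.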

2021

Extended Kalman Filter-Based Approach for Nodal Pricing in Active Distribution Networks

Authors
Sharifinia, S; Allahbakhshi, M; Arefi, MM; Tajdinian, M; Shafie khah, M; Niknam, T; Catalao, JPS;

Publication
IEEE SYSTEMS JOURNAL

Abstract
This article presents an analytical approach based on the Extended Kalman Filter (EKF) for nodal pricing in distribution networks containing privately owned distributed generation (DG). An appropriate nodal pricing policy can steer an active distribution network (ADN) toward an optimal operating mode with minimum losses. However, a nodal pricing model faces several crucial challenges, such as equitable loss allocation among DGs, a minimum merchandising surplus (MS), and equitable distribution of remuneration among DGs, and it is difficult to achieve these goals simultaneously. In the proposed method, these requirements are embedded in the EKF updates: the measurement update reduces the MS, and in the time update, the DG nodal prices, treated as state variables, are modified according to their contribution to loss reduction. All aspects of the problem are therefore considered and modeled simultaneously, providing a realistic state estimation tool for distribution companies in the subsequent operation step. The proposed method can also determine nodal prices for distribution network buses over a wide range of power supply point (PSP) prices, where other methods have failed, especially at very low or high PSP prices. Eventually, using the new method moves the system toward the minimum possible losses under equitable conditions. The application of the proposed nodal pricing method is illustrated on a 17-bus radial distribution test system, and the results are compared with other methods.
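
The abstract maps the pricing goals onto the two standard EKF steps. The sketch below shows that generic recursion in Python; the specific state, measurement, and Jacobian models (how nodal prices relate to loss reduction and to the merchandising surplus) are the paper's contribution and appear here only as assumed placeholder functions.

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One generic extended Kalman filter iteration.

    x, P : state estimate (here: DG nodal prices) and its covariance
    z    : measurement (here: a quantity tied to the merchandising surplus)
    f, F : time-update model and its Jacobian (assumed to encode each DG's
           contribution to loss reduction, per the abstract)
    h, H : measurement model and its Jacobian
    Q, R : process and measurement noise covariances
    """
    # Time update: nodal prices evolve with loss-reduction contributions
    x_pred = f(x)
    F_k = F(x)
    P_pred = F_k @ P @ F_k.T + Q

    # Measurement update: corrects the prices so the MS is driven down
    H_k = H(x_pred)
    S = H_k @ P_pred @ H_k.T + R
    K = P_pred @ H_k.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new
```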

2021

Using network features for credit scoring in microfinance

Authors
Paraíso, P; Ruiz, S; Gomes, P; Rodrigues, L; Gama, J;

Publication
International Journal of Data Science and Analytics

Abstract

2021

HEP-Frame: Improving the efficiency of pipelined data transformation & filtering for scientific analyses

Authors
Pereira, A; Proenca, A;

Publication
COMPUTER PHYSICS COMMUNICATIONS

Abstract
Software to analyse very large sets of experimental data often relies on a pipeline of irregular computational tasks with decisions to remove irrelevant data from further processing. A user-centred framework, HEP-Frame, was designed and deployed to aid domain experts in developing applications for scientific data analyses and in monitoring and controlling their efficient execution. The key feature of HEP-Frame is the performance portability of the code across different heterogeneous platforms, due to a novel adaptive multi-layer scheduler seamlessly integrated into the tool, an approach not available in competing frameworks. The multi-layer scheduler transparently allocates parallel data/tasks across the available heterogeneous resources, dynamically balances threads between data input and computational tasks, adaptively reorders at run-time the parallel execution of the pipeline stages for each data stream while respecting data dependencies, and efficiently manages the execution of library functions in accelerators. Each layer implements a specific scheduling strategy: one balances the execution of the computational stages of the pipeline, distributing the stages of the same or different dataset elements among the available computing threads; another controls the order of pipeline stage execution, so that most data is filtered out early and the later stages execute the computationally heavy tasks; yet another adaptively balances the automatically created threads between data input and the computational tasks, taking into account the requirements of each application. Simulated data analyses from sensors in the ATLAS Experiment at CERN evaluated the scheduler's efficiency on dual multicore Xeon servers with and without accelerators, and on servers with the many-core Intel KNL. Experimental results show significantly improved performance of these data analyses due to HEP-Frame's features, and the code scaled well on multiple servers. Results also show that the HEP-Frame scheduler outperforms its key competitor, the HEFT list scheduler. The overall performance improvement over a carefully tuned sequential data analysis was impressive on homogeneous, heterogeneous, and many-core servers: 81x faster on the homogeneous 24+24 core Skylake server, 86x faster on the heterogeneous 12+12 core Ivy Bridge server with the Kepler GPU, and 252x faster on the 64-core KNL server.
Program summary
Program title: HEP-Frame
CPC Library link to program files: https://doi.org/10.17632/m2jwxshtfz.1
Licensing provisions: GPLv3
Programming language: C++
Supplementary material: The current HEP-Frame public release is available at https://bitbucket.org/ampereira/hep-frame/wiki/Home
Nature of problem: Scientific data analysis applications are often developed to process large amounts of data obtained through experimental measurements or Monte Carlo simulations, aiming to identify patterns in the data or to test and/or validate theories. These large inputs are usually processed by a pipeline of computational tasks that may filter out irrelevant data (a task and its filter are addressed as a proposition in this communication), preventing it from being processed by subsequent tasks in the pipeline. This data filtering, coupled with the fact that propositions may have different computational intensities, contributes to the irregularity of the pipeline execution. This can lead to I/O-, memory-, or compute-bound performance limitations in scientific data analyses, depending on the implemented algorithms and input data. To allow scientists to process more data with more accurate results, their code and data structures should be optimized for the computing resources they can access. Since the main goal of most scientists is to obtain results relevant to their scientific fields, often within strict deadlines, optimizing the performance of their applications is very time consuming and is usually overlooked. Scientists require a software framework to aid the design and development of efficient applications and to control their parallel execution on distinct computing platforms.
Solution method: This work proposes HEP-Frame, a framework to aid the development and efficient execution of pipelined scientific analysis applications on homogeneous and heterogeneous servers. HEP-Frame is a user-centred framework that helps scientists develop applications to analyse data from a large number of dataset elements, with a flexible pipeline of propositions. It not only stresses the interface to domain experts, so that code is more robust and developed faster, but also aims at high-performance portability across different types of parallel computing platforms and at desirable sustainability features. The framework provides efficient parallel code execution without requiring user expertise in parallel computing. Frameworks to aid the design and deployment of scientific code usually fall into two categories: (i) resource-centred frameworks, closer to the computing platforms, where execution efficiency and performance portability are the main goals but developers are forced to adapt their code to strict framework constraints; and (ii) user-centred frameworks, which stress the interface to domain experts to improve code development speed and robustness, aiming to provide desirable sustainability features but disregarding execution performance. There is also a set of frameworks for scientific computing that merge these two categories (Liu et al., 2015 [1]; Deelman et al., 2015 [2]). While they do not have steep learning curves, concessions have to be made in their ease of use to allow for their broader scope of targeted applications. HEP-Frame attempts to bridge this gap, placing itself between fully user-centred and fully resource-centred frameworks, so that users develop code quickly and do not have to worry about its computational efficiency. It addresses (i) by ensuring efficient execution of applications, according to their computational requirements and the available resources on the server, through a multi-layer scheduler, while (ii) is addressed by automatically generating code skeletons, transparently managing the data structure, and automating repetitive tasks.
Additional comments: An early-stage proof of concept was published in conference proceedings (Pereira et al., 2015). However, the HEP-Frame version presented in this communication shares only a very small portion of that code, related to skeleton generation (less than 5% of the overall code), while the rest of the user interface, the multi-layer scheduler, and the parallelization strategies were completely redesigned and re-implemented.
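
A core idea of the multi-layer scheduler, reordering pipeline stages so that cheap, highly selective propositions run first, can be illustrated with a small sequential sketch. HEP-Frame itself is C++ and multithreaded; the Python toy below only demonstrates a greedy reordering heuristic, and all names and the scoring rule are assumptions rather than the framework's actual policy.

```python
import time

def score(s):
    """Lower is better: stages that reject most events per unit cost go
    first. Untested stages score 0.0 so they get tried early (toy choice)."""
    if not s["calls"]:
        return 0.0
    return (s["passed"] / s["calls"]) * (s["cost"] / s["calls"])

def run_pipeline(events, propositions):
    """Sequential toy of one scheduler layer: periodically reorder the
    pipeline so that cheap, highly selective propositions run first.
    Each proposition is a predicate returning False to drop the event."""
    stats = {p: {"calls": 0, "passed": 0, "cost": 0.0} for p in propositions}
    kept = []
    for i, ev in enumerate(events):
        if i and i % 1000 == 0:           # greedy re-sort on observed stats
            propositions.sort(key=lambda p: score(stats[p]))
        for p in propositions:
            t0 = time.perf_counter()
            ok = p(ev)
            s = stats[p]
            s["calls"] += 1
            s["cost"] += time.perf_counter() - t0
            s["passed"] += ok
            if not ok:                    # filtered out: skip later stages
                break
        else:
            kept.append(ev)               # survived every proposition
    return kept
```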
