Publications

Publications by HumanISE

2021

Intelligent Systems Design and Applications

Authors
Abraham, A; Piuri, V; Gandhi, N; Siarry, P; Kaklauskas, A; Madureira, A;

Publication
Advances in Intelligent Systems and Computing

Abstract

2021

Intelligent Energy Systems Ontology: Local flexibility market and power system co-simulation demonstration

Authors
Santos, G; Morais, H; Pinto, T; Corchado, JM; Vale, Z;

Publication

Abstract

2021

PV Generation Forecasting Model for Energy Management in Buildings

Authors
Teixeira, B; Pinto, T; Faria, P; Vale, ZA;

Publication
Progress in Artificial Intelligence - 20th EPIA Conference on Artificial Intelligence, EPIA 2021, Virtual Event, September 7-9, 2021, Proceedings

Abstract

2021

Wind Speed Forecasting Using Feed-Forward Artificial Neural Network

Authors
Machado, EP; Morais, H; Pinto, T;

Publication
Distributed Computing and Artificial Intelligence, Volume 1: 18th International Conference, DCAI 2021, Salamanca, Spain, 6-8 October 2021.

Abstract

2021

Sparse Training Theory for Scalable and Efficient Agents

Authors
Mocanu, DC; Mocanu, E; Pinto, T; Curci, S; Nguyen, PH; Gibescu, M; Ernst, D; Vale, ZA;

Publication
AAMAS '21: 20th International Conference on Autonomous Agents and Multiagent Systems, Virtual Event, United Kingdom, May 3-7, 2021.

Abstract
A fundamental task for artificial intelligence is learning. Deep neural networks have proven capable of handling all major learning paradigms, i.e. supervised, unsupervised, and reinforcement learning. Nevertheless, traditional deep learning approaches rely on cloud computing facilities and do not scale well to autonomous agents with low computational resources. Even in the cloud, they suffer from computational and memory limitations, and they cannot adequately model large physical worlds for agents that would require networks with billions of neurons. These issues have been addressed in recent years by the emerging topic of sparse training, which trains sparse networks from scratch. This paper discusses the state of the art in sparse training, together with its challenges and limitations, and introduces new theoretical research directions that have the potential to alleviate these limitations and push deep learning scalability well beyond its current boundaries. Finally, the impact of these theoretical advancements in complex multi-agent settings is discussed from a real-world perspective, using the smart grid as a case study.
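The prune-and-regrow dynamics at the core of sparse training can be illustrated with a short sketch. The snippet below trains a single sparse linear layer on a toy regression task in NumPy, periodically pruning the smallest-magnitude active weights and regrowing connections at random; the layer size, sparsity level, and update schedule are illustrative assumptions, not the paper's configuration.

```python
# Minimal prune-and-regrow sparse training sketch (NumPy).
# All sizes and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out, sparsity = 32, 1, 0.9           # keep roughly 10% of weights
W = rng.normal(0, 0.1, (n_in, n_out))
mask = rng.random(W.shape) > sparsity        # initial random sparse topology
W *= mask

# toy regression data
X = rng.normal(size=(256, n_in))
true_w = rng.normal(size=(n_in, n_out))
y = X @ true_w + 0.01 * rng.normal(size=(256, n_out))

lr, epochs, zeta = 0.01, 200, 0.3            # zeta: fraction pruned/regrown

for epoch in range(epochs):
    pred = X @ W
    grad = X.T @ (pred - y) / len(X)
    W -= lr * grad * mask                    # gradients flow only through the mask

    if epoch % 20 == 19:                     # periodic topology update
        active = np.flatnonzero(mask)
        k = int(zeta * active.size)
        # prune the k smallest-magnitude active weights
        drop = active[np.argsort(np.abs(W.ravel()[active]))[:k]]
        mask.ravel()[drop] = False
        W.ravel()[drop] = 0.0
        # regrow k connections at random inactive positions
        inactive = np.flatnonzero(~mask.ravel())
        grow = rng.choice(inactive, size=k, replace=False)
        mask.ravel()[grow] = True
        W.ravel()[grow] = rng.normal(0, 0.1, k)

print("final MSE:", float(np.mean((X @ W - y) ** 2)))
```

Because the network is sparse from initialization onward, both memory and compute stay proportional to the number of active connections throughout training, which is what makes the approach attractive for resource-constrained agents.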

2021

Electrical Load Demand Forecasting Using Feed-Forward Neural Networks

Authors
Machado, E; Pinto, T; Guedes, V; Morais, H;

Publication
ENERGIES

Abstract
The higher share of renewable energy sources in the electrical grid and the electrification of significant sectors, such as transport and heating, impose a tremendous challenge on the operation of the energy system due to the increased complexity, variability, and uncertainty associated with these changes. Recent advances in computational technologies and the ever-growing availability of data have allowed the development of sophisticated and efficient algorithms that can process information very quickly. In this sense, machine learning models have been gaining increased attention from the electricity sector, as they can provide accurate forecasts of system behaviour from energy generation to consumption, helping all stakeholders optimize their activities. This work develops and proposes a methodology to enhance load demand forecasts using a machine learning model, namely a feed-forward neural network (FFNN), by incorporating an error-correction step in which the initial forecast errors are predicted by a second FFNN. The results show that the proposed methodology significantly improves the quality of load demand forecasts, outperforming the benchmark models.
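The two-stage error-correction idea can be sketched briefly. The snippet below uses scikit-learn's MLPRegressor on synthetic hourly load data: a base FFNN forecasts the load from lagged values, a second FFNN is fitted to the base model's residuals, and the corrected forecast is their sum. The data, lag window, and network sizes are illustrative assumptions, not the paper's setup.

```python
# Two-stage FFNN forecast sketch: base forecast + predicted error.
# Synthetic data and network sizes are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
hours = np.arange(24 * 365)
load = (100 + 20 * np.sin(2 * np.pi * hours / 24)       # daily cycle
        + 10 * np.sin(2 * np.pi * hours / (24 * 7))     # weekly cycle
        + rng.normal(0, 3, hours.size))                 # noise

# lag features: the previous 24 hourly loads predict the next hour
X = np.stack([load[i:i + 24] for i in range(load.size - 24)])
y = load[24:]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False, test_size=0.2)

# stage 1: base FFNN load forecast
base = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
base.fit(X_tr, y_tr)

# stage 2: a second FFNN learns to predict the base model's errors
resid = y_tr - base.predict(X_tr)
corr = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
corr.fit(X_tr, resid)

raw = base.predict(X_te)
fixed = raw + corr.predict(X_te)
print("MAE raw:      ", np.mean(np.abs(y_te - raw)))
print("MAE corrected:", np.mean(np.abs(y_te - fixed)))
```

In practice the corrector would be fitted on residuals from a held-out fold rather than on in-sample residuals, so that it captures systematic forecast error rather than the base model's training noise.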
