2024
Authors
Fonseca, T; Ferreira, LL; Cabral, B; Severino, R; Nweye, K; Ghose, D; Nagy, Z;
Publication
CoRR
Abstract
Intelligent energy management strategies such as Vehicle-to-Grid (V2G) and Grid-to-Vehicle (V1G) emerge as potential solutions for integrating Electric Vehicles (EVs) into the energy grid. These strategies promise enhanced grid resilience and economic benefits for both vehicle owners and grid operators. Despite this promising outlook, their adoption is still hindered by an array of operational problems, chief among them the lack of a simulation platform for validating and refining V2G and V1G strategies, including their development, training, and testing in the context of Energy Communities (ECs) that incorporate multiple flexible energy assets. Addressing this gap, we first introduce EVLearn, an open-source extension of the existing CityLearn simulation framework. EVLearn brings V2G and V1G energy management simulation capabilities into CityLearn's study of broader energy management strategies by modeling EVs, their charging infrastructure, and the associated energy flexibility dynamics. Results validated the extension of CityLearn, and the impact of these strategies is highlighted through a comparative simulation scenario. © The Author(s) 2025.
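The V1G/V2G distinction the abstract draws can be illustrated with a minimal battery model: V1G only draws energy from the grid, while V2G can also inject it back. This is an illustrative sketch, not EVLearn's actual API; the class and method names are assumptions.

```python
# Minimal sketch (NOT EVLearn's real interface): an EV battery supporting
# V1G (charge-only) and V2G (bidirectional) operation. All names are
# illustrative assumptions.

class EVBattery:
    def __init__(self, capacity_kwh, max_power_kw, efficiency=0.95):
        self.capacity_kwh = capacity_kwh    # usable battery capacity
        self.max_power_kw = max_power_kw    # charger power limit
        self.efficiency = efficiency        # round-trip losses (assumed)
        self.soc_kwh = 0.0                  # current state of charge

    def charge(self, power_kw, hours=1.0):
        """V1G: draw energy from the grid, respecting power and capacity limits."""
        power_kw = min(power_kw, self.max_power_kw)
        energy = power_kw * hours * self.efficiency
        stored = min(energy, self.capacity_kwh - self.soc_kwh)
        self.soc_kwh += stored
        return stored  # kWh actually stored

    def discharge(self, power_kw, hours=1.0):
        """V2G: inject energy back to the grid, limited by available charge."""
        power_kw = min(power_kw, self.max_power_kw)
        requested = power_kw * hours
        drawn = min(requested, self.soc_kwh)
        self.soc_kwh -= drawn
        return drawn * self.efficiency  # kWh delivered to the grid
```

For example, a 60 kWh pack on an 11 kW charger stores 20.9 kWh after two hours of charging (11 kW × 2 h × 0.95 efficiency); a V1G-only strategy would stop there, while a V2G strategy could later call `discharge` to provide flexibility to the grid.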
2024
Authors
Fonseca, T; Ferreira, L; Cabral, B; Severino, R; Praça, I;
Publication
2024 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, CONTROL, AND COMPUTING TECHNOLOGIES FOR SMART GRIDS, SMARTGRIDCOMM 2024
Abstract
The rising adoption and integration of Renewable Energy Sources (RES) and Electric Vehicles (EVs) into the energy grid introduce complex challenges, including the need to balance supply and demand and to smooth peak consumption. Addressing these challenges requires innovative solutions such as Demand Response (DR), Renewable Energy Communities (RECs), and, specifically for EVs, Vehicle-to-Grid (V2G). However, existing V2G approaches often fall short in real-world applicability, adaptability, and user engagement. To bridge this gap, this paper proposes EnergAIze, a Multi-Agent Reinforcement Learning (MARL) energy management algorithm leveraging the Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm. EnergAIze enables user-centric, multi-objective energy management by allowing each prosumer to select from a range of personal management objectives. It also supports data protection and ownership through decentralized deployment, where each prosumer can situate an energy management node directly at their own dwelling. The local node not only manages local EVs and other energy assets but also fosters REC-wide optimization. EnergAIze is evaluated through a case study using the CityLearn framework. The results show reductions in peak loads, ramping, carbon emissions, and electricity costs at the REC level while optimizing for individual prosumers' objectives.
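The MADDPG pattern the abstract builds on combines decentralized actors (each agent acts on its local observation only, matching the per-dwelling nodes described above) with centralized critics that, during training, see every agent's observation and action. A toy sketch of that data flow, with purely illustrative linear functions and made-up numbers (this is not the paper's implementation):

```python
# Illustrative MADDPG data-flow sketch (not EnergAIze's code): actors are
# decentralized, critics are centralized at training time. Weights and
# observations below are arbitrary example values.

def actor(local_obs, weights):
    # Decentralized policy: the action depends on this agent's observation only.
    return sum(w * o for w, o in zip(weights, local_obs))

def centralized_critic(all_obs, all_actions, weights):
    # Centralized Q-value: conditions on every agent's observation and action.
    joint = [x for obs in all_obs for x in obs] + list(all_actions)
    return sum(w * x for w, x in zip(weights, joint))

n_agents, obs_dim = 3, 2
observations = [[0.2, 0.4], [0.1, 0.3], [0.5, 0.6]]   # one row per prosumer
actor_weights = [[0.5] * obs_dim for _ in range(n_agents)]

# Execution is fully decentralized: each node acts on its local data.
actions = [actor(obs, w) for obs, w in zip(observations, actor_weights)]

# Training is centralized: the critic scores the joint state-action pair.
critic_weights = [0.1] * (n_agents * obs_dim + n_agents)
q_value = centralized_critic(observations, actions, critic_weights)
```

The design choice this illustrates is why MADDPG suits a REC: after training, only the small per-agent actors need to run at each dwelling, so no raw household data has to leave the local node at execution time.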
2024
Authors
Nweye, K; Kaspar, K; Buscemi, G; Fonseca, T; Pinto, G; Ghose, D; Duddukuru, S; Pratapa, P; Li, H; Mohammadi, J; Ferreira, LL; Hong, TZ; Ouf, M; Capozzoli, A; Nagy, Z;
Publication
JOURNAL OF BUILDING PERFORMANCE SIMULATION
Abstract
As more distributed energy resources become part of the demand-side infrastructure, quantifying their energy flexibility on a community scale is crucial. CityLearn v1 provided an environment for benchmarking control algorithms. However, there is no standardized environment utilizing realistic building-stock datasets for distributed energy resource control benchmarking without co-simulation or third-party frameworks. CityLearn v2 extends CityLearn v1 by providing a stand-alone simulation environment that leverages the End-Use Load Profiles for the U.S. Building Stock dataset to create grid-interactive communities for resilient, multi-agent, and multi-objective control of distributed energy resources with dynamic occupant feedback. While the v1 environment used pre-simulated building thermal loads, the v2 environment uses data-driven thermal dynamics and eliminates the need for co-simulation with building energy performance software. This work details the v2 environment and provides application examples that use reinforcement learning control to manage battery energy storage systems, vehicle-to-grid control, and thermal comfort during heat pump power modulation.
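Environments in this family typically expose a gym-style reset/step loop that reinforcement learning control plugs into. The sketch below shows that interaction pattern with a stand-in single-building environment and a naive rule-based controller; `StubCommunityEnv`, its dynamics, and the reward are invented for illustration and are not CityLearn's real API.

```python
# Hedged sketch of the gym-style control loop such simulation environments
# expose. StubCommunityEnv is a made-up stand-in, NOT CityLearn's API.

class StubCommunityEnv:
    """Toy environment: the action charges (+) or discharges (-) a battery."""
    def __init__(self, horizon=4):
        self.horizon = horizon
        self.t = 0
        self.soc = 0.0  # battery state of charge, normalized to [0, 1]

    def reset(self):
        self.t, self.soc = 0, 0.0
        return [self.soc]

    def step(self, action):
        # Clip the action to [-1, 1]; each step moves SoC by up to 0.25.
        action = max(-1.0, min(1.0, action))
        self.soc = max(0.0, min(1.0, self.soc + 0.25 * action))
        self.t += 1
        reward = -abs(self.soc - 0.5)     # toy objective: keep SoC mid-range
        done = self.t >= self.horizon
        return [self.soc], reward, done

env = StubCommunityEnv()
obs, done, total_reward = env.reset(), False, 0.0
while not done:
    action = 1.0 if obs[0] < 0.5 else 0.0   # naive rule-based controller
    obs, reward, done = env.step(action)
    total_reward += reward
```

An RL agent would replace the rule-based line with a learned policy and use the observed rewards for training; the surrounding loop stays the same, which is what makes such environments convenient control benchmarks.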
2025
Authors
Fonseca, T; Ferreira, LL; Cabral, B; Severino, R; Nweye, K; Ghose, D; Nagy, Z;
Publication
Energy Inform.
Abstract
2025
Authors
Fonseca, T; Sousa, C; Venâncio, R; Pires, P; Severino, R; Rodrigues, P; Paiva, P; Ferreira, LL;
Publication
CoRR
Abstract
2025
Authors
Gonçalves, J; Silva, M; Cabral, B; Dias, T; Maia, E; Praça, I; Severino, R; Ferreira, LL;
Publication
CoRR
Abstract