Reinforcement Learning for Dynamic Pump Scheduling under Demand Uncertainty

Dennis Zanutto;
2025-01-01

Abstract

Reliable and cost-efficient scheduling of pumps is an important task in the daily operations of urban water distribution networks (WDNs). In this work, we address the scheduling of variable-speed pumps using reinforcement learning (RL), which allows network controls to adapt to changes in demand in real-time after a data-driven training phase. Previous contributions have shown the general suitability of RL for control tasks in WDNs [1], [2]. However, most of them assume deterministically known demand patterns (cf. [1]) or consider uncertainty only for valve control (cf. [2]). As RL algorithms can handle uncertain environments, we explore their potential for dynamic scheduling of the network’s pumps under uncertain demand patterns. Our optimisation goal is to train a policy that complies with upper and lower pressure bounds at all nodes in the network while minimising the cost of pumping. To this end, we make use of the Soft Actor-Critic algorithm (SAC) [3]. Data for training and testing is collected using the EPANET simulator for two benchmark networks (Net1 and Anytown) with uncertainties applied to various network parameters. In all setups, the controller is trained without nodal demand information. Our study shows promising results for a pump scheduler that can reduce energy cost by a significant amount while complying with pressure bounds even for unseen scenarios.
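The optimisation goal described above (minimise pumping cost while keeping all nodal pressures within bounds) is typically encoded in the RL reward signal. The paper does not state its exact reward formulation; the sketch below is a hypothetical example of such a reward, with illustrative bound values and penalty weight that are assumptions, not taken from the paper.

```python
def reward(pressures, energy_cost, p_min=20.0, p_max=80.0, penalty_weight=10.0):
    """Hypothetical reward for a pump-scheduling RL agent.

    pressures      -- iterable of nodal pressures (e.g. metres head)
    energy_cost    -- pumping cost incurred in the current time step
    p_min, p_max   -- illustrative pressure bounds (assumed values)
    penalty_weight -- illustrative weight on pressure-bound violations
    """
    # Sum of violations below the lower bound and above the upper bound.
    violation = sum(max(p_min - p, 0.0) + max(p - p_max, 0.0) for p in pressures)
    # Negative cost plus penalised violations: the agent maximises this.
    return -energy_cost - penalty_weight * violation

# All nodes within bounds: the reward reduces to the negative pumping cost.
print(reward([30.0, 45.0, 60.0], energy_cost=5.0))  # -5.0
```

With this shaping, a policy trained by Soft Actor-Critic is driven to cut energy cost only insofar as pressure bounds stay satisfied, matching the study's stated objective.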
2025
21st Computing and Control in the Water Industry Conference (CCWI 2025)
Reinforcement Learning, Pump Scheduling, Demand Uncertainty
Files in this item:

File: Stahlhofen et al. 2025 - Reinforcement Learning for Dynamic Pump Scheduling under Demand Uncertainty.pdf
Access: Restricted
Description: Full PDF (Publisher's version)
Size: 303.08 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1295611