Connecting Spatial Climate Information to Infrastructure Operations using Deep Reinforcement Learning

M. Giuliani; A. Castelletti
2021

Abstract

Water infrastructure operations can adapt to both short-term variability and long-term change. Studies that have leveraged climate information to reoperate infrastructure have yet to explore the direct use of spatially distributed information in operating policy training, which could enable learning from weather patterns associated with emerging risks—for example, flood and drought events associated with atmospheric rivers or high-pressure ridges, respectively, which result from co-occurring weather and climate patterns on multiple timescales. This study investigates the potential for spatial projections from large-ensemble climate models to directly inform reservoir operating policies using a deep reinforcement learning strategy, aiming to discover flexible, climate-informed policies without prior dimension reduction, which could cause loss of information. The approach is demonstrated for Folsom Reservoir in California. We investigate how learned policies interpret spatial climate information by connecting flood control and water supply shortage operations to the sensitivity and salience patterns associated with the input images. To assess the extent to which trained policies generalize to possible future climates, policies trained on historical data are tested on held-out scenarios drawn from the same period, and their performance is compared to flood and shortage scenarios drawn from a future period. Trained policies are robust to the variability present across climate model ensembles, demonstrate value in identifying spatial climate patterns for operations, and maintain the flexibility to dynamically adapt to climate change as it occurs, illustrating a broad benefit to global infrastructure systems facing climate risks.
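The abstract describes connecting operating decisions to "sensitivity and salience patterns associated with the input images" of a trained policy. A minimal, purely illustrative sketch of that idea (not the authors' code; the toy network, shapes, and finite-difference scheme are all assumptions) is to perturb each pixel of a spatial climate field and measure how the policy's output responds:

```python
import numpy as np

# Hypothetical toy example: a small "policy" maps a spatial climate field
# to a scalar release decision, and a saliency map is recovered by
# finite-difference sensitivity of the output to each input pixel.
rng = np.random.default_rng(0)
H, W = 8, 10                        # toy spatial grid (e.g. a precipitation field)
w1 = rng.normal(size=(H * W, 16))   # hidden-layer weights (random, illustrative)
w2 = rng.normal(size=(16, 1))       # output-layer weights

def policy(x):
    """Toy policy: flattened spatial field -> scalar release decision."""
    h = np.tanh(x.reshape(-1) @ w1)
    return float(h @ w2)

def saliency(x, eps=1e-4):
    """Central-difference sensitivity of the policy output to each pixel."""
    s = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            xp = x.copy(); xp[i, j] += eps
            xm = x.copy(); xm[i, j] -= eps
            s[i, j] = (policy(xp) - policy(xm)) / (2 * eps)
    return np.abs(s)

field = rng.normal(size=(H, W))     # one synthetic climate "image"
sal = saliency(field)
print(sal.shape)                    # one non-negative sensitivity value per pixel
```

In practice a deep policy network would use autodiff gradients rather than finite differences, but the interpretation is the same: pixels with large sensitivity are the spatial patterns the policy attends to when choosing flood-control or supply operations.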
Files in this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1209041