
Scalable and Energy-Efficient Service Orchestration in the Edge-Cloud Continuum With Multi-Objective Reinforcement Learning

Di Cicco, Nicola;Tornatore, Massimo
2025-01-01

Abstract

The Edge-Cloud Continuum represents a paradigm shift in distributed computing, seamlessly integrating resources from cloud data centers to edge devices. However, orchestrating services across this heterogeneous landscape poses significant challenges, as it requires finding a delicate balance between different (and competing) objectives, including service acceptance probability, offered Quality-of-Service, and network energy consumption. To address this challenge, we propose leveraging Multi-Objective Reinforcement Learning (MORL) to approximate the full Pareto Front of service orchestration policies. In contrast to conventional solutions based on single-objective RL, a MORL approach allows a network operator to inspect all possible “optimal” trade-offs, and then decide a posteriori on the orchestration policy that best satisfies the system’s operational requirements. Specifically, we first conduct an extensive measurement study to accurately model the energy consumption of heterogeneous edge devices and servers under various workloads, alongside the resource consumption of popular cloud services. Then, we develop a set-based MORL policy for service orchestration that can adapt to arbitrary network topologies without the need for retraining. Illustrative numerical results against selected heuristics show that our MORL policy outperforms baselines by 30% on average over a broad set of objective preferences, and generalizes to network topologies up to 5x larger than training.
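To illustrate the a posteriori selection idea from the abstract, the sketch below (not taken from the paper; the policy scores are hypothetical) filters candidate orchestration policies down to the Pareto-optimal set over three objectives, then picks one according to operator-supplied weights. Energy consumption is negated so that all objectives are maximized.

```python
# Illustrative sketch: Pareto-front filtering over candidate policies
# evaluated on (acceptance probability, QoS, -energy), all to maximize.

def dominates(a, b):
    """True if a is at least as good as b on every objective
    and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical policy evaluations (acceptance, QoS, -energy in watts).
candidates = [(0.90, 0.7, -120.0), (0.80, 0.9, -100.0),
              (0.70, 0.6, -150.0), (0.95, 0.8, -90.0)]
front = pareto_front(candidates)

# A posteriori choice: scalarize the front with operator weights.
weights = (0.5, 0.3, 0.2)
best = max(front, key=lambda p: sum(w * x for w, x in zip(weights, p)))
```

In a single-objective RL setup the weights would have to be fixed before training; the MORL approach described in the abstract keeps the whole front available, so the operator can change `weights` without retraining.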
edge-cloud continuum
energy profiling
multi-objective reinforcement learning
service orchestration
Files in this item:

File: DiCicco_TNSM_2025.pdf
Access: restricted
Description: DiCicco_TNSM_25
Type: Post-Print (DRAFT or Author's Accepted Manuscript, AAM)
Size: 850.66 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1310590
Citations
  • Scopus: 4
  • Web of Science: 1