Kairo – Incremental View Maintenance for Scalable Virtual Switch Caching

Antichi, G.
2025-01-01

Abstract

Data centers manage complexity by offloading simple, high-speed packet forwarding to the network fabric and relying on virtual switches (vSwitches) at end hosts to enforce complex policies—managing connectivity across physical interfaces, containers, and VMs. Since their inception, vSwitches have seen major performance optimizations, including wildcard caches [14], learned-index lookups [15, 16], and high hit-rate SmartNIC offloads [24, 25]. Yet fast vSwitch policy updates have remained largely overlooked, long considered non-critical to performance. We argue that architectural shifts in vSwitch design (from N-table policies to single-table caching [14, 16, 19]) and infrastructure scaling—driven by rising link rates and increasingly dynamic update patterns from emerging workloads (e.g., distributed training [9, 12, 20, 23] and low-latency inference [6, 21])—have turned the bottom-up vSwitch update mechanism (Figure 1a) into a key bottleneck, limiting cache scalability and performance. To address this, we introduce Kairo, which recasts vSwitch cache maintenance as an instance of the Incremental View Maintenance (IVM) problem [4, 10], enabling efficient top-down updates that react only to rule changes (Figure 1b) rather than recomputing from scratch. We also outline the core challenges of applying IVM in this context.
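The contrast between bottom-up and top-down cache maintenance can be illustrated with a toy sketch. The class and field names below (`Rule`, `MegaflowCache`, `apply_delta`) are illustrative assumptions, not Kairo's actual API: the flat cache plays the role of a materialized view over the rule table, and a rule delta invalidates only the cache entries it overlaps, instead of flushing and recomputing the whole cache.

```python
# Illustrative sketch of IVM-style, top-down maintenance of a flat
# vSwitch cache. All names are hypothetical; this is not Kairo's code.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    match: frozenset      # set of (field, value) pairs the rule requires
    action: str
    priority: int

class MegaflowCache:
    def __init__(self, rules):
        self.rules = list(rules)
        self.cache = {}   # flat cache: packet key -> action (the "view")

    def lookup(self, pkt):
        key = frozenset(pkt.items())
        if key in self.cache:              # fast path: cache hit
            return self.cache[key]
        # Slow path: consult the full rule table, install a cache entry.
        best = max((r for r in self.rules if r.match <= key),
                   key=lambda r: r.priority, default=None)
        action = best.action if best else "drop"
        self.cache[key] = action
        return action

    def apply_delta(self, added=None, removed=None):
        """Top-down (IVM-style) update: re-derive only the cache entries
        that overlap the changed rules, instead of flushing everything."""
        changed = list(added or []) + list(removed or [])
        if removed:
            self.rules = [r for r in self.rules if r not in removed]
        if added:
            self.rules.extend(added)
        for key in [k for k in self.cache
                    if any(r.match <= k for r in changed)]:
            del self.cache[key]            # only affected entries evicted
```

In this toy model, a rule change touching `dst 10.0.0.1` leaves cached entries for other destinations intact; a bottom-up design would instead revalidate or rebuild every entry. The real problem is harder than the sketch suggests (priorities, wildcards, and cross-table dependencies interact), which is what motivates framing it as IVM.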
2025
Caching
Incremental View Maintenance
Megaflow
Open vSwitch
Revalidation
Slow path
Virtual Switch
Files in this item:
No files are associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11311/1298266