Weighted Average Consensus Algorithms in Distributed and Federated Learning
Tedeschini, Bernardo Camajori; Savazzi, Stefano; Nicoli, Monica
2025-01-01
Abstract
The exponential growth of the Internet of Things (IoT) has created an essential demand for Distributed Machine Learning (DML) systems. In this context, Federated Learning (FL) allows IoT devices to collaboratively train models while maintaining data ownership and privacy. Despite these evident advantages, FL faces practical challenges such as client selection and adaptation to heterogeneous data distributions. Recently, consensus-driven algorithms have been proposed to enable efficient and scalable FL without a central coordinating entity. However, Weighted Average Consensus (WAC) tools, primarily used in distributed signal processing, fail to address FL-specific challenges. This paper proposes a new family of server-less FL algorithms optimized to exploit WAC techniques. In particular, we propose an evolution of the centralized Federated Adaptive Weighting (FedAdp) method and present three distinct WAC schemes specifically designed for non-Independent and Identically Distributed (non-IID) data. Each scheme uses a distinct aggregation rule that optimizes the weights of the clients' local models. Performance is evaluated in a real-world IoT system, analyzing convergence properties for heterogeneous client populations. Results show that the proposed algorithms outperform vanilla consensus FL by up to 56% in accuracy and are resilient to both label and sample data skewness.
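The abstract centers on WAC as the server-less aggregation primitive: each device repeatedly blends its local model with those of its neighbors using per-client weights, so the network approaches a weighted average without a central server. The Python sketch below is a rough illustration of one such synchronous mixing step, not the paper's actual schemes; the function name `wac_step`, the step size `eps`, and the weight vector are hypothetical placeholders.

```python
import numpy as np

def wac_step(models, adjacency, weights, eps=0.3):
    """One synchronous Weighted Average Consensus (WAC) mixing step.

    models:    list of 1-D NumPy parameter vectors, one per device
    adjacency: boolean matrix; adjacency[i][j] is True if device j
               is a neighbor of device i
    weights:   per-device aggregation weights (e.g., proportional to
               local dataset size, or adaptive in the spirit of
               FedAdp); a hypothetical stand-in here
    eps:       consensus step size; kept small for stable mixing
    """
    updated = []
    for i, x_i in enumerate(models):
        # Each device moves along its weighted disagreement with its
        # neighbors, so repeated steps drive all models toward a
        # common weighted blend without any central server.
        delta = sum(
            weights[j] * (models[j] - x_i)
            for j in range(len(models))
            if adjacency[i][j]
        )
        updated.append(x_i + eps * delta)
    return updated

# Toy usage: three fully connected devices converging to consensus.
rng = np.random.default_rng(0)
models = [rng.normal(size=4) for _ in range(3)]
adjacency = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=bool)
weights = np.array([0.2, 0.3, 0.5])
for _ in range(100):
    models = wac_step(models, adjacency, weights)
```

In an FL deployment, each device would interleave local SGD epochs with such mixing steps; which weighted average the network actually converges to depends on how the mixing weights are chosen and balanced, which is precisely what the paper's three aggregation schemes are designed to control.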
| File | Description | Size | Format |
|---|---|---|---|
| 2025_Weighted Average Consensus Algorithms in Distributed and Federated Learning_postprint.pdf | Open access: Post-Print (DRAFT or Author's Accepted Manuscript, AAM) | 830.66 kB | Adobe PDF |
| 2025_Weighted_Average_Consensus_Algorithms_in_Distributed_and_Federated_Learning.pdf | Open access: Publisher's version | 4.06 MB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.


