A Study on the Security Risks of Model Sharing in Federated Learning Systems

Marco Di Gennaro, Stefano Zanero, Michele Carminati
2026-01-01

Abstract

Federated Learning (FL) frameworks enable multiple clients to collaboratively train a Machine Learning (ML) model without requiring data to leave client devices, supporting applications in which privacy and data security are critical, such as healthcare and finance. In these systems, one of the first steps in the training process is sharing the model between the server and the clients, including both the architecture and the initial weights. However, this model-sharing step introduces a distinct attack surface, exposing FL systems to security threats such as malicious model serialization. This paper presents a systematic analysis of the security risks associated with model sharing in FL systems by examining commonly used techniques, tools, and deployment practices. We show that legacy model formats lacking built-in security mechanisms remain widely adopted, significantly increasing the attack surface, and that the growing popularity of model hubs further amplifies these risks by enabling large-scale distribution of malicious artifacts. While recent approaches have been proposed to improve model security, documented zero-day vulnerabilities demonstrate that the model-sharing process remains fragile in practice. By consolidating existing vulnerabilities and defenses, this work aims to raise awareness of the risks inherent to model sharing and to motivate the adoption of more secure model-sharing practices in privacy-sensitive FL deployments.
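To make the "malicious model serialization" risk concrete, the following is a minimal sketch (not taken from the paper) of why legacy pickle-based model formats are dangerous: Python's pickle protocol lets an object's `__reduce__` method name an arbitrary callable to run during deserialization, so merely loading a shared model artifact can execute attacker-chosen code. The `MaliciousPayload` class name is hypothetical, and a benign `print` stands in for what would be, e.g., `os.system` in a real attack.

```python
import io
import pickle
from contextlib import redirect_stdout

class MaliciousPayload:
    """Crafted object: unpickling it invokes the callable returned by __reduce__."""
    def __reduce__(self):
        # Illustration only: a real attacker would return something like
        # (os.system, ("malicious command",)) instead of a harmless print.
        return (print, ("arbitrary code executed during model load",))

# The "shared model" blob a client might receive from a server or model hub.
blob = pickle.dumps(MaliciousPayload())

buf = io.StringIO()
with redirect_stdout(buf):
    pickle.loads(blob)  # deserialization itself runs the payload

print("payload ran:", "arbitrary code" in buf.getvalue())
```

This is why formats without embedded executable state (e.g., weights-only tensor formats) reduce the attack surface the abstract describes: loading becomes pure data parsing rather than code execution.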
2026
Proceedings of the 19th International Joint Conference on Biomedical Engineering Systems and Technologies - Volume 1: BIOSIGNALS
ISBN: 978-989-758-802-0
Keywords: Federated Learning, Machine Learning, Model Sharing, Software Security, Vulnerabilities, Deserialization Attacks
Files in this record:
A_Study_on_the_Security_Risks_of_Model_Sharing_in_Federated_Learning_Systems.pdf
Description: Paper, pre-print (pre-refereeing)
Open access, 141.03 kB, Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1307508