
SemiFDA: Domain Adaptation in Semi-Supervised Federated Learning

Michele Craighero, Giorgio Rossi, Diego Carrera, Diego Stucchi, Pasqualina Fragneto, Giacomo Boracchi
2024-01-01

Abstract

Semi-Supervised Federated Learning (SSFL) aims to improve a pretrained model using unlabeled data from clients. Traditional SSFL solutions relying on pseudo-labels or autoencoders often struggle in the presence of domain shift, i.e., a difference in data distributions between the server and the clients. In this paper we present SemiFDA, the first solution to effectively handle domain shift in SSFL. After training an initial classifier on the server's labeled data, we establish an unsupervised learning process at the clients to train feature extractors based on encoders. This process adopts a custom unsupervised loss function that encourages the clients' encoders to align their feature distributions with those extracted by the encoder at the server. The updated encoders are then aggregated at the server using Federated Averaging and sent back for the next iteration, while the classification head remains frozen to preserve the benefits of aligning features locally. Furthermore, we design an experimental framework to mimic various levels of domain shift and test SSFL methods in real-world scenarios, including HAR and Digit Classification. Our results demonstrate the detrimental effects of domain shift in SSFL and show that SemiFDA outperforms other solutions under these challenging conditions.
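The communication pattern the abstract describes (server-trained classifier, unsupervised client-side feature alignment, Federated Averaging of encoders only, frozen head) can be illustrated with a toy sketch. The paper's actual encoder architecture and alignment loss are not specified in this record; the stand-in below aligns each client's feature mean with the server's, purely for illustration, and every function name is hypothetical.

```python
# Toy sketch of the SemiFDA communication pattern (hypothetical stand-in,
# not the paper's implementation). Encoders are plain weight matrices;
# the classification head never leaves the server and is never updated.
import numpy as np

def fedavg(weights, sizes):
    """Federated Averaging: size-weighted mean of client encoder weights."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(weights, sizes))

def client_update(encoder, data, server_feat_mean, lr=0.1, steps=10):
    """Illustrative unsupervised local step: gradient descent on
    0.5 * || mean(data @ encoder) - server_feat_mean ||^2,
    a crude stand-in for the paper's feature-alignment loss."""
    mean_x = data.mean(axis=0, keepdims=True)              # (1, d)
    for _ in range(steps):
        feat_mean = (data @ encoder).mean(axis=0, keepdims=True)
        encoder = encoder - lr * mean_x.T @ (feat_mean - server_feat_mean)
    return encoder

rng = np.random.default_rng(0)
server_encoder = rng.normal(size=(4, 4))                   # from labeled pretraining
server_data = rng.normal(size=(32, 4))
server_feat_mean = (server_data @ server_encoder).mean(axis=0, keepdims=True)

# Clients hold unlabeled, domain-shifted data (simulated as a mean offset).
clients = [rng.normal(loc=0.5, size=(n, 4)) for n in (20, 40)]
for _ in range(3):                                         # communication rounds
    updated = [client_update(server_encoder.copy(), d, server_feat_mean)
               for d in clients]
    server_encoder = fedavg(updated, [len(d) for d in clients])
```

Only encoders travel between server and clients; freezing the head means the aligned features remain compatible with the classifier learned on the server's labeled data.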
2024, Proceedings - IEEE International Conference on Data Mining (ICDM)
domain shift
semi-supervised federated learning
Files in this item:
SemiFDA_Domain_Adaptation_in_Semi-Supervised_Federated_Learning.pdf (Adobe PDF, 350.34 kB, restricted access)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1287196
Citations
  • Scopus: 0