Communication-efficient Distributed Learning in V2X Networks: Parameter Selection and Quantization

Luca Barbieri;Stefano Savazzi;Monica Nicoli
2022-01-01

Abstract

In recent years, automotive systems have been integrating Federated Learning (FL) tools to provide enhanced driving functionalities, exploiting sensor data at connected vehicles to cooperatively learn assistance information for safety and maneuvering systems. Conventional FL policies require a central coordinator, namely a Parameter Server (PS), to orchestrate the learning process, which limits the scalability and robustness of the training platform. Consensus-driven FL methods, on the other hand, enable fully decentralized learning implementations where vehicles mutually share the Machine Learning (ML) model parameters, possibly via Vehicle-to-Everything (V2X) networking, at the expense of larger communication resource consumption compared to vanilla FL approaches. This paper proposes a communication-efficient consensus-driven FL design tailored to the training of Deep Neural Networks (DNNs) in vehicular networks. The vehicles taking part in the FL process independently select a pre-determined percentage of model parameters to be quantized and exchanged on each training round. The proposed technique is validated on a cooperative sensing use case where vehicles rely on Lidar point clouds to detect possible road objects/users in their surroundings via a DNN. The validation considers latency, accuracy, and communication-efficiency trade-offs. Experimental results highlight the impact of parameter selection and quantization on the communication overhead in varying settings.
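As an illustration of the mechanism the abstract describes, here is a minimal sketch of per-round parameter selection and quantization. The abstract does not specify the selection criterion or quantizer, so this assumes top-k magnitude selection and uniform fixed-bit quantization purely for illustration; function names (`select_and_quantize`, `dequantize`) are hypothetical.

```python
import numpy as np

def select_and_quantize(params, fraction=0.1, bits=8):
    """Sketch: keep a pre-determined fraction of the model parameters
    (here, the largest in magnitude -- an assumption, not the paper's
    stated rule) and quantize them uniformly to `bits` bits."""
    flat = params.ravel()
    k = max(1, int(fraction * flat.size))
    idx = np.argsort(np.abs(flat))[-k:]        # indices of the top-k entries
    vals = flat[idx]
    lo, hi = vals.min(), vals.max()
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = np.round((vals - lo) / scale).astype(np.uint8 if bits <= 8 else np.uint16)
    # A vehicle would exchange (idx, q, lo, scale) instead of the full model.
    return idx, q, lo, scale

def dequantize(idx, q, lo, scale, size):
    """Rebuild a sparse parameter vector from the received payload;
    unselected entries stay at zero."""
    out = np.zeros(size)
    out[idx] = lo + q.astype(float) * scale
    return out
```

With 8-bit quantization of 10% of the parameters, each exchanged payload carries roughly 10% of the indices plus one byte per selected value, instead of 32 bits per parameter for the whole model, which is the communication saving the paper trades against accuracy and latency.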
2022
IEEE Global Communications Conference, GLOBECOM 2022
Files in this item:
CV_2022_Globecom_Communication-efficient Distributed Learning in V2X Networks.pdf
Description: Full paper
Type: Pre-Print (Pre-Refereeing)
Size: 1.7 MB
Format: Adobe PDF
Access: restricted

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1221510
Citations
  • PMC: N/A
  • Scopus: 4
  • Web of Science: 0