
Q-learning based joint energy-spectral efficiency optimization in multi-hop device-to-device communication

Reggiani L.;Magarini M.
2020-01-01

Abstract

In scenarios like critical public safety communication networks, On-Scene Available (OSA) user equipment (UE) may be only partially connected to the network infrastructure, e.g., due to physical damage or deliberate deactivation by the authorities. In this work, we consider multi-hop Device-to-Device (D2D) communication in a hybrid infrastructure where OSA UEs connect to each other seamlessly in order to disseminate critical information to a deployed command center. The challenge we address is to simultaneously keep the OSA UEs alive as long as possible and deliver the critical information to a final destination (e.g., a command center) as rapidly as possible, while accounting for the heterogeneous characteristics of the OSA UEs. We propose a dynamic adaptation approach based on machine learning to improve the joint energy-spectral efficiency (ESE). We apply a Q-learning scheme in a hybrid fashion (partially distributed and centralized) across learner agents (distributed OSA UEs) and scheduler agents (remote radio heads, or RRHs), for which next-hop selection and RRH selection algorithms are proposed. Our simulation results show that the proposed dynamic adaptation approach outperforms the baseline system by approximately 67% in terms of joint energy-spectral efficiency, while the energy efficiency of the OSA UEs benefits from a gain of approximately 30%. Finally, the results also show that our proposed framework with C-RAN reduces latency by approximately 50% with respect to the baseline.
Device-to-device (D2D)
Internet of Things (IoT)
Joint energy-spectral efficiency (ESE)
Pervasive public safety communication
Public safety networks
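As an illustration of the next-hop selection idea described in the abstract, the sketch below shows a minimal single-state Q-learning (epsilon-greedy) loop in which an OSA UE learns which neighbouring UE to use as its next hop. All names (`select_next_hop`, `link_ese`, the reward values) are hypothetical, and the reward is merely assumed to stand in for a per-link joint energy-spectral efficiency term; this is not the paper's implementation.

```python
import random

def select_next_hop(neighbours, reward_fn, episodes=2000,
                    alpha=0.1, epsilon=0.2, seed=0):
    """Learn which neighbour to use as next hop (single-state Q-learning)."""
    rng = random.Random(seed)
    q = {n: 0.0 for n in neighbours}  # one Q-value per candidate next hop
    for _ in range(episodes):
        # epsilon-greedy: mostly exploit the best-known hop, sometimes explore
        if rng.random() < epsilon:
            hop = rng.choice(neighbours)
        else:
            hop = max(q, key=q.get)
        r = reward_fn(hop)  # assumed to reflect the link's joint ESE
        # single-state update (no successor state, hence no discounted term)
        q[hop] += alpha * (r - q[hop])
    return max(q, key=q.get)

# Toy usage: per-link rewards stand in for joint energy-spectral
# efficiency values (illustrative numbers only).
link_ese = {"ue_a": 0.4, "ue_b": 0.9, "ue_c": 0.2}
best = select_next_hop(list(link_ese), lambda hop: link_ese[hop])
```

In the paper's hybrid setting, such learner agents would run on the distributed OSA UEs while scheduler agents at the RRHs handle RRH selection centrally; the sketch only covers the distributed part.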
Files in this item:

Dynamic_Adaptation_of_Joint_Energy_and_Spectral_Efficiency__NEW_VERSION_.pdf

Open access

Type: Pre-Print (or Pre-Refereeing)
Size: 2.33 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1169304
Citations
  • PMC: 1
  • Scopus: 4
  • Web of Science (ISI): 2