Intelligent emergency traffic signal control system with pedestrian access

Karimi, Hamid Reza
2024-01-01

Abstract

With the integration of artificial intelligence into traffic systems, intelligent traffic systems leverage enhanced perception coverage and computational capabilities to provide data-intensive solutions, achieving higher levels of performance than traditional systems. This paper applies the Dueling Double Deep Q-Network (D3QN) algorithm from Deep Reinforcement Learning (DRL) to practical signal-control problems and proposes an intelligent emergency traffic signal control system. The system takes pedestrian movement into account and uses real-time traffic data and environmental information to model traffic flow and road conditions within a novel state space, employing D3QN to optimize the signal control strategy. The system dynamically adjusts signal timings to improve operational efficiency at intersections. Using the Weibull distribution to simulate realistic traffic congestion and actual traffic data from Shanyin Road in Hangzhou for validation, the results demonstrate that the method converges faster and is more stable than comparable methods, significantly reducing traffic congestion. Furthermore, by incorporating pedestrian movement, the method reduces pedestrian waiting times by 44.736% during peak periods and 22.95% during off-peak periods, while maintaining comparable vehicle queue lengths, delay times, and carbon dioxide emissions. The approach shows potential for improving smart urban mobility and resolving intersection congestion challenges.
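For readers unfamiliar with D3QN, the sketch below is a minimal illustration (not the authors' implementation) of the two ingredients the abstract names: a dueling Q-network that separates state value from per-action advantages, and a Double-DQN target that selects actions with the online network but evaluates them with the target network. The state dimension, action count, and layer sizes are placeholder assumptions; in the paper's setting the state would encode the intersection's traffic and pedestrian observations and the actions would correspond to signal phases.

```python
# Illustrative D3QN components (hypothetical sizes, not the paper's code).
import torch
import torch.nn as nn


class DuelingQNetwork(nn.Module):
    def __init__(self, state_dim: int, num_actions: int, hidden: int = 128):
        super().__init__()
        # Shared feature extractor over the encoded intersection state.
        self.features = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Dueling heads: state value V(s) and per-action advantage A(s, a).
        self.value = nn.Linear(hidden, 1)
        self.advantage = nn.Linear(hidden, num_actions)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.features(state)
        v = self.value(h)                     # shape (batch, 1)
        a = self.advantage(h)                 # shape (batch, num_actions)
        # Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
        return v + a - a.mean(dim=1, keepdim=True)


def double_dqn_target(reward, next_state, done, online_net, target_net, gamma=0.99):
    """Double-DQN bootstrap target: online net picks the action, target net scores it."""
    with torch.no_grad():
        next_actions = online_net(next_state).argmax(dim=1, keepdim=True)
        next_q = target_net(next_state).gather(1, next_actions).squeeze(1)
        return reward + gamma * (1.0 - done) * next_q
```

Decoupling action selection from action evaluation in this way is what distinguishes Double DQN from vanilla DQN and reduces the overestimation of Q-values; the dueling decomposition lets the network learn the value of a traffic state even for signal-phase actions it rarely takes.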
2024
Deep reinforcement learning
Traffic condition
Neural networks
Simulation of urban mobility
Traffic signal control
Files in this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1277659
Citations
  • PubMed Central: ND
  • Scopus: 10
  • Web of Science (ISI): 9