Deep reinforcement learning based resource allocation for electric vehicle charging stations with priority service


Colak A., Fescioglu-Unver N.

Energy, vol. 313, 2024 (SCI-Expanded)

  • Publication Type: Article
  • Volume: 313
  • Publication Date: 2024
  • DOI Number: 10.1016/j.energy.2024.133637
  • Journal Name: Energy
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, PASCAL, Aerospace Database, Applied Science & Technology Source, Aquatic Science & Fisheries Abstracts (ASFA), CAB Abstracts, Communication Abstracts, Compendex, Computer & Applied Sciences, Environment Index, INSPEC, Metadex, Pollution Abstracts, Public Affairs Index, Veterinary Science Database, Civil Engineering Abstracts
  • Keywords: Deep Q-learning, Electric vehicle, Fast charging station, Priority service, Queue management, Resource allocation
  • TED University Affiliated: No

Abstract

The demand for public fast charging stations is increasing with the number of electric vehicles on the road. Charging queues and waiting times grow longer, especially during the winter season and on holidays. Priority-based service at charging stations can provide shorter delays for vehicles willing to pay more and lower charging prices for vehicles willing to wait longer. Existing studies use classical feedback control and simulation-based control methods to maintain the ratio of high- and low-priority vehicles' delay times at the station's target level. Reinforcement learning has been used successfully for real-time control in environments with uncertainty. This study proposes a deep Q-learning based real-time resource allocation model for priority service in fast charging stations (DRL-EXP). Results show that the deep learning approach enables DRL-EXP to provide a more stable and faster response than existing models. DRL-EXP is also applicable to other priority-based service systems that operate under uncertainty and require real-time control.
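
The sketch below illustrates the general deep Q-learning machinery the abstract refers to, applied to a charger-allocation decision. It is a minimal, assumed formulation, not the DRL-EXP model itself: the state vector (queue lengths, observed delay-time ratio, target ratio), the action set (number of chargers reserved for high-priority vehicles), the reward (penalizing deviation from the target ratio), and all hyperparameters are illustrative placeholders.

```python
# Minimal deep Q-learning sketch for priority-aware charger allocation.
# All state/action/reward definitions and hyperparameters are assumptions
# for illustration; they are NOT the DRL-EXP formulation from the paper.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

STATE_DIM = 4   # assumed: [high-prio queue, low-prio queue, observed delay ratio, target ratio]
N_ACTIONS = 3   # assumed: chargers reserved for high-priority vehicles in the next interval


class QNetwork(nn.Module):
    """Small feed-forward network approximating Q(s, a)."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)


class DQNAgent:
    def __init__(self, gamma=0.99, lr=1e-3, eps=0.1, buffer_size=10_000, batch_size=64):
        self.q = QNetwork(STATE_DIM, N_ACTIONS)
        self.target_q = QNetwork(STATE_DIM, N_ACTIONS)   # sync periodically in training
        self.target_q.load_state_dict(self.q.state_dict())
        self.opt = optim.Adam(self.q.parameters(), lr=lr)
        self.buffer = deque(maxlen=buffer_size)
        self.gamma, self.eps, self.batch_size = gamma, eps, batch_size

    def act(self, state):
        # Epsilon-greedy selection over the charger-allocation actions.
        if random.random() < self.eps:
            return random.randrange(N_ACTIONS)
        with torch.no_grad():
            q_values = self.q(torch.tensor(state, dtype=torch.float32))
        return int(q_values.argmax())

    def remember(self, s, a, r, s_next, done):
        self.buffer.append((s, a, r, s_next, done))

    def learn(self):
        if len(self.buffer) < self.batch_size:
            return
        batch = random.sample(self.buffer, self.batch_size)
        s, a, r, s_next, done = map(
            lambda x: torch.tensor(x, dtype=torch.float32), zip(*batch)
        )
        # Q-values of the actions actually taken.
        q_sa = self.q(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
        # Bootstrapped targets from the target network (standard DQN update).
        with torch.no_grad():
            target = r + self.gamma * self.target_q(s_next).max(1).values * (1 - done)
        loss = nn.functional.mse_loss(q_sa, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
```

In such a setup the reward would typically penalize the gap between the observed high-to-low priority delay ratio and the station's target, which is what lets the agent learn a real-time allocation policy under arrival and charging-time uncertainty; the exact reward and environment design in the published model may differ.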