Task-Based Visual Attention for Continually Improving the Performance of Autonomous Game Agents



Ulu E., Çapın T. K., Çelikkale B., Çelikcan U.

Electronics (Switzerland), vol.12, no.21, 2023 (SCI-Expanded)

  • Publication Type: Article
  • Volume: 12 Issue: 21
  • Publication Date: 2023
  • DOI Number: 10.3390/electronics12214405
  • Journal Name: Electronics (Switzerland)
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Aerospace Database, Communication Abstracts, INSPEC, Metadex, Directory of Open Access Journals, Civil Engineering Abstracts
  • Keywords: bottom-up and top-down visual attention, convolutional neural network, deep Q-learning, deep reinforcement learning, layer-wise relevance propagation, particle filter, saliency map
  • TED University Affiliated: Yes

Abstract

Deep Reinforcement Learning (DRL) has been applied effectively in various complex environments, such as playing video games. In many game environments, DeepMind's baseline Deep Q-Network (DQN) game agents performed at a level comparable to that of humans. However, these DRL models require many experience samples to learn, adapt poorly to changes in the environment, and struggle to handle complexity. In this study, we propose the Attention-Augmented Deep Q-Network (AADQN), which incorporates a combined top-down and bottom-up attention mechanism into the DQN game agent to highlight task-relevant features of the input. Our AADQN model uses particle-filter-based top-down attention that dynamically teaches an agent how to play a game by focusing on the most task-relevant information. Evaluating our agent across eight Atari 2600 games of varying complexity, we demonstrate that our model surpasses the baseline DQN agent. Notably, our model achieves greater flexibility and higher scores in fewer time steps. Across the eight game environments, AADQN achieved an average relative improvement of 134.93%. Pong and Breakout improved by 9.32% and 56.06%, respectively, while the more intricate SpaceInvaders and Seaquest showed even larger gains of 130.84% and 149.95%. This study reveals that AADQN is highly effective in complex environments while also yielding modest gains in simpler ones.
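As a rough illustration of the approach the abstract describes, the sketch below shows how an attention map might modulate a DQN's input frames, together with a simple particle-filter re-weighting step for the top-down component. The paper's actual architecture is not specified here; PyTorch, the trunk dimensions (borrowed from the standard DQN of Mnih et al., 2015), and all names such as AttentionAugmentedDQN and update_particles are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class AttentionAugmentedDQN(nn.Module):
    """Illustrative DQN whose input frames are modulated by an attention map.

    Hypothetical sketch: the combined bottom-up/top-down attention map is
    assumed to be computed elsewhere (e.g., from a saliency map and a
    particle filter) and passed in alongside the observation.
    """

    def __init__(self, num_actions: int):
        super().__init__()
        # Standard DQN convolutional trunk for 84x84 inputs (Mnih et al., 2015).
        self.conv = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, num_actions),
        )

    def forward(self, frames: torch.Tensor, attention: torch.Tensor) -> torch.Tensor:
        # frames:    (B, 4, 84, 84) stacked grayscale observations
        # attention: (B, 1, 84, 84) attention map with values in [0, 1]
        # Element-wise modulation emphasizes task-relevant pixels before the trunk.
        weighted = frames * attention
        return self.head(self.conv(weighted))


def update_particles(particles, weights, relevance_fn):
    """One illustrative particle-filter step for top-down attention.

    particles:    (N, 2) tensor of candidate image locations
    weights:      (N,) non-negative particle weights
    relevance_fn: callable returning a task-relevance score per particle
                  (e.g., derived from layer-wise relevance propagation)
    """
    weights = weights * relevance_fn(particles)      # re-score by task relevance
    weights = weights / weights.sum()                # normalize to a distribution
    idx = torch.multinomial(weights, len(weights), replacement=True)  # resample
    # After resampling, weights reset to uniform; surviving particles
    # concentrate on the most task-related regions of the frame.
    return particles[idx], torch.full_like(weights, 1.0 / len(weights))
```

Under these assumptions, the attention map would be rebuilt each step from the resampled particle locations (for instance, as a mixture of Gaussians rasterized to 84x84) and fed to the network together with the next observation; the abstract does not detail this rasterization, so the choice here is purely illustrative.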