• Youri Coppens
  • Koichi Shirahata
  • Takuya Fukagai
  • Yasumoto Tomita
  • Atsushi Ike
Recent state-of-the-art Deep Reinforcement Learning algorithms, such as A3C and UNREAL, are designed to train on a single device with only CPUs. Using GPU acceleration for these algorithms results in low GPU utilization, meaning the full performance of the GPU is not reached. Motivated by the architecture changes made by the GA3C algorithm, which gave A3C better GPU acceleration, together with the high learning efficiency of the UNREAL algorithm, this paper extends GA3C with the auxiliary tasks from UNREAL to create a Deep Reinforcement Learning algorithm, GUNREAL, with higher learning efficiency that also benefits from GPU acceleration. We show that our GUNREAL system reaches higher scores on several games than GA3C in the same amount of time.
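The GA3C architecture change the abstract refers to replaces per-actor network calls with a shared predictor that batches observations from many actor threads into a single GPU forward pass. The paper's implementation is not reproduced here; the following is a minimal sketch of that batching idea, where the hypothetical `toy_policy` stands in for a batched network forward pass:

```python
import queue
import threading

class BatchedPredictor:
    """GA3C-style predictor: gathers observations from many actor
    threads and evaluates them in one batched call (on the GPU in
    the real system; a stand-in function here)."""

    def __init__(self, policy_fn, batch_size=4):
        self.requests = queue.Queue()
        self.policy_fn = policy_fn
        self.batch_size = batch_size
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            # Block for the first request, then drain up to batch_size.
            batch = [self.requests.get()]
            while len(batch) < self.batch_size:
                try:
                    batch.append(self.requests.get_nowait())
                except queue.Empty:
                    break
            states = [state for state, _ in batch]
            actions = self.policy_fn(states)  # one batched "GPU" call
            for (_, reply_q), action in zip(batch, actions):
                reply_q.put(action)

    def predict(self, state):
        """Called by an actor thread; blocks until its action arrives."""
        reply_q = queue.Queue()
        self.requests.put((state, reply_q))
        return reply_q.get()

def toy_policy(states):
    # Hypothetical stand-in for the policy network's batched forward pass.
    return [s % 3 for s in states]

predictor = BatchedPredictor(toy_policy)
results = {}

def actor(i):
    results[i] = predictor.predict(i)

threads = [threading.Thread(target=actor, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because all actors funnel their states through one queue, the network is evaluated on large batches instead of many single observations, which is what raises GPU utilization; GUNREAL keeps this layout while adding UNREAL's auxiliary losses on the training side.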
Original language: English
Title of host publication: 2017 Fifth International Symposium on Computing and Networking (CANDAR)
Number of pages: 7
ISBN (Electronic): 978-1-5386-2087-8
Publication status: Published - 19 Nov 2017
Externally published: Yes
Event: The Second International Workshop on GPU Computing and Applications - ASPAM, Aomori, Japan
Duration: 19 Nov 2017 - 22 Nov 2017


Workshop: The Second International Workshop on GPU Computing and Applications
Abbreviated title: GCA17

ID: 39863565