Using Deep Reinforcement Learning with Automatic Curriculum Learning for Mapless Navigation in Intralogistics

Publications: Contribution to journal › Article › Research (peer-reviewed)

Standard

Using Deep Reinforcement Learning with Automatic Curriculum Learning for Mapless Navigation in Intralogistics. / Xue, Honghu; Hein, Benedikt; Bakr, Mohamed et al.
In: Applied Sciences : open access journal, Vol. 12.2022, No. 6, 3153, 19.03.2022.

BibTeX

@article{36d2385fac834a7a90640cd87965c1bb,
title = "Using Deep Reinforcement Learning with Automatic Curriculum Learning for Mapless Navigation in Intralogistics",
abstract = "We propose a deep reinforcement learning approach for solving a mapless navigation problem in warehouse scenarios. In our approach, an automatic guided vehicle is equipped with two LiDAR sensors and one frontal RGB camera and learns to perform a targeted navigation task. The challenges reside in the sparseness of positive samples for learning, multi-modal sensor perception with partial observability, the demand for accurate steering maneuvers together with long training cycles. To address these points, we propose NavACL-Q as an automatic curriculum learning method in combination with a distributed version of the soft actor-critic algorithm. The performance of the learning algorithm is evaluated exhaustively in a different warehouse environment to validate both robustness and generalizability of the learned policy. Results in NVIDIA Isaac Sim demonstrates that our trained agent significantly outperforms the map-based navigation pipeline provided by NVIDIA Isaac Sim with an increased agent-goal distance of 3 m and a wider initial relative agent-goal rotation of approximately 45∘. The ablation studies also suggest that NavACL-Q greatly facilitates the whole learning process with a performance gain of roughly 40% compared to training with random starts and a pre-trained feature extractor manifestly boosts the performance by approximately 60%.",
keywords = "automatic curriculum learning, autonomous navigation, deep reinforcement learning, multi-modal sensor perception, Deep Learning, Machine learning, navigation, Warehouse",
author = "Honghu Xue and Benedikt Hein and Mohamed Bakr and Georg Schildbach and Bengt Abel and Elmar Rueckert",
note = "Publisher Copyright: {\textcopyright} 2022 by the authors. Licensee MDPI, Basel, Switzerland.",
year = "2022",
month = mar,
day = "19",
doi = "10.3390/app12063153",
language = "English",
volume = "12.2022",
journal = "Applied Sciences : open access journal",
issn = "2076-3417",
publisher = "Multidisciplinary Digital Publishing Institute (MDPI)",
number = "6",

}
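
For orientation, the abstract above describes an outer automatic-curriculum loop wrapped around an off-policy soft actor-critic learner with multi-modal observations (two LiDARs plus a frontal RGB camera). The sketch below is a minimal, hypothetical Python rendering of that structure only; the NavTask, AutoCurriculum, env, and sac_agent names and interfaces are illustrative assumptions, not the authors' implementation or the NVIDIA Isaac Sim API, and the paper's NavACL-Q difficulty assessment is replaced here by a plain success-rate heuristic.

import random
from dataclasses import dataclass

@dataclass
class NavTask:
    """A start/goal configuration proposed by the curriculum (hypothetical)."""
    agent_goal_distance: float  # metres
    agent_goal_rotation: float  # degrees, initial relative heading offset

class AutoCurriculum:
    """Widens the task distribution as the agent's recent success rate grows."""
    def __init__(self, max_distance=3.0, max_rotation=45.0):
        self.difficulty = 0.1  # fraction of the full task range
        self.max_distance = max_distance
        self.max_rotation = max_rotation

    def propose(self) -> NavTask:
        scale = self.difficulty * random.uniform(0.5, 1.0)
        return NavTask(
            agent_goal_distance=scale * self.max_distance,
            agent_goal_rotation=random.uniform(-1.0, 1.0) * scale * self.max_rotation,
        )

    def update(self, success_rate: float) -> None:
        # Counteract sparse positive samples: only harden tasks once the
        # agent succeeds often enough, and back off when it struggles.
        if success_rate > 0.7:
            self.difficulty = min(1.0, self.difficulty + 0.05)
        elif success_rate < 0.3:
            self.difficulty = max(0.1, self.difficulty - 0.05)

def train(env, sac_agent, episodes=10_000, window=50):
    """Outer loop: the curriculum proposes tasks, SAC learns off-policy.

    `env` and `sac_agent` are assumed interfaces (env.reset(task)/env.step(a),
    sac_agent.act/observe/update) standing in for the simulator and the
    distributed soft actor-critic learner described in the abstract.
    """
    curriculum = AutoCurriculum()
    recent = []
    for _ in range(episodes):
        task = curriculum.propose()
        obs = env.reset(task)            # place AGV and goal per the task
        done, success = False, False
        while not done:
            action = sac_agent.act(obs)  # steering / velocity commands
            obs, reward, done, info = env.step(action)
            sac_agent.observe(obs, action, reward, done)
            success = info.get("goal_reached", False)
        sac_agent.update()               # off-policy gradient steps
        recent = (recent + [float(success)])[-window:]
        curriculum.update(sum(recent) / len(recent))

Both the task-difficulty criterion and the distributed rollout collection in the actual method are more involved than this heuristic; see the paper (doi:10.3390/app12063153) for the details.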

RIS (suitable for import to EndNote)

TY - JOUR

T1 - Using Deep Reinforcement Learning with Automatic Curriculum Learning for Mapless Navigation in Intralogistics

AU - Xue, Honghu

AU - Hein, Benedikt

AU - Bakr, Mohamed

AU - Schildbach, Georg

AU - Abel, Bengt

AU - Rueckert, Elmar

N1 - Publisher Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland.

PY - 2022/3/19

Y1 - 2022/3/19

N2 - We propose a deep reinforcement learning approach for solving a mapless navigation problem in warehouse scenarios. In our approach, an automatic guided vehicle is equipped with two LiDAR sensors and one frontal RGB camera and learns to perform a targeted navigation task. The challenges reside in the sparseness of positive samples for learning, multi-modal sensor perception with partial observability, and the demand for accurate steering maneuvers, together with long training cycles. To address these points, we propose NavACL-Q as an automatic curriculum learning method in combination with a distributed version of the soft actor-critic algorithm. The performance of the learning algorithm is evaluated exhaustively in a different warehouse environment to validate both robustness and generalizability of the learned policy. Results in NVIDIA Isaac Sim demonstrate that our trained agent significantly outperforms the map-based navigation pipeline provided by NVIDIA Isaac Sim, with an increased agent-goal distance of 3 m and a wider initial relative agent-goal rotation of approximately 45°. The ablation studies also suggest that NavACL-Q greatly facilitates the whole learning process, with a performance gain of roughly 40% compared to training with random starts, and that a pre-trained feature extractor manifestly boosts the performance by approximately 60%.

AB - We propose a deep reinforcement learning approach for solving a mapless navigation problem in warehouse scenarios. In our approach, an automatic guided vehicle is equipped with two LiDAR sensors and one frontal RGB camera and learns to perform a targeted navigation task. The challenges reside in the sparseness of positive samples for learning, multi-modal sensor perception with partial observability, and the demand for accurate steering maneuvers, together with long training cycles. To address these points, we propose NavACL-Q as an automatic curriculum learning method in combination with a distributed version of the soft actor-critic algorithm. The performance of the learning algorithm is evaluated exhaustively in a different warehouse environment to validate both robustness and generalizability of the learned policy. Results in NVIDIA Isaac Sim demonstrate that our trained agent significantly outperforms the map-based navigation pipeline provided by NVIDIA Isaac Sim, with an increased agent-goal distance of 3 m and a wider initial relative agent-goal rotation of approximately 45°. The ablation studies also suggest that NavACL-Q greatly facilitates the whole learning process, with a performance gain of roughly 40% compared to training with random starts, and that a pre-trained feature extractor manifestly boosts the performance by approximately 60%.

KW - automatic curriculum learning

KW - autonomous navigation

KW - deep reinforcement learning

KW - multi-modal sensor perception

KW - Deep Learning

KW - Machine learning

KW - navigation

KW - Warehouse

UR - http://www.scopus.com/inward/record.url?scp=85127489436&partnerID=8YFLogxK

U2 - 10.3390/app12063153

DO - 10.3390/app12063153

M3 - Article

AN - SCOPUS:85127489436

VL - 12.2022

JO - Applied Sciences : open access journal

JF - Applied Sciences : open access journal

SN - 2076-3417

IS - 6

M1 - 3153

ER -