Intrinsic motivation and mental replay enable efficient online adaptation in stochastic recurrent networks
Publications: Contribution to journal › Article › Research › (peer-reviewed)
Standard
in: Neural Networks, Vol. 109, January 2019, pp. 67-80.
RIS (suitable for import to EndNote) - Download
TY - JOUR
T1 - Intrinsic motivation and mental replay enable efficient online adaptation in stochastic recurrent networks
AU - Tanneberg, Daniel
AU - Peters, Jan
AU - Rueckert, Elmar
N1 - Publisher Copyright: © 2018 Elsevier Ltd
PY - 2019/1
Y1 - 2019/1
N2 - Autonomous robots need to interact with unknown, unstructured and changing environments, constantly facing novel challenges. Continuous online adaptation for lifelong learning and sample-efficient mechanisms for adapting to changes in the environment, the constraints, the tasks, or the robot itself are therefore crucial. In this work, we propose a novel framework for probabilistic online motion planning with online adaptation, based on a bio-inspired stochastic recurrent neural network. By using learning signals that mimic the intrinsic motivation signal of cognitive dissonance, together with a mental replay strategy that intensifies experiences, the stochastic recurrent network can learn from few physical interactions and adapt to novel environments within seconds. We evaluate our online planning and adaptation framework on an anthropomorphic KUKA LWR arm. The rapid online adaptation is demonstrated by learning unknown workspace constraints sample-efficiently from few physical interactions while following given waypoints.
AB - Autonomous robots need to interact with unknown, unstructured and changing environments, constantly facing novel challenges. Continuous online adaptation for lifelong learning and sample-efficient mechanisms for adapting to changes in the environment, the constraints, the tasks, or the robot itself are therefore crucial. In this work, we propose a novel framework for probabilistic online motion planning with online adaptation, based on a bio-inspired stochastic recurrent neural network. By using learning signals that mimic the intrinsic motivation signal of cognitive dissonance, together with a mental replay strategy that intensifies experiences, the stochastic recurrent network can learn from few physical interactions and adapt to novel environments within seconds. We evaluate our online planning and adaptation framework on an anthropomorphic KUKA LWR arm. The rapid online adaptation is demonstrated by learning unknown workspace constraints sample-efficiently from few physical interactions while following given waypoints.
KW - Autonomous robots
KW - Experience replay
KW - Intrinsic motivation
KW - Neural sampling
KW - Online learning
KW - Spiking recurrent networks
UR - http://www.scopus.com/inward/record.url?scp=85056187742&partnerID=8YFLogxK
UR - https://cps.unileoben.ac.at/wp/NeuralNetworks2018Tanneberg.pdf
U2 - 10.1016/j.neunet.2018.10.005
DO - 10.1016/j.neunet.2018.10.005
M3 - Article
C2 - 30408695
AN - SCOPUS:85056187742
VL - 109
SP - 67
EP - 80
JO - Neural Networks
JF - Neural Networks
SN - 0893-6080
IS - January
ER -
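
The abstract above names two mechanisms: an intrinsic-motivation learning signal mimicking cognitive dissonance, and mental replay that intensifies experiences. The Python sketch below is purely illustrative and hypothetical: it is not the paper's implementation, it uses a toy linear forward model instead of a stochastic recurrent network, and all names (update, replay_buffer, dissonance, base_lr, n_replays) are invented for this example. It only shows the two generic ideas in isolation: a prediction-error signal standing in for "cognitive dissonance" that scales the learning step, and a replay buffer whose stored experiences are re-applied several times per physical interaction.

# Hypothetical sketch only; see the hedging note above.
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model (assumed): predicts the next 2-D state from state and action.
W = rng.normal(scale=0.1, size=(2, 4))

def update(state, action, observed_next, base_lr=0.05, replay_buffer=(), n_replays=5):
    """One online update; stored experiences are replayed n_replays times each."""
    global W
    experiences = [(state, action, observed_next)] + list(replay_buffer) * n_replays
    new_dissonance = None
    for s, a, o in experiences:
        x = np.concatenate([s, a])
        error = o - W @ x                      # prediction error of the forward model
        dissonance = np.linalg.norm(error)     # intrinsic-motivation-like signal
        W += base_lr * dissonance * np.outer(error, x)   # larger surprise -> larger step
        if new_dissonance is None:
            new_dissonance = dissonance        # dissonance of the newest experience
    return new_dissonance

# Usage: learn an unknown toy environment mapping from a handful of interactions.
true_A = np.array([[1.0, 0.0, 0.3, 0.0],
                   [0.0, 1.0, 0.0, 0.3]])
buffer = []
for step in range(20):
    s, a = rng.normal(size=2), rng.normal(size=2)
    next_s = true_A @ np.concatenate([s, a])
    d = update(s, a, next_s, replay_buffer=buffer)
    buffer = (buffer + [(s, a, next_s)])[-10:]  # keep a short replay window
    if step % 5 == 0:
        print(f"step {step:2d}  dissonance {d:.3f}")

Replaying each stored interaction n_replays times multiplies the number of parameter updates obtained from a single physical experience, which is the generic intuition behind the sample-efficiency claim; scaling the step by the prediction error makes surprising observations drive faster adaptation.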