A Motor Control Learning Framework for Cyber-Physical-Systems
Research output: Thesis › Master's Thesis
2022.
TY - THES
T1 - A Motor Control Learning Framework for Cyber-Physical-Systems
AU - Feith, Nikolaus
N1 - no embargo
PY - 2022
Y1 - 2022
N2 - A central problem in robotics is describing a robot's movement. This task is complex, especially for robots with many degrees of freedom: sufficiently complex movements can no longer be programmed manually and are instead taught to the robot using machine learning. The Motor Control Learning framework presents an easy-to-use method for generating complex trajectories. Dynamic Movement Primitives (DMPs) describe movements as a non-linear dynamical system in which trajectories are modelled by weighted basis functions, so the machine learning algorithm only has to determine the respective weights. Complex movements can thus be defined by a few parameters. On this basis, two motion learning methods were implemented. For imitation of motion demonstrations, the weights are determined using regression methods. For policy optimization to generate waypoint trajectories, a reinforcement learning approach is used: the weights are improved iteratively against a cost function using the covariance matrix adaptation evolution strategy (CMA-ES). The generated trajectories were evaluated in experiments.
KW - Robotics
KW - Motor Control
KW - Dynamic Movement Primitives
KW - DMP
KW - Covariance Matrix Adaptation Evolution Strategy
KW - CMA-ES
KW - Reinforcement Learning
KW - Imitation Learning
M3 - Master's Thesis
ER -
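The abstract describes learning DMP forcing-term weights from a motion demonstration via regression. A minimal NumPy sketch of that idea follows — a standard discrete DMP formulation, not the thesis's actual implementation; the gains (`alpha_z`, `beta_z`, `alpha_x`), the basis-width heuristic, and all function names are illustrative assumptions:

```python
import numpy as np

def dmp_imitate(y_demo, dt, n_basis=30, alpha_z=25.0, beta_z=6.25, alpha_x=3.0):
    """Fit DMP forcing-term weights to a 1-D demonstration by least squares."""
    T = len(y_demo)
    y0, g = y_demo[0], y_demo[-1]
    tau = (T - 1) * dt                                 # movement duration
    yd = np.gradient(y_demo, dt)                       # demo velocity
    ydd = np.gradient(yd, dt)                          # demo acceleration
    t = np.arange(T) * dt
    x = np.exp(-alpha_x * t / tau)                     # canonical-system phase
    # Target forcing term from the transformation-system dynamics:
    # tau^2 * ydd = alpha_z * (beta_z * (g - y) - tau * yd) + f
    f_target = tau**2 * ydd - alpha_z * (beta_z * (g - y_demo) - tau * yd)
    # Gaussian basis functions, centres spaced along the phase variable
    c = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))
    h = n_basis / c                                    # width heuristic (assumption)
    psi = np.exp(-h * (x[:, None] - c) ** 2)           # (T, n_basis) activations
    scale = g - y0 if abs(g - y0) > 1e-8 else 1e-8
    # Design matrix: normalised activations scaled by x * (g - y0)
    Phi = psi / psi.sum(axis=1, keepdims=True) * (x * scale)[:, None]
    w, *_ = np.linalg.lstsq(Phi, f_target, rcond=None)
    return dict(w=w, c=c, h=h, y0=y0, g=g, tau=tau,
                alpha_z=alpha_z, beta_z=beta_z, alpha_x=alpha_x)

def dmp_rollout(p, dt, T):
    """Integrate the learned DMP with Euler steps; returns the trajectory."""
    y, v, x = p['y0'], 0.0, 1.0
    scale = p['g'] - p['y0'] if abs(p['g'] - p['y0']) > 1e-8 else 1e-8
    out = np.empty(T)
    for k in range(T):
        out[k] = y
        psi = np.exp(-p['h'] * (x - p['c']) ** 2)
        f = psi @ p['w'] / psi.sum() * x * scale       # learned forcing term
        vd = (p['alpha_z'] * (p['beta_z'] * (p['g'] - y) - v) + f) / p['tau']
        v += vd * dt
        y += (v / p['tau']) * dt
        x += (-p['alpha_x'] * x / p['tau']) * dt
    return out

if __name__ == "__main__":
    # Imitate a minimum-jerk demonstration from 0 to 1 over one second
    dt, T = 0.01, 101
    s = np.arange(T) * dt / ((T - 1) * dt)
    y_demo = 10 * s**3 - 15 * s**4 + 6 * s**5
    params = dmp_imitate(y_demo, dt)
    y_gen = dmp_rollout(params, dt, T)
    print("max tracking error:", np.max(np.abs(y_gen - y_demo)))
```

Because the dynamics contract toward the goal `g` once the phase `x` decays, the reproduced trajectory converges to the demonstrated endpoint even if the weights are perturbed — which is what makes the weight vector a convenient parameter space for the CMA-ES policy optimization mentioned in the abstract.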