
[2019.07] Dynamics-Aware Unsupervised Discovery of Skills


Conventionally, model-based reinforcement learning (MBRL) aims to learn a global model for the dynamics of the environment. A good model can potentially enable planning algorithms to generate a large variety of behaviors and solve diverse tasks. However, learning an accurate model for complex dynamical systems is difficult, and even then, the model might not generalize well outside the distribution of states on which it was trained. In this work, we combine model-based learning with model-free learning of primitives that make model-based planning easy. To that end, we aim to answer the question: how can we discover skills whose outcomes are easy to predict? We propose an unsupervised learning algorithm, Dynamics-Aware Discovery of Skills (DADS), which simultaneously discovers predictable behaviors and learns their dynamics. Our method can leverage continuous skill spaces, theoretically, allowing us to learn infinitely many behaviors even for high-dimensional state-spaces. We demonstrate that zero-shot planning in the learned latent space significantly outperforms standard MBRL and model-free goal-conditioned RL, can handle sparse reward tasks, and substantially improves over prior hierarchical RL methods for unsupervised skill discovery.
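To make "discovering predictable behaviors while learning their dynamics" concrete, here is a rough sketch of the objective behind DADS as I understand it from the paper: maximize the mutual information between the skill z and the next state s' given the current state s, lower-bounded with the learned skill-dynamics model q_φ(s'|s, z). The exact form is paraphrased, so treat it as a reference rather than a quote:

```latex
% Skill-discovery objective (conditional mutual information), lower-bounded
% using the learned skill-dynamics model q_\phi(s' \mid s, z):
\[
\mathcal{I}(s'; z \mid s)
  = \mathbb{E}_{z, s, s'}\!\left[ \log \frac{p(s' \mid s, z)}{p(s' \mid s)} \right]
  \;\geq\; \mathbb{E}_{z, s, s'}\!\left[ \log \frac{q_\phi(s' \mid s, z)}{p(s' \mid s)} \right],
\]
% where the intractable marginal p(s' | s) is approximated with skills
% sampled from the prior:
\[
p(s' \mid s) \;\approx\; \frac{1}{L} \sum_{i=1}^{L} q_\phi(s' \mid s, z_i),
\qquad z_i \sim p(z).
\]
```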

Figure 2: The agent π interacts with the environment to produce a transition s → s'. The intrinsic reward compares the transition probability under q for the current skill z against the probability of the same transition under random skills sampled from the prior p(z). The agent maximizes this intrinsic reward over a batch of episodes, while q maximizes the log-probability of the actual transitions (s, z) → s'.
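Below is a minimal NumPy sketch of the intrinsic-reward computation the caption describes. It assumes a hypothetical function log_q(s, z, s_next) that returns log q(s'|s, z) from the learned skill-dynamics model, and a set of skills drawn from the prior p(z); the interface is mine, not the paper's.

```python
import numpy as np

def dads_intrinsic_reward(log_q, s, s_next, z, prior_skills):
    """Sketch of the DADS-style intrinsic reward: how predictable is s' under
    the current skill z, relative to the same transition under random skills
    drawn from the prior p(z)?

    log_q(s, z, s_next) -> scalar log q(s' | s, z)   (hypothetical interface)
    prior_skills: list/array of L skills z_i sampled from the prior p(z)
    """
    # Log-probability of the observed transition under the current skill.
    log_q_z = log_q(s, z, s_next)

    # Log-probabilities of the same transition under L random skills,
    # used to approximate the marginal log p(s' | s).
    log_q_prior = np.array([log_q(s, z_i, s_next) for z_i in prior_skills])
    L = len(prior_skills)
    log_marginal = np.logaddexp.reduce(log_q_prior) - np.log(L)

    # Reward is high when the transition is much more likely under z
    # than under skills sampled from the prior.
    return log_q_z - log_marginal
```

The policy is then trained to maximize this reward (the paper uses an off-the-shelf model-free optimizer for π), while q is fit on the same transitions by maximizing log q(s'|s, z), matching the two roles described in the caption.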

URL: arxiv.org/abs/1907.01657
Topic: Skill Discovery
Video: www.youtube.com/watch?v=HYEzHX6-fIA