From 59005c45dd5b469935746e428063f3311d0c4f98 Mon Sep 17 00:00:00 2001
From: Denny Britz
Date: Thu, 7 Dec 2017 17:38:06 +0900
Subject: [PATCH] Hindsight Experience Replay

---
 .gitignore            |  3 ++-
 README.md             |  2 +-
 notes/hindsight-ep.md | 29 +++++++++++++++++++++++++++++
 3 files changed, 32 insertions(+), 2 deletions(-)
 create mode 100644 notes/hindsight-ep.md

diff --git a/.gitignore b/.gitignore
index 496ee2c..2608ec2 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1 +1,2 @@
-.DS_Store
\ No newline at end of file
+.DS_Store
+.vscode
\ No newline at end of file
diff --git a/README.md b/README.md
index b861b25..5dd761f 100644
--- a/README.md
+++ b/README.md
@@ -131,7 +131,7 @@ Weakly-Supervised Classification and Localization of Common Thorax Diseases [[CV
 - Emergence of Locomotion Behaviours in Rich Environments [[arXiv](https://arxiv.org/abs/1707.02286)] [[article](https://deepmind.com/blog/producing-flexible-behaviours-simulated-environments/)]
 - Learning human behaviors from motion capture by adversarial imitation [[arXiv](https://arxiv.org/abs/1707.02201)] [[article](https://deepmind.com/blog/producing-flexible-behaviours-simulated-environments/)]
 - Robust Imitation of Diverse Behaviors [[arXiv](https://deepmind.com/documents/95/diverse_arxiv.pdf)] [[article](https://deepmind.com/blog/producing-flexible-behaviours-simulated-environments/)]
-- Hindsight Experience Replay [[arXiv](https://arxiv.org/abs/1707.01495)]
+- [Hindsight Experience Replay](notes/hindsight-ep.md) [[arXiv](https://arxiv.org/abs/1707.01495)]
 - Cardiologist-Level Arrhythmia Detection with Convolutional Neural Networks [[arXiv](https://arxiv.org/abs/1707.01836)] [[article](https://stanfordmlgroup.github.io/projects/ecg/)]
 - End-to-End Learning of Semantic Grasping [[arXiv](https://arxiv.org/abs/1707.01932)]
 - ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games [[arXiv](https://arxiv.org/abs/1707.01067)] [[code](https://github.com/facebookresearch/ELF)] [[article](https://code.facebook.com/posts/132985767285406/introducing-elf-an-extensive-lightweight-and-flexible-platform-for-game-research/)]
diff --git a/notes/hindsight-ep.md b/notes/hindsight-ep.md
new file mode 100644
index 0000000..5e66e7a
--- /dev/null
+++ b/notes/hindsight-ep.md
@@ -0,0 +1,29 @@
+## [Hindsight Experience Replay](https://arxiv.org/abs/1707.01495)
+
+TLDR; The authors present a novel way to deal with sparse rewards in Reinforcement Learning. The key idea (called HER, or Hindsight Experience Replay) is that when an agent does not achieve the desired goal during an episode, it has still learned to achieve *some other* goal, which it can learn about and generalize from. This is done by framing the RL problem in a multi-goal setting and adding transitions with different goals (and correspondingly recomputed rewards) to the experience buffer. When updating the policy, the additional goals that were actually achieved provide a reward signal and lead to faster learning. Note that this requires an off-policy RL algorithm (such as Q-Learning).
+
+#### Key Points
+
+- Proper reward shaping can be difficult. Thus, it is important to develop algorithms that can learn from sparse binary reward signals.
+- HER requires an off-policy Reinforcement Learning algorithm, such as DQN or DDPG.
+- Multi-Goal RL vs. "Standard RL"
+  - Policy depends on the goal
+  - Reward function depends on the goal
+  - Goal is sampled at the start of each episode
+- HER
+  - Assume that the goal is some *state* that the agent can achieve
+  - Needs a way to sample/generate a set of additional goals for an episode (hyperparameter)
+    - For example: the goal is the last state visited in the episode
+  - Store transitions with newly sampled goals (in addition to the original goal) in the replay buffer (see the sketch after this list)
+  - Induces a form of implicit curriculum as goals become more difficult
+    - Because the agent becomes better over time, the states it visits become "more difficult"
+- Experiments: Robot Arm simulation
+  - Clearly outperforms DDPG and DDPG with count-based exploration on binary rewards
+  - Works whether we care about a single goal or about multiple goals
+  - Shows that shaped rewards may hinder exploration
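+
+To make the relabeling concrete, below is a minimal sketch of a HER-style replay buffer using the "final" goal-sampling strategy (replay each episode with its last state as an additional goal). The buffer layout and all names (`HindsightReplayBuffer`, `reward_fn`, `GOAL_THRESHOLD`) are my own illustration for this note, not code from the paper:
+
+```python
+import random
+from collections import deque
+
+import numpy as np
+
+# Hypothetical tolerance for deciding that a state "achieves" a goal.
+GOAL_THRESHOLD = 0.05
+
+def reward_fn(state, goal):
+    """Sparse binary reward: 0 if the goal is achieved, -1 otherwise."""
+    achieved = np.linalg.norm(np.asarray(state) - np.asarray(goal)) < GOAL_THRESHOLD
+    return 0.0 if achieved else -1.0
+
+class HindsightReplayBuffer:
+    def __init__(self, capacity=100000):
+        self.buffer = deque(maxlen=capacity)
+
+    def _add(self, state, action, next_state, goal):
+        # Each transition is stored with the goal it was (re)labeled with;
+        # the off-policy learner treats (state, goal) as its effective input.
+        reward = reward_fn(next_state, goal)
+        self.buffer.append((state, action, reward, next_state, goal))
+
+    def add_episode(self, episode, goal):
+        """episode: a list of (state, action, next_state) tuples from one rollout."""
+        # 1) Store the episode with the original goal; under the sparse binary
+        #    reward this usually means reward -1 for every transition.
+        for state, action, next_state in episode:
+            self._add(state, action, next_state, goal)
+        # 2) "final" strategy: replay the same episode as if its last state had
+        #    been the goal all along, so its final transition now earns reward 0.
+        hindsight_goal = episode[-1][2]
+        for state, action, next_state in episode:
+            self._add(state, action, next_state, hindsight_goal)
+
+    def sample(self, batch_size):
+        return random.sample(self.buffer, batch_size)
+```
+
+A goal-conditioned, off-policy learner (e.g. DQN or DDPG) would then train on `sample()` batches, typically feeding the concatenation of state and goal into the network. The paper's "future" strategy drops in by sampling several states visited later in the same episode instead of the single final state.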
"Standard RL" + - Policy depends on the goal + - Reward function depends on the goal + - Goal is sampled at the start of each episode +- HER + - Assume that the goal is some *state* that the agent can achieve + - Needs a way to sample/generate a set of additional goals for an episode (hyperparameter) + - For example: The goal is the last state visited in the episode + - Store transitions with newly sampled goals (in addition to the original goal) in the replay buffer + - Induces a form of implicit curriculum as goals become more difficult + - Because the agent becomes better over time, the states it visits become "more difficult" +- Experiments: Robot Arm simulation + - Clearly outperforms DDPG and DDPG with count-based exploration on binary rewards + - Works whether we care about a single or multiple goals + - Shows that shaped rewards may hinder exploration + +#### Notes/Questions + +- The idea that shaped rewards can hinder exploration is a good one, I really enjoyed that +- How does this approach relate to model-based learning. While there is no direct relationship you learn to generalize across goals - Learning about the environment can have a similar effect. +- Not really sold/convinced on the implicit curriculum learning. I see how it applies to some problems, but not to all. Just because an agent becomes better at achieving G, the states it visits are not necessarily more "difficult" to achieve. Maybe I'm missing something.