diff --git a/docs/agents.md b/docs/agents.md
deleted file mode 100644
index fc870d3a9b7..00000000000
--- a/docs/agents.md
+++ /dev/null
@@ -1,33 +0,0 @@
-# Agents
-
-An "agent" describes the method of running an RL algorithm against an environment in the gym. The agent may contain the algorithm itself or simply provide an integration between an algorithm and the gym environments.
-
-## RandomAgent
-
-A sample agent located in this repo at `gym/examples/agents/random_agent.py`. This simple agent leverages the environments ability to produce a random valid action and does so for each step.
-
-## cem.py
-
-A generic Cross-Entropy agent located in this repo at `gym/examples/agents/cem.py`. This agent defaults to 10 iterations of 25 episodes considering the top 20% "elite".
-
-## dqn
-
-This is a very basic DQN (with experience replay) implementation, which uses OpenAI's gym environment and Keras/Theano neural networks. [/sherjilozair/dqn](https://github.com/sherjilozair/dqn)
-
-## Simple DQN
-
-Simple, fast and easy to extend DQN implementation using [Neon](https://github.com/NervanaSystems/neon) deep learning library. Comes with out-of-box tools to train, test and visualize models. For details see [this blog post](https://www.nervanasys.com/deep-reinforcement-learning-with-neon/) or check out the [repo](https://github.com/tambetm/simple_dqn).
-
-## AgentNet
-A library that allows you to develop custom deep/convolutional/recurrent reinforcement learning agent with full integration with Theano/Lasagne. Also contains a toolkit for various reinforcement learning algorithms, policies, memory augmentations, etc.
-
- - The repo's here: [AgentNet](https://github.com/yandexdataschool/AgentNet)
- - [A step-by-step demo for Atari SpaceInvaders ](https://github.com/yandexdataschool/AgentNet/blob/master/examples/Playing%20Atari%20with%20Deep%20Reinforcement%20Learning%20%28OpenAI%20Gym%29.ipynb)
-
-## rllab
-
-a framework for developing and evaluating reinforcement learning algorithms, fully compatible with OpenAI Gym. It includes a wide range of continuous control tasks plus implementations of many algorithms. [/rllab/rllab](https://github.com/rllab/rllab)
-
-## [keras-rl](https://github.com/matthiasplappert/keras-rl)
-
-[keras-rl](https://github.com/matthiasplappert/keras-rl) implements some state-of-the art deep reinforcement learning algorithms. It was built with OpenAI Gym in mind, and also built on top of the deep learning library [Keras](https://keras.io/) and utilises similar design patterns like callbacks and user-definable metrics.
diff --git a/docs/readme.md b/docs/readme.md
deleted file mode 100644
index f7bf0cbc6a9..00000000000
--- a/docs/readme.md
+++ /dev/null
@@ -1,11 +0,0 @@
-# Table of Contents
-
- - [Environments](environments.md) lists Gym environments to run your algorithms against.
-
- - [Creating your own Environments](creating-environments.md) how to create your own Gym environments.
-
- - [Wrappers](wrappers.md) list of general purpose wrappers for environments. These can perform pre/postprocessing on the data that is exchanged between the agent and the environment.
-
- - [Agents](agents.md) contains a listing of agents compatible with Gym environments. Agents facilitate the running of an algorithm against an environment.
-
- - [Miscellaneous](misc.md) is a collection of other value-add tools and utilities. These could be anything from a small convenience lib to a collection of video tutorials or a new language binding.
diff --git a/docs/environments.md b/docs/third_party_environments.md
similarity index 100%
rename from docs/environments.md
rename to docs/third_party_environments.md
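The deleted `docs/agents.md` describes the RandomAgent pattern: each step, the agent asks the environment itself for a random valid action. A minimal sketch of that loop is below; `"CartPole-v1"` is just an illustrative environment choice, and the branching on the return value of `step()` is an assumption made to tolerate both the classic 4-tuple gym API and the newer 5-tuple one.

```python
# A minimal sketch of the RandomAgent pattern from the removed docs/agents.md:
# sample a random valid action from the environment at every step.
import gym


def reset_env(env):
    out = env.reset()
    # Newer API returns (observation, info); the classic API returns observation.
    return out[0] if isinstance(out, tuple) else out


env = gym.make("CartPole-v1")
observation = reset_env(env)
total_reward = 0.0
for _ in range(100):
    action = env.action_space.sample()  # the env produces a random valid action
    result = env.step(action)
    if len(result) == 5:  # newer API: obs, reward, terminated, truncated, info
        observation, reward, terminated, truncated, _ = result
        done = terminated or truncated
    else:  # classic API: obs, reward, done, info
        observation, reward, done, _ = result
    total_reward += reward
    if done:
        observation = reset_env(env)
env.close()
```

The same skeleton is what `gym/examples/agents/random_agent.py` implements; swapping `env.action_space.sample()` for a learned policy turns it into the integration point the other agents listed above build on.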