diff --git a/notebooks/gym_tuto.ipynb b/notebooks/gym_tuto.ipynb
new file mode 100644
index 0000000000..898238fdbe
--- /dev/null
+++ b/notebooks/gym_tuto.ipynb
@@ -0,0 +1,894 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Gym environment with scikit-decide tutorial: Continuous Mountain Car\n",
+ "\n",
+ "In this notebook we will solve the continuous mountain car problem taken from [OpenAI Gym](https://gym.openai.com/), a toolkit for developing environments, usually to be solved by reinforcement learning algorithms.\n",
+ "Continuous Mountain Car, a standard testing domain in Reinforcement Learning (RL), is a problem in which an under-powered car must drive up a steep hill. Note that we use here the *continuous* version of the mountain car because \n",
+ "it has a shaped or dense reward (i.e. not sparse) which can be used successfully when solving, as opposed to the other \"Mountain Car\" environments. \n",
+ "\n",
+ "For reminder, a sparse reward is a reward which is null almost everywhere, whereas a dense or shaped reward has more meaningful values for most transitions.\n",
+ "\n",
+ "\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "\n",
+ "\n",
+ "This problem has been chosen for three reasons:\n",
+ " - Show how scikit-decide can be used to solve Gym environments (the de-facto standard in the RL community),\n",
+ " - Highlight that by doing so, you will be able to use not only solvers from the RL community (like the ones in [stable_baselines3](https://github.com/DLR-RM/stable-baselines3) for example), but also other solvers coming from other communities like genetic programming and planning/search (use of an underlying search graph) that can be very efficient.\n",
+ "\n",
+ "Therefore in this notebook we will go through the following steps:\n",
+ " - Wrap a Gym environment in a scikit-decide domain;\n",
+ " - Use a classical RL algorithm like PPO to solve our problem;\n",
+ " - Give CGP (Cartesian Genetic Programming) a try on the same problem;\n",
+ " - Finally use IW (Iterated Width) coming from the planning community on the same problem."
+ ]
+ },
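+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "To make the sparse vs. dense distinction more concrete, here is a tiny illustrative sketch. It is *not* the actual Gym reward implementation, just a toy comparison built around the goal position 0.45 used by this environment."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Illustrative only: two ways of rewarding progress towards a goal position.\n",
+ "GOAL_POSITION = 0.45\n",
+ "\n",
+ "\n",
+ "def sparse_reward(position: float) -> float:\n",
+ "    # Null almost everywhere: only reaching the goal yields a reward.\n",
+ "    return 100.0 if position >= GOAL_POSITION else 0.0\n",
+ "\n",
+ "\n",
+ "def dense_reward(position: float) -> float:\n",
+ "    # Shaped: every state carries a meaningful signal (distance to the goal).\n",
+ "    return -abs(GOAL_POSITION - position)\n",
+ "\n",
+ "\n",
+ "for position in (-0.5, 0.0, 0.5):\n",
+ "    print(position, sparse_reward(position), dense_reward(position))"
+ ]
+ },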
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from typing import Optional, Callable\n",
+ "from time import sleep\n",
+ "import os\n",
+ "\n",
+ "from IPython.display import clear_output\n",
+ "import matplotlib.pyplot as plt\n",
+ "from stable_baselines3 import PPO, SAC\n",
+ "import gym\n",
+ "\n",
+ "from skdecide.hub.solver.stable_baselines import StableBaseline\n",
+ "from skdecide import Solver\n",
+ "from skdecide.hub.domain.gym import (\n",
+ " GymDomain,\n",
+ " GymWidthDomain,\n",
+ " GymDiscreteActionDomain,\n",
+ " GymPlanningDomain,\n",
+ ")\n",
+ "from skdecide.hub.solver.iw import IW\n",
+ "from skdecide.hub.solver.cgp import CGP\n",
+ "\n",
+ "# choose standard matplolib inline backend to render plots\n",
+ "%matplotlib inline"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "When running this notebook on remote servers like with Colab or Binder, rendering of gym environment will fail as no actual display device exists. Thus we need to start a virtual display to make it work."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "if \"DISPLAY\" not in os.environ:\n",
+ " import pyvirtualdisplay\n",
+ "\n",
+ " _display = pyvirtualdisplay.Display(visible=False, size=(1400, 900))\n",
+ " _display.start()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## About Continuous Mountain Car problem"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "In this a problem, an under-powered car must drive up a steep hill. \n",
+ "The agent (a car) is started at the bottom of a valley. For any given\n",
+ "state the agent may choose to accelerate to the left, right or cease\n",
+ "any acceleration.\n",
+ "\n",
+ "### Observations\n",
+ "\n",
+ "- Car Position [-1.2, 0.6]\n",
+ "- Car Velocity [-0.07, +0.07]\n",
+ "\n",
+ "### Action\n",
+ "- the power coefficient [-1.0, 1.0]\n",
+ "\n",
+ "\n",
+ "### Goal\n",
+ "The car position is more than 0.45.\n",
+ "\n",
+ "### Reward\n",
+ "\n",
+ "Reward of 100 is awarded if the agent reached the flag (position = 0.45) on top of the mountain.\n",
+ "Reward is decrease based on amount of energy consumed each step.\n",
+ "\n",
+ "### Starting State\n",
+ "The position of the car is assigned a uniform random value in [-0.6 , -0.4].\n",
+ "The starting velocity of the car is always assigned to 0.\n",
+ "\n",
+ " "
+ ]
+ },
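+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "These bounds can be checked directly on the raw Gym environment. This is just a quick sanity check and is not required for the rest of the tutorial."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Quick sanity check of the bounds listed above, on the raw Gym environment.\n",
+ "env = gym.make(\"MountainCarContinuous-v0\")\n",
+ "print(\"Observation space:\", env.observation_space)  # car position and velocity bounds\n",
+ "print(\"Action space:\", env.action_space)  # power coefficient in [-1.0, 1.0]\n",
+ "env.close()"
+ ]
+ },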
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Wrap Gym environment in a scikit-decide domain"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We choose the gym environment we would like to use."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "ENV_NAME = \"MountainCarContinuous-v0\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We define a domain factory using `GymDomain` proxy available in scikit-decide which will wrap the Gym environment."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "domain_factory = lambda: GymDomain(gym.make(ENV_NAME))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Here is a screenshot of such an environment. \n",
+ "\n",
+ "Note: We close the domain straight away to avoid leaving the OpenGL pop-up window open on local Jupyter sessions."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "domain = domain_factory()\n",
+ "domain.reset()\n",
+ "plt.imshow(domain.render(mode=\"rgb_array\"))\n",
+ "plt.axis(\"off\")\n",
+ "domain.close()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Solve with Reinforcement Learning (StableBaseline + PPO)\n",
+ "\n",
+ "We first try a solver coming from the Reinforcement Learning community that is make use of OpenAI [stable_baselines3](https://github.com/DLR-RM/stable-baselines3), which give access to a lot of RL algorithms.\n",
+ "\n",
+ "Here we choose [Proximal Policy Optimization (PPO)](https://stable-baselines3.readthedocs.io/en/master/modules/ppo.html) solver. It directly optimizes the weights of the policy network using stochastic gradient ascent. See more details in stable baselines [documentation](https://stable-baselines3.readthedocs.io/en/master/modules/ppo.html) and [original paper](https://arxiv.org/abs/1707.06347). "
+ ]
+ },
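+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "For reference, PPO maximizes a clipped surrogate objective (from the original paper linked above):\n",
+ "\n",
+ "$$L^{CLIP}(\\theta) = \\mathbb{E}_t\\Big[\\min\\big(r_t(\\theta)\\,\\hat{A}_t,\\ \\mathrm{clip}(r_t(\\theta),\\, 1-\\epsilon,\\, 1+\\epsilon)\\,\\hat{A}_t\\big)\\Big]$$\n",
+ "\n",
+ "where $r_t(\\theta)$ is the probability ratio between the new and the old policy and $\\hat{A}_t$ is an estimate of the advantage; the clipping keeps each policy update close to the previous policy."
+ ]
+ },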
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Check compatibility\n",
+ "We check the compatibility of the domain with the chosen solver."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "domain = domain_factory()\n",
+ "assert StableBaseline.check_domain(domain)\n",
+ "domain.close()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Solver instantiation"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "solver = StableBaseline(\n",
+ " SAC, \"MlpPolicy\", learn_config={\"total_timesteps\": 50000}, verbose=True\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Training solver on domain"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "GymDomain.solve_with(solver, domain_factory)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Rolling out a solution\n",
+ "\n",
+ "We can use the trained solver to roll out an episode to see if this is actually solving the problem at hand.\n",
+ "\n",
+ "For educative purpose, we define here our own rollout (which will probably be needed if you want to actually use the solver in a real case). If you want to take a look at the (more complex) one already implemented in the library, see the `rollout()` function in [utils.py](https://github.com/airbus/scikit-decide/blob/master/skdecide/utils.py) module.\n",
+ "\n",
+ "By default we display the solution in a matplotlib figure. If you need only to check wether the goal is reached or not, you can specify `render=False`. In this case, the rollout is greatly speed up and a message is still printed at the end of process specifying success or not, with the number of steps required."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def rollout(\n",
+ " domain: GymDomain,\n",
+ " solver: Solver,\n",
+ " max_steps: int,\n",
+ " pause_between_steps: Optional[float] = 0.01,\n",
+ " render: bool = True,\n",
+ "):\n",
+ " \"\"\"Roll out one episode in a domain according to the policy of a trained solver.\n",
+ "\n",
+ " Args:\n",
+ " domain: the maze domain to solve\n",
+ " solver: a trained solver\n",
+ " max_steps: maximum number of steps allowed to reach the goal\n",
+ " pause_between_steps: time (s) paused between agent movements.\n",
+ " No pause if None.\n",
+ " render: if True, the rollout is rendered in a matplotlib figure as an animation;\n",
+ " if False, speed up a lot the rollout.\n",
+ "\n",
+ " \"\"\"\n",
+ " # Initialize episode\n",
+ " solver.reset()\n",
+ " observation = domain.reset()\n",
+ "\n",
+ " # Initialize image\n",
+ " if render:\n",
+ " plt.ioff()\n",
+ " fig, ax = plt.subplots(1)\n",
+ " ax.axis(\"off\")\n",
+ " plt.ion()\n",
+ " img = ax.imshow(domain.render(mode=\"rgb_array\"))\n",
+ " display(fig)\n",
+ "\n",
+ " # loop until max_steps or goal is reached\n",
+ " for i_step in range(1, max_steps + 1):\n",
+ " if pause_between_steps is not None:\n",
+ " sleep(pause_between_steps)\n",
+ "\n",
+ " # choose action according to solver\n",
+ " action = solver.sample_action(observation)\n",
+ " # get corresponding action\n",
+ " outcome = domain.step(action)\n",
+ " observation = outcome.observation\n",
+ "\n",
+ " # update image\n",
+ " if render: \n",
+ " img.set_data(domain.render(mode=\"rgb_array\"))\n",
+ " fig.canvas.draw()\n",
+ " clear_output(wait=True)\n",
+ " display(fig)\n",
+ "\n",
+ " # final state reached?\n",
+ " if outcome.termination:\n",
+ " break\n",
+ "\n",
+ " # close the figure to avoid jupyter duplicating the last image\n",
+ " if render:\n",
+ " plt.close(fig)\n",
+ "\n",
+ " # goal reached?\n",
+ " is_goal_reached = observation[0] >= 0.45\n",
+ " if is_goal_reached:\n",
+ " print(f\"Goal reached in {i_step} steps!\")\n",
+ " else:\n",
+ " print(f\"Goal not reached after {i_step} steps!\")\n",
+ "\n",
+ " return is_goal_reached, i_step"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We create a domain for the roll out and close it at the end. If not closing it, an OpenGL popup windows stays open, at least on local Jupyter sessions."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "scrolled": false
+ },
+ "outputs": [],
+ "source": [
+ "domain = domain_factory()\n",
+ "try:\n",
+ " rollout(domain=domain, solver=solver, max_steps=999, pause_between_steps=None, render=True)\n",
+ "finally:\n",
+ " domain.close()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We can see that PPO does not find a solution to the problem. This is mainly due to the way the reward is computed. Indeed negative reward accumulates as long as the goal is not reached, which encourages the agent to stop moving.\n",
+ "\n",
+ "Actually, typical RL algorithms like PPO are a good fit for domains with \"well-shaped\" rewards (guiding towards the goal), but can struggle in sparse or \"badly-shaped\" reward environment like Mountain Car Continuous. \n",
+ "\n",
+ "We will see in the next sections that non-RL methods can overcome this issue."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Cleaning up"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Some solvers need proper cleaning before being deleted."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "solver._cleanup()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Note that this is automatically done if you use the solver within a `with` statement. The syntax would look something like:\n",
+ "\n",
+ "```python\n",
+ "with solver_factory() as solver:\n",
+ " MyDomain.solve_with(solver, domain_factory)\n",
+ " rollout(domain=domain, solver=solver\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Solve with Cartesian Genetic Programming (CGP)\n",
+ "\n",
+ "CGP (Cartesian Genetic Programming) is a form of genetic programming that uses a graph representation (2D grid of nodes) to encode computer programs.\n",
+ "See [Miller, Julian. (2003). Cartesian Genetic Programming. 10.1007/978-3-642-17310-3.](https://www.researchgate.net/publication/2859242_Cartesian_Genetic_Programming) for more details.\n",
+ "\n",
+ "Pros:\n",
+ "+ ability to customize the set of atomic functions used by CPG (e.g. to inject some domain knowledge)\n",
+ "+ ability to inspect the final formula found by CGP (no black box)\n",
+ "\n",
+ "Cons:\n",
+ "- the fitness function of CGP is defined by the rewards, so can be unable to solve in sparse reward scenarios"
+ ]
+ },
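+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "To make the encoding more concrete, here is a minimal, self-contained toy sketch of how a sequence of CGP-style nodes can encode a formula over the two observation variables. It does *not* use scikit-decide's CGP implementation (whose function set and genome layout differ); it only illustrates the idea of nodes wiring program inputs and earlier nodes through atomic functions."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import math\n",
+ "\n",
+ "# Toy CGP-style genome: each node is (function_index, input_a, input_b), where an\n",
+ "# input index refers either to a program input (0..n_inputs-1) or to an earlier node.\n",
+ "FUNCTIONS = [\n",
+ "    lambda a, b: a + b,  # 0: addition\n",
+ "    lambda a, b: a * b,  # 1: multiplication\n",
+ "    lambda a, b: math.sin(a),  # 2: sine (ignores its second input)\n",
+ "]\n",
+ "\n",
+ "\n",
+ "def evaluate(genome, inputs, output_node):\n",
+ "    values = list(inputs)  # node values, starting with the program inputs\n",
+ "    for f_idx, a_idx, b_idx in genome:\n",
+ "        values.append(FUNCTIONS[f_idx](values[a_idx], values[b_idx]))\n",
+ "    return values[output_node]\n",
+ "\n",
+ "\n",
+ "# Example: node 2 = position * velocity, node 3 = sin(node 2)\n",
+ "genome = [(1, 0, 1), (2, 2, 2)]\n",
+ "print(evaluate(genome, inputs=[-0.5, 0.01], output_node=3))"
+ ]
+ },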
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Check compatibility\n",
+ "We check the compatibility of the domain with the chosen solver."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "domain = domain_factory()\n",
+ "assert CGP.check_domain(domain)\n",
+ "domain.close()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Solver instantiation"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "solver = CGP(\"TEMP_CGP\", n_it=25, verbose=True)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Training solver on domain"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "GymDomain.solve_with(solver, domain_factory)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Rolling out a solution"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We use the same roll out function as for PPO solver."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "scrolled": false
+ },
+ "outputs": [],
+ "source": [
+ "domain = domain_factory()\n",
+ "try:\n",
+ " rollout(domain=domain, solver=solver, max_steps=999, pause_between_steps=None, render=True)\n",
+ "finally:\n",
+ " domain.close()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "CGP seems doing well on this problem. Indeed the presence of periodic functions ($asin$, $acos$, and $atan$) in its base set of atomic functions makes it suitable for modelling this kind of pendular motion."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "***Warning***: On some cases, it happens that CGP does not actually find a solution. As there is randomness here, this is not possible. Running multiple episodes can sometimes solve the problem. If you have bad luck, you will even have to train again the solver."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "for i_episode in range(10):\n",
+ " print(f\"Episode #{i_episode}\")\n",
+ " domain = domain_factory()\n",
+ " try:\n",
+ " rollout(domain=domain, solver=solver, max_steps=999, pause_between_steps=None, render=False)\n",
+ " finally:\n",
+ " domain.close()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Cleaning up"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "solver._cleanup()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Solve with Classical Planning (IW)\n",
+ "\n",
+ "Iterated Width (IW) is a width based search algorithm that builds a graph on-demand, while pruning non-novel nodes. \n",
+ "\n",
+ "In order to handle continuous domains, a state encoding specific to continuous state variables dynamically and adaptively discretizes the continuous state variables in such a way to build a compact graph based on intervals (rather than a naive grid of discrete point values). \n",
+ "\n",
+ "The novelty measures discards intervals that are included in previously explored intervals, thus favoring to extend the state variable intervals. \n",
+ "\n",
+ "See https://www.ijcai.org/proceedings/2020/578 for more details."
+ ]
+ },
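+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The core pruning idea can be illustrated with a toy sketch of the IW(1) novelty test (this is *not* scikit-decide's implementation): a state is considered novel, and thus kept, only if at least one of its features takes a value never seen before during the search."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Toy IW(1) novelty test: keep a state only if one of its feature values is new.\n",
+ "def make_novelty_checker():\n",
+ "    seen = set()\n",
+ "\n",
+ "    def is_novel(features) -> bool:\n",
+ "        new_values = {(i, value) for i, value in enumerate(features)} - seen\n",
+ "        seen.update(new_values)\n",
+ "        return len(new_values) > 0\n",
+ "\n",
+ "    return is_novel\n",
+ "\n",
+ "\n",
+ "is_novel = make_novelty_checker()\n",
+ "print(is_novel((0, 1)))  # True: all feature values are new\n",
+ "print(is_novel((0, 2)))  # True: the second feature takes a new value\n",
+ "print(is_novel((0, 1)))  # False: nothing new, this state would be pruned"
+ ]
+ },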
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Prepare the domain for IW\n",
+ "\n",
+ "We need to wrap the Gym environment in a domain with finer charateristics so that IW can be used on it. More precisely, it needs the methods inherited from `GymPlanningDomain`, `GymDiscreteActionDomain` and `GymWidthDomain`. In addition, we will need to provide to IW a state features function to dynamically increase state variable intervals. For Gym domains, we use Boundary Extension Encoding (BEE) features as explained in the [paper](https://www.ijcai.org/proceedings/2020/578) mentioned above. This is implemented as `bee2_features()` method in `GymWidthDomain` that our domain class will inherit."
+ ]
+ },
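+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "To give an intuition of how a continuous value can be turned into interval-based features, here is a toy sketch (it is *not* the actual `bee2_features()` implementation): each value is mapped to a tuple of nested half-interval indices, one per fidelity level, so that a novelty test can operate on discrete interval ids rather than on raw floats."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Toy interval encoding of a continuous value within known bounds (not bee2_features).\n",
+ "def interval_features(value, low, high, fidelity=3):\n",
+ "    features = []\n",
+ "    for _ in range(fidelity):\n",
+ "        mid = (low + high) / 2.0\n",
+ "        if value < mid:\n",
+ "            features.append(0)  # value falls in the lower half\n",
+ "            high = mid\n",
+ "        else:\n",
+ "            features.append(1)  # value falls in the upper half\n",
+ "            low = mid\n",
+ "    return tuple(features)\n",
+ "\n",
+ "\n",
+ "# Car position -0.5 within its bounds [-1.2, 0.6]\n",
+ "print(interval_features(-0.5, low=-1.2, high=0.6))"
+ ]
+ },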
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "class D(GymPlanningDomain, GymWidthDomain, GymDiscreteActionDomain):\n",
+ " pass\n",
+ "\n",
+ "\n",
+ "class GymDomainForWidthSolvers(D):\n",
+ " def __init__(\n",
+ " self,\n",
+ " gym_env: gym.Env,\n",
+ " set_state: Callable[[gym.Env, D.T_memory[D.T_state]], None] = None,\n",
+ " get_state: Callable[[gym.Env], D.T_memory[D.T_state]] = None,\n",
+ " termination_is_goal: bool = True,\n",
+ " continuous_feature_fidelity: int = 5,\n",
+ " discretization_factor: int = 3,\n",
+ " branching_factor: int = None,\n",
+ " max_depth: int = 1000,\n",
+ " ) -> None:\n",
+ " GymPlanningDomain.__init__(\n",
+ " self,\n",
+ " gym_env=gym_env,\n",
+ " set_state=set_state,\n",
+ " get_state=get_state,\n",
+ " termination_is_goal=termination_is_goal,\n",
+ " max_depth=max_depth,\n",
+ " )\n",
+ " GymDiscreteActionDomain.__init__(\n",
+ " self,\n",
+ " discretization_factor=discretization_factor,\n",
+ " branching_factor=branching_factor,\n",
+ " )\n",
+ " GymWidthDomain.__init__(\n",
+ " self, continuous_feature_fidelity=continuous_feature_fidelity\n",
+ " )\n",
+ " gym_env._max_episode_steps = max_depth\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We redefine accordingly the domain factory."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "domain4width_factory = lambda: GymDomainForWidthSolvers(gym.make(ENV_NAME))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Check compatibility\n",
+ "We check the compatibility of the domain with the chosen solver."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "domain = domain4width_factory()\n",
+ "assert IW.check_domain(domain)\n",
+ "domain.close()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Solver instantiation"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As explained earlier, we use the Boundary Extension Encoding state features `bee2_features` so that IW can dynamically increase state variable intervals. In other domains, other state features might be more suitable."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "solver = IW(\n",
+ " state_features=lambda d, s: d.bee2_features(s),\n",
+ " node_ordering=lambda a_gscore, a_novelty, a_depth, b_gscore, b_novelty, b_depth: a_novelty\n",
+ " > b_novelty,\n",
+ " parallel=False,\n",
+ " debug_logs=False,\n",
+ " domain_factory=domain4width_factory,\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Training solver on domain"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "GymDomainForWidthSolvers.solve_with(solver, domain4width_factory)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Rolling out a solution\n",
+ "\n",
+ "**Disclaimer:** This roll out can be a bit painful to look on local Jupyter sessions. Indeed, IW creates copies of the environment at each step which makes pop up then close a new OpenGL window each time."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We have to slightly modify the roll out function as observations for the new domain are now wrapped in a `GymDomainProxyState` to make them serializable. So to get access to the underlying numpy array, we need to look for `observation._state`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def rollout_iw(\n",
+ " domain: GymDomain,\n",
+ " solver: Solver,\n",
+ " max_steps: int,\n",
+ " pause_between_steps: Optional[float] = 0.01,\n",
+ " render: bool = False,\n",
+ "):\n",
+ " \"\"\"Roll out one episode in a domain according to the policy of a trained solver.\n",
+ "\n",
+ " Args:\n",
+ " domain: the maze domain to solve\n",
+ " solver: a trained solver\n",
+ " max_steps: maximum number of steps allowed to reach the goal\n",
+ " pause_between_steps: time (s) paused between agent movements.\n",
+ " No pause if None.\n",
+ " render: if True, the rollout is rendered in a matplotlib figure as an animation;\n",
+ " if False, speed up a lot the rollout.\n",
+ "\n",
+ " \"\"\"\n",
+ " # Initialize episode\n",
+ " solver.reset()\n",
+ " observation = domain.reset()\n",
+ "\n",
+ " # Initialize image\n",
+ " if render:\n",
+ " plt.ioff()\n",
+ " fig, ax = plt.subplots(1)\n",
+ " ax.axis(\"off\")\n",
+ " plt.ion()\n",
+ " img = ax.imshow(domain.render(mode=\"rgb_array\"))\n",
+ " display(fig)\n",
+ "\n",
+ " # loop until max_steps or goal is reached\n",
+ " for i_step in range(1, max_steps + 1):\n",
+ " if pause_between_steps is not None:\n",
+ " sleep(pause_between_steps)\n",
+ "\n",
+ " # choose action according to solver\n",
+ " action = solver.sample_action(observation)\n",
+ " # get corresponding action\n",
+ " outcome = domain.step(action)\n",
+ " observation = outcome.observation\n",
+ "\n",
+ " # update image\n",
+ " if render:\n",
+ " img.set_data(domain.render(mode=\"rgb_array\"))\n",
+ " fig.canvas.draw()\n",
+ " clear_output(wait=True)\n",
+ " display(fig)\n",
+ "\n",
+ " # final state reached?\n",
+ " if outcome.termination:\n",
+ " break\n",
+ "\n",
+ " # close the figure to avoid jupyter duplicating the last image\n",
+ " if render:\n",
+ " plt.close(fig)\n",
+ "\n",
+ " # goal reached?\n",
+ " is_goal_reached = observation._state[0] >= 0.45\n",
+ " if is_goal_reached:\n",
+ " print(f\"Goal reached in {i_step} steps!\")\n",
+ " else:\n",
+ " print(f\"Goal not reached after {i_step} steps!\")\n",
+ "\n",
+ " return is_goal_reached, i_step"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "domain = domain4width_factory()\n",
+ "try:\n",
+ " rollout_iw(domain=domain, solver=solver, max_steps=999, pause_between_steps=None, render=True)\n",
+ "finally:\n",
+ " domain.close()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "IW works especially well in mountain car. \n",
+ "\n",
+ "Indeed we need to increase the cinetic+potential energy to reach the goal, which comes to increase as much as possible the values of the state variables (position and velocity). This is exactly what IW is designed to do (trying to explore novel states, which means here with higher position or velocity). \n",
+ "\n",
+ "As a consequence, IW can find an optimal strategy in a few seconds (whereas in most cases PPO and CGP can't find optimal strategies in the same computation time)."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Cleaning up"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "solver._cleanup()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Conclusion"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We saw that it is possible thanks to scikit-decide to apply solvers from different fields and communities (Reinforcement Learning, Genetic Programming, and Planning) on a OpenAI Gym Environment.\n",
+ "\n",
+ "Even though the domain used here is more classical for RL community, the solvers from other communities performed far better. In particular the IW algorithm was able to find an efficient solution in a very short time."
+ ]
+ }
+ ],
+ "metadata": {
+ "anaconda-cloud": {},
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.7.10"
+ },
+ "toc": {
+ "base_numbering": 1,
+ "nav_menu": {},
+ "number_sections": true,
+ "sideBar": true,
+ "skip_h1_title": true,
+ "title_cell": "Table of Contents",
+ "title_sidebar": "Contents",
+ "toc_cell": false,
+ "toc_position": {
+ "height": "calc(100% - 180px)",
+ "left": "10px",
+ "top": "150px",
+ "width": "187px"
+ },
+ "toc_section_display": true,
+ "toc_window_display": true
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 1
+}