A collection of scenarios and efficient benchmarks for the ViZDoom RL environment. For further details, refer to the paper.
This repository includes:
- Source code for generation of custom scenarios for the ViZDoom simulator
- Source code for training new agents with the GPU-batched A2C algorithm
- Detailed instructions on how to evaluate pretrained agents and train new ones
- Example videos of rollouts of the agent.
Requirements:
- Ubuntu 16.04+ (there is no reason this should not work on macOS or Windows, but it has not been tested)
- Python 3.5+
- PyTorch 0.4.0+
- ViZDoom 1.1.4 (if evaluating a pretrained model; otherwise the latest version should be fine)
- ViZDoom has many dependencies, which are described on their site; make sure to install the ZDoom dependencies (a sketch of a typical Ubuntu install is given below)
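On Ubuntu, the ZDoom dependencies can typically be installed via apt. The package list below is a sketch based on the ViZDoom documentation for the 1.1.x releases; check their site for the authoritative, current list:

sudo apt-get install build-essential zlib1g-dev libsdl2-dev libjpeg-dev \
    nasm tar libbz2-dev libgtk2.0-dev cmake git libfluidsynth-dev libgme-dev \
    libopenal-dev timidity libwildmidi-dev unzip
sudo apt-get install libboost-all-dev          # Boost libraries
sudo apt-get install python3-dev python3-pip   # Python 3 dependencies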
Installation:
- Clone this repo
- Assuming you are using a venv, activate it and install the packages listed in requirements.txt (a sketch of these steps is given below)
- Test the installation with the following command, which should train an agent for 100,000 frames in the basic health gathering scenario:
python 3dcdrl/train_agent.py --num_frames 100000
Note that training this agent to convergence takes between 5 and 10 million frames.
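Putting the steps above together, a minimal install might look like the following. The repository URL placeholder, directory name, and venv name are illustrative, so adjust them to your setup:

# clone this repository and enter it (substitute the real URL / directory)
git clone <this-repo-url>
cd <repo-dir>
# create and activate a venv, then install the Python dependencies
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
# sanity check: train for 100,000 frames on the basic health gathering scenario
python 3dcdrl/train_agent.py --num_frames 100000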
As detailed in the paper, there are a number of scenarios. We include a script, generate_scenarios.sh, in the repo that will generate the following scenarios:
- Labyrinth: sizes 5, 7, 9, 11, 13
- Find and return: sizes 5, 7, 9, 11, 13
- K-item: 2, 4, 6, 8 items
- Two color correlation: 10%, 30%, 50%, and 70% of walls retained
This takes around 10 minutes, so grab a coffee. If you only wish to generate one scenario type, take a look at the script; it should be clear what you need to change.
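Assuming the script is run from the repository root, generation is a single command; the output locations in the comments are the ones used by the training and evaluation commands later in this README:

# generate all scenario variants (roughly 10 minutes)
bash generate_scenarios.sh
# generated scenarios end up under, for example:
#   3dcdrl/scenarios/custom_scenarios/labyrinth/<size>/{train,test}/
#   3dcdrl/scenarios/custom_scenarios/find_return/<size>/{train,test}/
#   3dcdrl/scenarios/custom_scenarios/kitem/<num_items>/{train,test}/
#   3dcdrl/scenarios/custom_scenarios/two_color/<difficulty>/{train,test}/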
We include pretrained models in the repo that you can test out, or you can train your own agents from scratch. The evaluation code will output example rollouts for all 64 test scenarios.
Labyrinth evaluation:
SIZE=9
python 3dcdrl/create_rollout_videos.py --limit_actions \
--scenario_dir 3dcdrl/scenarios/custom_scenarios/labyrinth/$SIZE/test/ \
--scenario custom_scenario{:003}.cfg --model_checkpoint \
3dcdrl/saved_models/labyrinth_$SIZE\_checkpoint_0198658048.pth.tar \
--multimaze --num_mazes_test 64
Labyrinth training:
SIZE=9
python 3dcdrl/train_agent.py --scenario custom_scenario{:003}.cfg \
--limit_actions \
--scenario_dir 3dcdrl/scenarios/custom_scenarios/labyrinth/$SIZE/train/ \
--test_scenario_dir 3dcdrl/scenarios/custom_scenarios/labyrinth/$SIZE/test/ \
--multimaze --num_mazes_train 256 --num_mazes_test 64 --fixed_scenario
Find and return evaluation:
SIZE=9
python 3dcdrl/create_rollout_videos.py --limit_actions \
--scenario_dir 3dcdrl/scenarios/custom_scenarios/find_return/$SIZE/test/ \
--scenario custom_scenario{:003}.cfg --model_checkpoint \
3dcdrl/saved_models/find_return_$SIZE\_checkpoint_0198658048.pth.tar \
--multimaze --num_mazes_test 64
Find and return training:
SIZE=9
python 3dcdrl/train_agent.py --scenario custom_scenario{:003}.cfg \
--limit_actions \
--scenario_dir 3dcdrl/scenarios/custom_scenarios/find_return/$SIZE/train/ \
--test_scenario_dir 3dcdrl/scenarios/custom_scenarios/find_return/$SIZE/test/ \
--multimaze --num_mazes_train 256 --num_mazes_test 64 --fixed_scenario
K-item evaluation:
NUM_ITEMS=4
python 3dcdrl/create_rollout_videos.py --limit_actions \
--scenario_dir 3dcdrl/scenarios/custom_scenarios/kitem/$NUM_ITEMS/test/ \
--scenario custom_scenario{:003}.cfg --model_checkpoint \
3dcdrl/saved_models/$NUM_ITEMS\item_checkpoint_0198658048.pth.tar \
--multimaze --num_mazes_test 64
K-item training:
NUM_ITEMS=4
python 3dcdrl/train_agent.py --scenario custom_scenario{:003}.cfg \
--limit_actions \
--scenario_dir 3dcdrl/scenarios/custom_scenarios/kitem/$NUM_ITEMS/train/ \
--test_scenario_dir 3dcdrl/scenarios/custom_scenarios/kitem/$NUM_ITEMS/test/ \
--multimaze --num_mazes_train 256 --num_mazes_test 64 --fixed_scenario
Two color correlation evaluation:
DIFFICULTY=3
python 3dcdrl/create_rollout_videos.py --limit_actions \
--scenario_dir 3dcdrl/scenarios/custom_scenarios/two_color/$DIFFICULTY/test/ \
--scenario custom_scenario{:003}.cfg --model_checkpoint \
3dcdrl/saved_models/two_col_p$DIFFICULTY\_checkpoint_0198658048.pth.tar \
--multimaze --num_mazes_test 64
Two color correlation training:
DIFFICULTY=3
python 3dcdrl/train_agent.py --scenario custom_scenario{:003}.cfg \
--limit_actions \
--scenario_dir 3dcdrl/scenarios/custom_scenarios/two_color/$DIFFICULTY/train/ \
--test_scenario_dir 3dcdrl/scenarios/custom_scenarios/two_color/$DIFFICULTY/test/ \
--multimaze --num_mazes_train 256 --num_mazes_test 64 --fixed_scenario
In the paper we report frames per second in terms of environment interactions. The agents are trained with a frame skip of 4, which means that for each observation the chosen action is repeated 4 times.
We have made a tradeoff, accepting increased memory usage in order to increase performance. You can reduce the memory footprint by excluding --fixed_scenario from the command line arguments, at the cost of roughly a 10% drop in efficiency.
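For example, the labyrinth training command from above becomes the following when --fixed_scenario is dropped (everything else is unchanged):

SIZE=9
python 3dcdrl/train_agent.py --scenario custom_scenario{:003}.cfg \
--limit_actions \
--scenario_dir 3dcdrl/scenarios/custom_scenarios/labyrinth/$SIZE/train/ \
--test_scenario_dir 3dcdrl/scenarios/custom_scenarios/labyrinth/$SIZE/test/ \
--multimaze --num_mazes_train 256 --num_mazes_test 64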
If you find this useful, consider citing the following:
@inproceedings{beeching2020baselines,
  title={Deep Reinforcement Learning on a Budget: 3D Control and Reasoning Without a Supercomputer},
  author={Beeching, Edward and Dibangoye, Jilles and Simonin, Olivier and Wolf, Christian},
  booktitle={International Conference on Pattern Recognition},
  year={2020}
}