In this project, you will work with the Tennis environment.
The goal of this environment is for the two agents to bounce the ball back and forth over the net without letting it hit the ground. An agent receives a reward of +0.1 when it hits the ball over the net, and a reward of -0.01 when it lets the ball hit the ground or hits it out of bounds.
The observation space consists of 8 variables corresponding to the position and velocity of the ball and racket, and each agent receives its own local observation. Two continuous actions in the range [-1, 1] are available, corresponding to movement toward (or away from) the net, and jumping.
The task is episodic, and in order to solve the environment, your agents must get an average score of +0.5 over 100 consecutive episodes, after taking the maximum over both agents.
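Concretely, the score used by this criterion can be computed as in the following minimal sketch (assumptions: the per-step rewards of each agent are collected over one episode; the names `record_episode` and `scores_window` are illustrative, not taken from this repository):

```python
from collections import deque
import numpy as np

scores_window = deque(maxlen=100)  # scores of the last 100 episodes

def record_episode(rewards_agent_0, rewards_agent_1):
    """rewards_agent_*: list of per-step rewards one agent received in an episode."""
    # Episode score = maximum over the two agents' (undiscounted) returns.
    score = max(sum(rewards_agent_0), sum(rewards_agent_1))
    scores_window.append(score)
    # Solved once the average over 100 consecutive episodes reaches +0.5.
    solved = len(scores_window) == 100 and np.mean(scores_window) >= 0.5
    return score, solved
```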
- Python 3.6
- PyTorch 0.4.0
- ML-Agents Beta v0.4
NOTE: (For Windows users) The ML-Agents toolkit supports Windows 10. While it might be possible to run the ML-Agents toolkit on other versions of Windows, it has not been tested on them. Furthermore, the ML-Agents toolkit has not been tested on a Windows VM such as Bootcamp or Parallels.
- Create (and activate) a new environment with Python 3.6 via Anaconda.
  - Linux or Mac:
    ```
    conda create --name your_env_name python=3.6
    source activate your_env_name
    ```
  - Windows:
    ```
    conda create --name your_env_name python=3.6
    activate your_env_name
    ```
- Clone the repository, and navigate to the `python/` folder. Then, install several dependencies (see `requirements.txt`).
  ```
  git clone https://github.com/4kasha/Multi_Agent_DDPG.git
  cd Multi_Agent_DDPG/python
  pip install .
  ```
- Download the environment from one of the links below. You need only select the environment that matches your operating system:
- Linux: click here
- Mac OSX: click here
- Windows (32-bit): click here
- Windows (64-bit): click here
(For AWS) If you'd like to train the agent on AWS (and have not enabled a virtual screen), then please use this link to obtain the "headless" version of the environment.
NOTE: For this project, you will not need to install Unity; the link above provides a standalone build of the environment. Also, the Tennis environment above is similar to, but not identical to, the original on the Unity ML-Agents GitHub page.
- Place the file in this repository (Multi_Agent_DDPG) and unzip (or decompress) it.
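To confirm the build is in place, you can load it from Python with the `unityagents` package installed in step 2. A minimal sketch; the `file_name` path below assumes the Linux build and should be adjusted to whatever you unzipped:

```python
from unityagents import UnityEnvironment

# Example path for the Linux build; point file_name at the file you unzipped.
env = UnityEnvironment(file_name="Tennis_Linux/Tennis.x86_64")

brain_name = env.brain_names[0]   # Tennis exposes a single brain
brain = env.brains[brain_name]

env_info = env.reset(train_mode=True)[brain_name]
print('Number of agents:', len(env_info.agents))                   # expected: 2
print('Observation size:', env_info.vector_observations.shape[1])  # per-agent observation
print('Action size:', brain.vector_action_space_size)              # expected: 2

env.close()
```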
- Before running the code, edit the parameters in `train.py`; in particular, you must change `env_file_name` according to your environment (see the sketch below).
- Run the following command to get started with training your own agents!
  ```
  python train.py
  ```
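For instance, `env_file_name` might be set as follows. This is a hypothetical excerpt, not the actual contents of `train.py`; the paths are the typical names of the downloaded builds, so check them against your unzipped files:

```python
# Hypothetical excerpt: point env_file_name at your downloaded build.
env_file_name = "Tennis_Linux/Tennis.x86_64"          # Linux
# env_file_name = "Tennis.app"                        # Mac OSX
# env_file_name = "Tennis_Windows_x86_64/Tennis.exe"  # Windows (64-bit)
```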
- After training finishes, the weights and scores are saved in the `weights` and `scores` folders, respectively.
- For details of the algorithm, hyperparameter settings, and results, see REPORT.md.
- For the examples of training results, see MARL_Results.ipynb.
- After training, you can watch the agents play using the saved weights in the `weights` folder; see MARL_Watch_Agent.ipynb and the sketch below.
- This project is part of Udacity's Deep Reinforcement Learning Nanodegree program.
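For orientation, watching a trained agent roughly amounts to resetting the environment with `train_mode=False` and stepping it with actions from the trained policy. The following minimal sketch uses a random placeholder policy (swap in actions from your loaded actor networks); the environment path is an example:

```python
import numpy as np
from unityagents import UnityEnvironment

env = UnityEnvironment(file_name="Tennis_Linux/Tennis.x86_64")  # example path
brain_name = env.brain_names[0]
env_info = env.reset(train_mode=False)[brain_name]  # train_mode=False plays in real time

num_agents = len(env_info.agents)
action_size = env.brains[brain_name].vector_action_space_size
scores = np.zeros(num_agents)

while True:
    # Placeholder policy: replace with actions from your trained actors.
    actions = np.clip(np.random.randn(num_agents, action_size), -1, 1)
    env_info = env.step(actions)[brain_name]
    scores += env_info.rewards
    if np.any(env_info.local_done):
        break

print('Episode score (max over agents):', np.max(scores))
env.close()
```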