
[RLlib] Add PPO multi-agent StatelessCartPole learning tests. #47196

Merged

Conversation

@sven1977 (Contributor) commented Aug 19, 2024

Add PPO multi-agent StatelessCartPole learning tests to CI.

  • single (CPU) Learner
  • 2 CPUs
  • single GPU Learner
  • 2 GPUs
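
For context, a hedged sketch of what such a tuned example typically looks like. The environment name "multi_stateless_cart" comes from the diff below; the builder-method names follow RLlib's new API stack and may differ by Ray version, and the LSTM model options are an assumption (StatelessCartPole hides the velocity components of the observation, so the policy needs memory to learn):

```python
# Sketch of a tuned-example config for these tests -- not the PR's exact code.
from ray.rllib.algorithms.ppo import PPOConfig

config = (
    PPOConfig()
    .environment("multi_stateless_cart")
    .env_runners(num_env_runners=2)
    # The four CI variants differ only in Learner placement:
    # 1 CPU Learner, 2 CPU Learners, 1 GPU Learner, or 2 GPU Learners.
    .learners(num_learners=2, num_gpus_per_learner=1)
    .training(model={"use_lstm": True, "max_seq_len": 20})
)
```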

Checks

  • I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
    • I've added any new APIs to the API Reference. For example, if I added a
      method in Tune, I've added it in doc/source/tune/api/ under the
      corresponding .rst file.
  • I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

sven1977 added 9 commits June 28, 2024 16:00
Signed-off-by: sven1977 <svenmika1977@gmail.com>
@sven1977 sven1977 enabled auto-merge (squash) August 19, 2024 12:57
@github-actions bot added the "go" label (add ONLY when ready to merge, run all tests) Aug 19, 2024
@simonsays1980 (Collaborator) left a comment

LGTM. It would be nice to have some comments and a refactoring into helper functions.

num_iters,
):
# Count total number of timesteps per module ID.
if isinstance(episodes[0], MultiAgentEpisode):
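
The "count total number of timesteps per module ID" step referenced here can be illustrated with a small, self-contained sketch. The `MultiAgentEpisode` class below is a simplified stand-in for RLlib's, and `count_timesteps_per_module` is a hypothetical helper, not the PR's code:

```python
from collections import defaultdict

# Simplified stand-in for RLlib's MultiAgentEpisode: maps each
# module ID to the number of timesteps that module stepped through.
class MultiAgentEpisode:
    def __init__(self, agent_steps):
        self.agent_steps = agent_steps  # dict: module_id -> timestep count

def count_timesteps_per_module(episodes):
    """Count the total number of timesteps per module ID across episodes."""
    totals = defaultdict(int)
    for ep in episodes:
        for module_id, steps in ep.agent_steps.items():
            totals[module_id] += steps
    return dict(totals)

episodes = [
    MultiAgentEpisode({"p0": 10, "p1": 8}),
    MultiAgentEpisode({"p0": 5}),
]
# count_timesteps_per_module(episodes) -> {"p0": 15, "p1": 8}
```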
Collaborator comment:

Why isn't it possible to use a generator/iterator that can run empty and, if it does, have the learner return?

),
lookback=self.observations.lookback,
space=self.observation_space,
_lb = (
Collaborator comment:

Could we leave a comment on what we are calculating here and when this case can occur, please?

space=self.action_space,
)

_lb = (
Collaborator comment:

Also, could we refactor this into a helper function?

)
.environment("multi_stateless_cart")
.env_runners(
env_to_module_connector=lambda env: MeanStdFilter(multi_agent=True),
Collaborator comment:

I remember we still have this open question of whether we also need to add this connector to the Learner. I think we do not need it: MeanStdFilter rewrites the observations, and the Learner then receives these rewritten observations, correct?

sven1977 (Contributor, Author) replied:

Correct, this connector (and most other env-to-module ones) directly writes back into the episode, thus making the change to the observation permanent. Hence no need to also add it to the Learner pipeline as the Learner pipeline then operates on the already changed episodes/observations.
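
The write-back behavior described above can be illustrated with a small, self-contained sketch. The `Episode` and `MeanStdFilter` classes below are simplified stand-ins, not RLlib's actual APIs; the point is that normalizing in place makes a separate learner-side filter unnecessary:

```python
import math

# Simplified stand-in for an RLlib episode: just a list of scalar observations.
class Episode:
    def __init__(self, observations):
        self.observations = observations

class MeanStdFilter:
    """Toy env-to-module connector: normalizes observations to zero mean /
    unit std using Welford's online algorithm, writing results back into
    the episode so the change is permanent for downstream consumers."""

    def __init__(self):
        self.count = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations

    def _update(self, x):
        self.count += 1
        delta = x - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (x - self.mean)

    def __call__(self, episode):
        for i, obs in enumerate(episode.observations):
            self._update(obs)
            std = math.sqrt(self.m2 / self.count) if self.count > 1 else 1.0
            # Write the normalized value back into the episode itself.
            episode.observations[i] = (obs - self.mean) / (std or 1.0)
        return episode

ep = Episode([1.0, 2.0, 3.0])
MeanStdFilter()(ep)
# ep.observations now holds the normalized values; a Learner reading this
# episode sees the filtered observations without applying its own filter.
```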

Signed-off-by: sven1977 <svenmika1977@gmail.com>
@github-actions github-actions bot disabled auto-merge August 20, 2024 11:06
…ppo_multi_agent_stateless_cartpole

Signed-off-by: sven1977 <svenmika1977@gmail.com>

# Conflicts:
#	rllib/BUILD
#	rllib/core/learner/learner_group.py
#	rllib/env/single_agent_episode.py
#	rllib/tuned_examples/ppo/multi_agent_pendulum_ppo.py
#	rllib/utils/minibatch_utils.py
Signed-off-by: sven1977 <svenmika1977@gmail.com>
@sven1977 sven1977 merged commit c8baeb2 into ray-project:master Aug 23, 2024
5 checks passed