Add docstrings for new stream class, update sphinx docs #44
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

```
@@            Coverage Diff            @@
##              main       #44   +/-   ##
=========================================
  Coverage   100.00%   100.00%
=========================================
  Files           11        11
  Lines          551       551
=========================================
  Hits           551       551
```
Force-pushed from 2bf0e2a to 9c549a5, then from 9c549a5 to 71598d1.
👍
Just some typos and the possibility of mentioning that resources are overcommitted.
```
for execution, until yeilded by :meth:`run`, where they are de-serialized on
the driver.
```

```
This executor will attempt to pack the cluster, irrespective of any other
```
Should we mention somewhere that we're not just packing but overcommitting?
I don't think that is necessary; it relies on the Ray scheduler to make sure tasks are run. The mention above is just to say that it does not consider any other workloads.
```
already been initialized

Args:
    ray_poll_timeout: After scheduling as many tasks as can fit, ray.wait on
```
Should we mention somewhere that we're not just packing but overcommitting? Like maybe (also) here 🤷?
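For context on the pattern the `ray_poll_timeout` fragment above describes, here is a minimal sketch of a pack-then-poll scheduling loop. The `work` task, the work-item list, and the in-flight cap of 8 are hypothetical illustrations, not the PR's actual code:

```python
import ray

ray.init()  # assumes a local Ray instance; cluster config omitted

@ray.remote(num_cpus=1)
def work(item):
    return item * 2  # placeholder workload

pending = list(range(100))   # hypothetical work items
in_flight = []
results = []
ray_poll_timeout = 5.0       # seconds; illustrative value

while pending or in_flight:
    # Pack the cluster: keep submitting until a hypothetical cap is hit
    # (a real executor would size this from cluster resources).
    while pending and len(in_flight) < 8:
        in_flight.append(work.remote(pending.pop()))
    # Block for at most ray_poll_timeout seconds for any task to finish,
    # then loop back and try to schedule more.
    done, in_flight = ray.wait(in_flight, num_returns=1,
                               timeout=ray_poll_timeout)
    results.extend(ray.get(done))
```

In this sketch the timeout keeps the loop from blocking indefinitely: even if nothing finishes, the loop wakes up and re-checks whether more tasks can be scheduled.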
Co-authored-by: Jamie Noss <jnoss@jhu.edu>
@jamienoss, thanks, I probably should have run a spell-checker - and I had no idea I had such a tendency to type yeilded 😆
👍
Though, I find it important to know whether a scheduler will overcommit, so that I have a better handle on resource expectations. Maybe I should just always assume that everything overcommits.
Thanks. And to be clear, it doesn't overcommit in the sense of running more tasks than there are resources and expecting that some will not use what they request. Ray will ensure that the resources a task requests are guaranteed for it (e.g. if you have 8 cores and tasks requesting 5 and 4, both will not run at the same time).
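To make that concrete, a minimal sketch of Ray's resource accounting; the 8-CPU node, task names, and sleeps are hypothetical illustrations:

```python
import time
import ray

ray.init(num_cpus=8)  # hypothetical single node with 8 CPUs

@ray.remote(num_cpus=5)
def five_cpu_task():
    time.sleep(1)  # stand-in for real work

@ray.remote(num_cpus=4)
def four_cpu_task():
    time.sleep(1)  # stand-in for real work

# Together these request 9 CPUs on an 8-CPU node. Ray reserves the CPUs
# each task requests, so the second task waits for the first to finish
# rather than both running at once.
ray.get([five_cpu_task.remote(), four_cpu_task.remote()])
```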
Ok, good to know - I guess I misunderstood a comment in #43, but never mind. Maybe I'm just misremembering it lol.
Adding some docstrings for new classes and methods. Also subbing out some pages for the manual.
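For reference, the fragments quoted in the review threads above are pieces of a Google-style docstring with Sphinx cross-references. A minimal illustrative sketch of how such pieces fit together; the class name and the completion of the `ray_poll_timeout` description are hypothetical, not the PR's actual code:

```python
class StreamExecutor:
    """Execute tasks on a Ray cluster, packing in as many as will fit.

    Tasks are serialized and submitted for execution, until yielded by
    :meth:`run`, where they are de-serialized on the driver.

    Args:
        ray_poll_timeout: After scheduling as many tasks as can fit,
            ``ray.wait`` blocks for at most this many seconds before the
            scheduling loop runs again.
    """
```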