
Changes to support ramp-up feature #725

Merged 3 commits into opensearch-project:main on Jan 15, 2025

Conversation

rishabh6788 (Collaborator)

Description

As mentioned in #713 this PR introduces a new task property, ramp-up-time-period. When provided, opensearch-benchmark will gradually scale up clients based on the time-period and number of clients provided.
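To make the behavior concrete, here is a minimal sketch of how a ramp-up window could stagger client start times. The helper name and the even-spacing scheme are assumptions for illustration, not code taken from this PR:

```python
def ramp_up_delay(ramp_up_time_period: float, total_clients: int, client_index: int) -> float:
    # Spread client start times evenly across the ramp-up window:
    # client 0 starts immediately, later clients join at equal intervals.
    if total_clients <= 1 or ramp_up_time_period <= 0:
        return 0.0
    return ramp_up_time_period * client_index / total_clients

# With ramp-up-time-period=60 and 4 clients, clients would start
# at 0, 15, 30, and 45 seconds into the window.
```

Under this scheme the full client count is only reached near the end of the configured time period, which is the gradual scale-up the description refers to.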

Issues Resolved

#713

Testing

  • New functionality includes testing



By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

Signed-off-by: Rishabh Singh <sngri@amazon.com>
@OVI3D0 (Member) left a comment:

LGTM, apart from the linter errors of course. I ran this locally and all the unit tests pass otherwise.

Also, should the documentation be updated to include this new ramp up time period property?

@rishabh6788 (Collaborator, Author) replied:

> LGTM, apart from the linter errors of course. I ran this locally and all the unit tests pass otherwise.
>
> Also, should the documentation be updated to include this new ramp up time period property?

Thank you. Yes, I'm working on a doc that explains how to use this feature.

@IanHoang IanHoang self-assigned this Jan 14, 2025
Comment on lines +1838 to +1839:

```python
return f"TaskAllocation [{self.client_index_in_task}/{self.task.clients}] for {self.task} " \
       f"and [{self.global_client_index}/{self.total_clients}] in total"
```
A collaborator commented:
Hoping to understand this better: each test has a total number of clients, and each task is allocated a group of those clients. So self.client_index_in_task tracks which client within the allocated group is performing the task, while self.global_client_index tracks which client out of all clients is currently performing this task?

A member replied:

That's pretty much how I understood it. So now TaskAllocation knows both the client's position within a task (self.client_index_in_task) as well as the client's position in the overall benchmark (self.global_client_index). Maybe @rishabh6788 can correct me if I'm wrong.

@rishabh6788 (Collaborator, Author) replied:

Yes. Consider a case where we have 3 parallel tasks a, b, and c: we declare a total of 3 clients at the parallel-task level, then 1 client each for tasks a and b and 2 clients for task c.
In the task allocation matrix, the clients declared at the parallel-block level provide the global client indices, while the clients declared at each task level are used as the per-task client indices. It does get a bit confusing, so let me know if you have any questions.
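The relationship between the two indices can be shown with a simplified sketch. This is illustrative only: the function name is hypothetical, and the real allocation matrix in opensearch-benchmark's worker coordinator is richer than a flat enumeration:

```python
def allocate(tasks):
    """tasks: list of (task_name, clients_for_task) pairs.
    Returns (task_name, client_index_in_task, global_client_index) tuples.
    Simplified: the global index just enumerates clients across all tasks."""
    allocations = []
    global_index = 0
    for name, clients in tasks:
        for index_in_task in range(clients):
            allocations.append((name, index_in_task, global_index))
            global_index += 1
    return allocations

# Tasks a and b with 1 client each, task c with 2 clients:
# allocate([("a", 1), ("b", 1), ("c", 2)])
# -> [("a", 0, 0), ("b", 0, 1), ("c", 0, 2), ("c", 1, 3)]
```

Each tuple mirrors what the __repr__ in the PR prints: the per-task index/count pair and the global index/count pair for the same client.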

@IanHoang (Collaborator) left a comment:

Left one question but functionality looks good to me

@rishabh6788 rishabh6788 merged commit 4d96c35 into opensearch-project:main Jan 15, 2025
10 checks passed