Add e2e Benchmarking for CI #1707
I'd like to take it. Some questions about this proposal:
|
Yes, however, we might be able to get away with GH actions and
That's correct. |
I would like to work on this as part of the Community Bridge program, if it's okay. @bwplotka @jojohappy |
Hi. Is this project still available? I would like to work on it under Community Bridge mentorship if so. |
@rajibmitra @kdanW please apply for this as described here: https://docs.linuxfoundation.org/display/DOCS/Mentees cc @GiedriusS |
I have applied, thanks @bwplotka. Looking forward to it. |
I have applied as well. Thanks @bwplotka |
Interesting work started by Grafana in this field; it's worth syncing with them: https://docs.google.com/document/d/1_fVDL9EGVjWSdZTYTvE8Ey_g8FlTY_7DVGazuTAWqFc/edit (: |
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. |
https://github.com/prometheus/prombench/blob/master/funcbench/README.md is a useful tool for benchmarking specific functions, and I believe it will be easy to integrate after we move to GitHub Actions. We can easily reuse the Docker images from Prombench to implement a similar workflow.
|
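To sketch what that integration could look like: a comment-triggered GitHub Actions workflow in the spirit of Prombench's funcbench. Everything here (workflow name, trigger phrase, placeholder command) is an illustrative assumption, not the actual Prombench configuration:

```yaml
# Hypothetical workflow: a maintainer comments "/funcbench <branch> <regex>"
# on a PR and CI compares Go benchmarks between the PR head and that branch.
name: funcbench
on:
  issue_comment:
    types: [created]
jobs:
  benchmark:
    # Only react to comments that start with the trigger phrase.
    if: startsWith(github.event.comment.body, '/funcbench')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run funcbench against master
        # A reusable Docker image (as Prombench provides) could replace
        # this placeholder invocation.
        run: echo "would run: funcbench master BenchmarkQuerySelect"
```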
For micro-benchmarks between two Go bench functions, we can reuse funcbench as well. It would not be difficult to extend it for this purpose. |
Yes, but I believe this particular issue is for e2e, so not micro-benchmarks (: Let's keep the discussion focused.
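To make the micro-benchmark side of this discussion concrete: funcbench ultimately compares the output of standard Go benchmark functions across two git refs. A minimal, self-contained example of such a function (the merge routine is a hypothetical stand-in, not Thanos code):

```go
package main

import (
	"fmt"
	"testing"
)

// mergeSorted is a hypothetical stand-in for a function under benchmark;
// it merges two sorted slices, as a storage engine might merge series.
func mergeSorted(a, b []int) []int {
	out := make([]int, 0, len(a)+len(b))
	i, j := 0, 0
	for i < len(a) && j < len(b) {
		if a[i] <= b[j] {
			out = append(out, a[i])
			i++
		} else {
			out = append(out, b[j])
			j++
		}
	}
	out = append(out, a[i:]...)
	return append(out, b[j:]...)
}

// BenchmarkMergeSorted is the kind of function `go test -bench` (and hence
// a funcbench-style comparison) measures on each ref.
func BenchmarkMergeSorted(b *testing.B) {
	x := make([]int, 1000)
	y := make([]int, 1000)
	for i := range x {
		x[i], y[i] = 2*i, 2*i+1
	}
	b.ResetTimer()
	for n := 0; n < b.N; n++ {
		mergeSorted(x, y)
	}
}

func main() {
	// Normally `go test -bench=MergeSorted` drives this; testing.Benchmark
	// lets us demo it as a plain program.
	res := testing.Benchmark(BenchmarkMergeSorted)
	fmt.Println(res.N > 0, mergeSorted([]int{1, 3}, []int{2})) // true [1 2 3]
}
```

Roughly speaking, a tool like funcbench runs such benchmarks on both refs and diffs the results (e.g. with benchstat) to surface significant deltas.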
|
This issue/PR has been automatically marked as stale because it has not had recent activity. Please comment on status otherwise the issue will be closed in a week. Thank you for your contributions. |
Thanks for sharing the Grafana doc @bwplotka :) A little surprised they didn't mention https://github.com/kubernetes/test-infra/tree/master/prow, which somewhat resembles the proposed solution, but it does feel like overkill for smaller projects to self-host and maintain. We're planning for |
Hello 👋 Looks like there was no activity on this issue for last 30 days. |
Closing for now as promised, let us know if you need this to be reopened! 🤗 |
Something like prombench but for Thanos (:
We would love to have an automated benchmark that runs on certain PRs to compare resource usage between versions.
I did something like this manually with thanosbench as mentioned here, so:
Dataset generated offline and exactly the same for every test run.
For a start, Querier and Store Gateway tests:
Then provide a dashboard / Prometheus UI showing resource consumption and query latency.
Why not just run on staging/production environment?
Because this would not be quite deterministic: the dataset changes all the time (compactions, new blocks, retention), and queries are different and not isolated.
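The determinism point can be sketched in a few lines: seed the offline dataset generation so every CI run benchmarks byte-for-byte identical data. `genSamples` below is a hypothetical stand-in, not thanosbench's actual API:

```go
package main

import (
	"fmt"
	"math/rand"
)

// genSamples stands in for an offline dataset-generation step: with a fixed
// seed, every CI run sees exactly the same benchmark data, so resource-usage
// deltas between versions are attributable to the code, not the dataset.
func genSamples(seed int64, n int) []float64 {
	r := rand.New(rand.NewSource(seed))
	out := make([]float64, n)
	for i := range out {
		out[i] = r.Float64()
	}
	return out
}

func main() {
	a := genSamples(42, 4) // "old version" run
	b := genSamples(42, 4) // "new version" run
	same := true
	for i := range a {
		same = same && a[i] == b[i]
	}
	fmt.Println(same) // true: both runs exercise an identical dataset
}
```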
Help wanted! (: