Is your feature request related to a problem? Please describe.
Currently, OSB only provides p50, p90, p99, p99.9, p99.99, and p100 for latency at the end of a workload. But when adding new features to OpenSearch that might affect performance, we often also want to see other percentiles, like p0, p10, and p25.
To get more granular percentiles, you currently have to publish metrics to a separate datastore and compute the percentiles yourself later, which adds a lot of complexity. If the user could provide a list of percentiles they'd like to see when running a workload through the CLI, and those percentiles were printed afterwards alongside the existing ones, this use case would be much simpler.
Describe the solution you'd like
We could add a command-line flag, like --latency-percentiles, that takes a comma-separated list of additional percentiles the user wants. At the end of the run, OSB would display these percentiles alongside the default ones.
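As a rough sketch of how this could be wired up (the flag name comes from this proposal, but the parsing helper and the append-to-defaults merge are assumptions, not existing OSB code):

```python
import argparse

# Default latency percentiles OSB reports today.
DEFAULT_PERCENTILES = [50.0, 90.0, 99.0, 99.9, 99.99, 100.0]

def parse_percentiles(raw):
    """Parse a comma-separated percentile list such as "0,10,25"."""
    values = [float(p) for p in raw.split(",")]
    for p in values:
        if not 0 <= p <= 100:
            raise argparse.ArgumentTypeError(f"percentile out of range: {p}")
    return values

parser = argparse.ArgumentParser()
parser.add_argument("--latency-percentiles", type=parse_percentiles, default=[])

# Merge the user-supplied percentiles with the defaults and de-duplicate.
args = parser.parse_args(["--latency-percentiles=0,10,25"])
percentiles = sorted(set(DEFAULT_PERCENTILES) | set(args.latency_percentiles))
print(percentiles)  # [0.0, 10.0, 25.0, 50.0, 90.0, 99.0, 99.9, 99.99, 100.0]
```

Validation at parse time keeps bad input (e.g. a percentile above 100) from surfacing only at the end of a long benchmark run.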
Describe alternatives you've considered
In #199, a more general solution was proposed, where users could define their own metrics. The solution proposed here would be simpler, but it only covers the case where users want additional percentiles for existing metrics.
We could also just add p0, p10, and p25 by default, but these might not be useful for everyone, and some users might want different percentiles than these.
Additional context
This would be a simpler sub-issue of #199, which is not currently being worked on.
It's also similar to #261, which requests more percentiles for throughput metrics. I don't think this is currently being worked on.
These can be very useful when measuring cache performance, where the lower percentiles can show greater improvements once new cache acceleration features are introduced.
Thanks for bringing attention to this @peteralfonsi. This will definitely be helpful. It might be better to have --latency-percentiles just override the default percentiles instead of appending to them. For example, if users provide --latency-percentiles=0,10,25,50, the defaults would be overridden and only p0, p10, p25, and p50 would show, instead of p0, p10, p25, p50, p90, p99, p99.9, p99.99, and p100. Let me know your thoughts.
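Under these override semantics, the selection logic could be as simple as the following sketch (the function name is hypothetical):

```python
# Default latency percentiles OSB reports today.
DEFAULT_PERCENTILES = [50, 90, 99, 99.9, 99.99, 100]

def effective_percentiles(user_supplied=None):
    # Override semantics: a user-supplied list replaces the defaults
    # entirely, rather than being appended to them.
    return user_supplied if user_supplied else DEFAULT_PERCENTILES

print(effective_percentiles([0, 10, 25, 50]))  # [0, 10, 25, 50]
print(effective_percentiles())                 # the six defaults
```

The trade-off versus appending is that users who still want p90/p99 must list them explicitly, but the output stays compact and fully under the user's control.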
Feel free to cut an implementation for this! Once this is addressed, we could use the same approach for the throughput metrics in #261.