
Trino workers autoscaling #12

Closed
adwiza opened this issue Feb 5, 2024 · 8 comments
@adwiza

adwiza commented Feb 5, 2024

Hi, we need to scale the Trino workers in our Trino cluster up and down. Can we use this project to solve that, or do we need an alternative approach? Could you please advise us on how to deal with Trino worker autoscaling?

@sbernauer
Member

Hi @adwiza, I personally would argue it's the responsibility of the Trino cluster itself to scale the number of workers based on, e.g., the number of running/queued queries, CPU usage, or some other metric. But I'm also happy to hear feedback if you have a different opinion.

E.g. at Stackable we will probably use Horizontal Pod Autoscaling for that; I created stackabletech/trino-operator#532 to track it.

The community Helm chart for Trino uses the mentioned approach once you enable server.autoscaling.enabled (see the chart docs).
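For the community-chart route, the relevant values might look roughly like this sketch (the exact keys and defaults are assumptions here; check the chart's values.yaml for the authoritative names):

```yaml
# Sketch of community Trino Helm chart values enabling the built-in HPA.
# Key names and values are illustrative; verify against the chart's values.yaml.
server:
  autoscaling:
    enabled: true                        # creates a HorizontalPodAutoscaler for workers
    maxReplicas: 10                      # upper bound on worker pods
    targetCPUUtilizationPercentage: 75   # scale when average CPU exceeds this
```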

@adwiza
Author

adwiza commented Feb 6, 2024

Hi @sbernauer, I hadn't mentioned that we are using the Trino operator. I guess this solution doesn't fit us, does it?

@sbernauer
Member

Sorry, did I understand correctly that you are using https://github.com/stackabletech/trino-operator? In that case you sadly need to wait for stackabletech/trino-operator#532 to get auto-scaling for the workers.

@adwiza
Author

adwiza commented Feb 6, 2024

Yes, we are using https://github.com/stackabletech/trino-operator with:
productVersion: "428"
stackableVersion: "23.11.0"

@soenkeliebau
Member

Hi @adwiza,
while we do not natively support autoscaling yet, what you can do is simply not set the replicas in your cluster definition and deploy your own HorizontalPodAutoscaler.
By omitting the "replicas" field you tell our operator not to scale the StatefulSet, which allows a HorizontalPodAutoscaler to scale it instead.
You would need to find your own metrics to scale on, though. The official Helm chart can give some inspiration here: https://github.com/trinodb/charts/blob/main/charts/trino/templates/autoscaler.yaml

https://docs.stackable.tech/home/stable/concepts/operations/#_performance
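The approach described above (omit `replicas`, bring your own HPA) could be sketched like this. The StatefulSet name is an assumption — the operator derives it from the cluster name and role group, so check with `kubectl get statefulsets` — and the metric and bounds are examples only:

```yaml
# Sketch: a HorizontalPodAutoscaler targeting the Trino worker StatefulSet.
# Assumes the operator-managed StatefulSet is named "trino-worker-default";
# verify the actual name in your cluster before applying.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: trino-worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: trino-worker-default   # assumed name, check `kubectl get sts`
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu                # pick a metric that fits your workload
        target:
          type: Utilization
          averageUtilization: 75
```

Remember that this only works if the worker role group in the TrinoCluster definition has no `replicas` field set, so the operator and the HPA do not fight over the replica count.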

@adwiza
Author

adwiza commented Feb 12, 2024

@soenkeliebau it works, thank you! I've added the HPA configuration with the needed behavior, and that's all it took.
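The HPA `behavior` stanza mentioned in the previous comment might look like this sketch (all windows and policy values are illustrative, not what adwiza actually used):

```yaml
# Sketch of an autoscaling/v2 HPA `behavior` section that damps scale-down,
# which is often desirable for Trino workers running long queries.
behavior:
  scaleDown:
    stabilizationWindowSeconds: 300   # wait 5 minutes before scaling down
    policies:
      - type: Pods
        value: 1
        periodSeconds: 60             # remove at most one worker per minute
  scaleUp:
    policies:
      - type: Percent
        value: 100
        periodSeconds: 60             # at most double the workers per minute
```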

@soenkeliebau
Member

That is great to hear, thank you for reporting back!

@NickLarsenNZ
Member

Closing in favour of stackabletech/trino-operator#532

4 participants