[K8s operator] Wait for available replica before watching services #523
What this PR does / why we need it:
When the k8s operator is deployed with the installer tool (or manually), there is currently a race condition between provisioning the UI and starting the service watcher. If the UI endpoint is not available yet (for example, because the cluster still needs to pull the UI image), the service watcher pushes discovered services with a failure result (the service is ready but the pod is still being provisioned). The user can re-label the services so they are picked up again automatically, but this is undesired extra work.
With this PR, we wait for an available replica (retrying every 5 seconds, up to 10 times) and only start the service watcher once the replica is up and running.
Channel events are now handled inside a task, so incoming events can run in parallel and be provisioned without waiting for the previous deployment, which might itself be waiting for an available replica.
Which issue(s) this PR fixes: #521