[documentation] Document deployment on existing AWS EKS cluster #942
Comments
@viniciusdc would you mind taking a look at this to see if I missed anything? And could you share any …

Now that I think of it, this is most likely caused by the fact that this existing web app already has an …

Hi @iameskild, the only qhub-config that I have is for a GCP deployment. The only difference from yours (besides the provider) is that we needed to set the load-balancer configuration to an internal one, but that's because of some security policies …

Hey @viniciusdc, how did you provision the DNS? From reading through the code base, it appears that when deploying to a … And that's what I see when I deploy. This explains why I can't access the cluster.

I was able to get around this by updating the DNS record manually in the CloudFlare portal 👍

You can work around that by providing the DNS records manually in the namespace, right? By providing the certificate's secrets... (I am not sure)

I noticed that a few minutes after posting this 😆 Thanks @viniciusdc. In the future, it might be nice if users with existing clusters can have their DNS records auto-provisioned as well. Some changes to this part of the code could include a check for which cloud provider they are using and …
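For anyone following the same workaround, the manual record generally points the QHub domain at the load balancer the cluster did provision. A sketch of how to find that address, assuming the ingress is exposed as a Kubernetes `LoadBalancer` service (the `dev` namespace is a guess):

```shell
# Grab the load balancer hostname assigned to the ingress service
kubectl get svc -n dev -o jsonpath='{.items[?(@.spec.type=="LoadBalancer")].status.loadBalancer.ingress[0].hostname}'

# Then, in the CloudFlare portal, create a CNAME from the QHub domain to that hostname.
```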
Related to #935.
To test and document how to deploy to an existing ("local") EKS cluster, I ran through the following steps:
Use (create) a base EKS cluster

To get a functioning EKS cluster up and running quickly, I created a cluster and web app based on this tutorial. This cluster runs in its own VPC with 3 subnets (each in its own AZ), and there are no node groups. A scenario like this seemed like a good starting point from the perspective of an incoming user.
Once this EKS cluster is up, there are still a handful of steps that seem to be required before QHub can be deployed to it:

- create `general`, `user`, and `worker` node groups (one way to script this is sketched below)
- create a `Node IAM Role` with specific permissions (copied from an existing role from a previous QHub deployment)

I'm sure there are scenarios where node groups already exist and can be repurposed, but more broadly it would be nice to make this process a lot more streamlined. Did I overcomplicate this, or are there other ways of handling the QHub deployment without having to add these node groups explicitly?
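For reference, one way to script the node-group step, assuming `eksctl` is available; the cluster name, instance type, and group sizes below are placeholders, not values from this issue:

```shell
# Create the three node groups QHub expects on the existing cluster
for ng in general user worker; do
  eksctl create nodegroup \
    --cluster my-existing-cluster \
    --name "$ng" \
    --node-type m5.xlarge \
    --nodes 1 \
    --nodes-min 1 \
    --nodes-max 5
done
```

When `eksctl` creates a managed node group it also provisions a node IAM role with the standard EKS worker policies, which may cover part of the `Node IAM Role` step above.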
Deploy QHub to Existing EKS Cluster
Ensure that you are using the existing cluster's `kubectl` context. Initialize in the usual manner:
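A sketch of what those two steps might look like; the region, cluster, project, and domain names are placeholders, and the exact `qhub init` flags may differ by version:

```shell
# Point kubectl at the existing EKS cluster
aws eks update-kubeconfig --region us-west-2 --name my-existing-cluster
kubectl config current-context

# Generate the initial qhub-config.yaml (flags illustrative; see `qhub init --help`)
qhub init aws --project myproject --domain qhub.example.com
```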
Then update the `qhub-config.yaml` file. The important keys to update are:

- replace `provider: aws` with `provider: local`
- replace the `amazon_web_services` section with `local`
- set `node_selector` and `kube_context` appropriately

Once updated, deploy in the usual manner:
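For example, assuming the standard QHub CLI:

```shell
qhub deploy -c qhub-config.yaml
```

And a minimal sketch of what the edited section of `qhub-config.yaml` might look like; the exact `local` schema (key names and nesting) is an assumption here, and all values are placeholders:

```yaml
provider: local  # was: provider: aws
local:           # replaces the amazon_web_services section
  kube_context: my-eks-context   # the kubectl context of the existing cluster
  node_selectors:                # key name/nesting assumed; check the config schema
    general:
      key: eks.amazonaws.com/nodegroup
      value: general
```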
The deployment completes successfully and all the pods appear to be running (alongside the existing pods from the web app). The issue is that I can't access the cluster from the browser.

When examining the deployment output more closely, you can see that the cluster doesn't have an IP address.
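One way to confirm this from the cluster side, assuming QHub's ingress is exposed as a Kubernetes `LoadBalancer` service:

```shell
# An EXTERNAL-IP stuck at <pending> (or missing) means no address was assigned
kubectl get svc --all-namespaces -o wide | grep -i loadbalancer
```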
(Attachment: `qhub-config.yaml`)