Geonode - PersistentVolumeClaims. #56

Open
sgavathe opened this issue Apr 12, 2022 · 8 comments
Comments

sgavathe commented Apr 12, 2022

Hello,

Unfortunately the geonode pod never really starts because of unbound PersistentVolumeClaims.

Warning FailedScheduling 6m21s default-scheduler 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
Warning FailedScheduling 4m4s (x1 over 5m4s) default-scheduler 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.

The pod status is Pending:
my-release-rabbitmq-0 0/1 Pending 0 12m
my-release-geonode-d8bbd8b5d-wb6kc 0/4 Pending 0 12m
my-release-postgresql-0 0/1 Pending 0 12m
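
For reference, a minimal way to dig into the pending pod and its claims (pod name taken from the listing above; this is a generic diagnostic sketch, not chart-specific):

kubectl describe pod my-release-geonode-d8bbd8b5d-wb6kc
kubectl get pvc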

Yann-J commented Apr 23, 2022

Hello! Sorry for the delay here.
It looks like it's not just the geonode pod that is pending, but its dependencies (postgres and rabbitmq) too.
I've found that pods stuck on pending volume claims are often due to a missing storage class; depending on your cluster, you may not have a working default class.
For geonode and its dependencies, one can be set by specifying a global.storageClass value.

To learn more, you probably need to inspect your volumes and claims to see why they're not spinning up...
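
As a sketch, the usual first checks are whether the cluster has a (default) storage class at all, and what the events on a pending claim say (claim names follow the release name, e.g. my-release-geonode):

kubectl get storageclass
kubectl describe pvc my-release-geonode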

Yann-J commented Apr 23, 2022

Oh, actually I realize that we are setting one by default (called standard), which seems to work out of the box with e.g. minikube and kind, but may not work everywhere...
The ideal behavior would be to not provide a default at all...
For now, specifying the right one is probably a good enough workaround...
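
A minimal sketch of that workaround, where <chart> stands in for whatever chart reference was originally installed and local-path is the cluster's actual class (as on k3s/k3d):

helm upgrade --install my-release <chart> --set global.storageClass=local-path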

sgavathe commented Apr 25, 2022

Thank you @Yann-J. Sorry about posting this under GitHub issues.

The pods seem to be running after I fixed the storageClass. Here is the storage class, the geonode pod's events, and the pod statuses:

kubectl get sc
NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  38h
Type    Reason     Age    From               Message
 ----    ------     ----   ----               -------
 Normal  Scheduled  7m3s   default-scheduler  Successfully assigned default/geonode-geonode-769d5cc9b8-mmqb2 to k3d-k3d-cluster-1-server-0
 Normal  Pulled     7m2s   kubelet            Container image "geonode/geoserver_data:2.20.4" already present on machine
 Normal  Created    7m2s   kubelet            Created container data-dir-conf
 Normal  Started    7m2s   kubelet            Started container data-dir-conf
 Normal  Pulled     7m2s   kubelet            Container image "jwilder/dockerize" already present on machine
 Normal  Created    7m2s   kubelet            Created container wait-db
 Normal  Started    7m1s   kubelet            Started container wait-db
 Normal  Pulled     6m36s  kubelet            Container image "geonode/geonode:3.1" already present on machine
 Normal  Created    6m36s  kubelet            Created container geonode
 Normal  Started    6m35s  kubelet            Started container geonode
 Normal  Pulled     6m35s  kubelet            Container image "geonode/geonode:3.1" already present on machine
 Normal  Created    6m35s  kubelet            Created container celery
 Normal  Started    6m35s  kubelet            Started container celery
 Normal  Pulled     6m35s  kubelet            Container image "nginx:1.19" already present on machine
 Normal  Created    6m35s  kubelet            Created container nginx
 Normal  Started    6m35s  kubelet            Started container nginx
 Normal  Pulled     6m35s  kubelet            Container image "geonode/geoserver:2.20.4" already present on machine
 Normal  Created    6m35s  kubelet            Created container geoserver
 Normal  Started    6m34s  kubelet            Started container geoserver

geonode-postgresql-0                                1/1     Running   0             9m12s
geonode-rabbitmq-0                                  1/1     Running   0             9m12s
geonode-geonode-769d5cc9b8-mmqb2                    4/4     Running   0             9m12s

And the PVs/PVCs are bound:

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                               STORAGECLASS   REASON   AGE     
task-pv-volume                             10Gi       RWO            Retain           Bound    default/task-pv-claim               manual                  17h     
pvc-7f0bfbec-16b8-424e-b2e6-c8435794a439   8Gi        RWO            Delete           Bound    default/data-geonode-rabbitmq-0     local-path              16h     
pvc-f2089a81-66bf-49ad-8a4a-b14c87d99d60   8Gi        RWO            Delete           Bound    default/data-geonode-postgresql-0   local-path              16h     
pvc-21771d6f-5f31-47e2-927c-dc5a5fd70971   10Gi       RWO            Delete           Bound    default/geonode-geonode             local-path              71m     

$ kubectl get pvc
NAME                           STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
task-pv-claim                  Bound     task-pv-volume                             10Gi       RWO            manual         17h
data-geonode-rabbitmq-0        Bound     pvc-7f0bfbec-16b8-424e-b2e6-c8435794a439   8Gi        RWO            local-path     16h
data-geonode-postgresql-0      Bound     pvc-f2089a81-66bf-49ad-8a4a-b14c87d99d60   8Gi        RWO            local-path     16h
geonode-geonode                Bound     pvc-21771d6f-5f31-47e2-927c-dc5a5fd70971   10Gi       RWO            local-path     71m
my-release-geonode             Pending                                                                        standard       60m
data-my-release-rabbitmq-0     Pending                                                                        standard       60m
data-my-release-postgresql-0   Pending                                                                        standard       60m

Describing the pod gives me the below:

kubectl describe pod geonode-geonode-f7948dfb8-6xc2z

Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From     Message
  ----     ------     ----                 ----     -------
  Warning  Unhealthy  2m6s (x56 over 57m)  kubelet  Liveness probe failed: dial tcp 10.42.0.63:8000: connect: connection refused

But at http://localhost:8080/ I get an nginx 502 Bad Gateway.

Thanks

sgavathe commented Apr 28, 2022

Thanks. I finally got it to run, but the page isn't fully loading after the redirect.

[screenshots: page after the redirect, not fully loaded]

Yann-J commented May 3, 2022

Hello @sgavathe, I can't really be of much assistance without any details (logs, config) to help me understand.
What I can mention, however, is that the GeoNode container includes a build phase at startup, so the assets take some time to become available. The logs will show you the progress of this step.
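
For example, a minimal way to follow that step, and to check the nginx side of the 502, using the deployment and container names from the output earlier in this thread:

kubectl logs deploy/geonode-geonode -c geonode -f
kubectl logs deploy/geonode-geonode -c nginx --tail=50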

Not in my control unfortunately...

@mthienpont

Hi @sgavathe, do you remember how you got past the 502 Bad Gateway?

sgavathe commented Jul 22, 2023

Hello @mthienpont, I never really got GeoNode into our system in the end. We are only using vanilla GeoServer, as provided by the main geoserver/docker image. All the other Helm charts proved to have something missing in them. But since I posted this there may have been improvements; try their latest release.

ridoo commented Nov 29, 2023

@mthienpont if you are still interested in deploying GeoNode on K8s, you may want to look at the geonode-k8s Helm chart [1].

@Yann-J For what reason did the GeoNode chart get archived/removed? I would be truly interested in what led you to that decision.

Footnotes

[1] https://github.com/zalf-rdm/geonode-k8s
