
Add kubernetes with kind example #596

Closed · wants to merge 1 commit
Conversation

@afbjorklund (Member) commented Jan 26, 2022

kind (kubernetes in docker)

https://kind.sigs.k8s.io/

Sets up an instance with docker, kind and kubectl.

Then uses "kind create cluster" to start a container.

Requested by some people on the Kubernetes slack.

Mostly based on docker.yaml and the upstream docs.

Signed-off-by: Anders F Björklund <anders.f.bjorklund@gmail.com>
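
For reference, a rough sketch of the shape such an example could take, based on docker.yaml and the upstream kind install docs. This is not the actual file from this PR; the image locations, tool versions, and download URLs below are illustrative assumptions:

# Hypothetical sketch only, not the file proposed in this PR.
# Image URLs, tool versions, and the docker group handling are assumptions.
images:
- location: "https://cloud-images.ubuntu.com/impish/current/impish-server-cloudimg-amd64.img"
  arch: "x86_64"
- location: "https://cloud-images.ubuntu.com/impish/current/impish-server-cloudimg-arm64.img"
  arch: "aarch64"
provision:
- mode: system
  script: |
    #!/bin/bash
    set -eux -o pipefail
    command -v docker >/dev/null 2>&1 && exit 0
    # rootful docker; adding the default user to the "docker" group is omitted here
    curl -fsSL https://get.docker.com | sh
- mode: system
  script: |
    #!/bin/bash
    set -eux -o pipefail
    command -v kind >/dev/null 2>&1 && exit 0
    curl -fsSLo /usr/local/bin/kind "https://kind.sigs.k8s.io/dl/v0.11.1/kind-linux-$(dpkg --print-architecture)"
    chmod +x /usr/local/bin/kind
    curl -fsSLo /usr/local/bin/kubectl "https://dl.k8s.io/release/v1.23.1/bin/linux/$(dpkg --print-architecture)/kubectl"
    chmod +x /usr/local/bin/kubectl
- mode: user
  script: |
    #!/bin/bash
    set -eux -o pipefail
    kind get clusters | grep -q '^kind$' || kind create cluster

The real example presumably also deals with docker group membership (or rootless docker) and with exposing the kubeconfig to the host; those details are left out of this sketch.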
@afbjorklund (Member, Author) commented Jan 26, 2022

Maybe not obvious, but there is containerd in there as well:

$ kubectl get nodes -o wide
NAME                 STATUS   ROLES                  AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE       KERNEL-VERSION      CONTAINER-RUNTIME
kind-control-plane   Ready    control-plane,master   20h   v1.23.1   172.18.0.2    <none>        Ubuntu 21.10   5.13.0-27-generic   containerd://1.5.9
$ docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED        STATUS          PORTS                       NAMES
69ec00dd23b4   kindest/node:v1.23.1   "/usr/local/bin/entr…"   20 hours ago   Up 10 minutes   127.0.0.1:40169->6443/tcp   kind-control-plane

So the actual Kubernetes container runtime is "containerd".

https://kubernetes.io/docs/setup/production-environment/container-runtimes/
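
A quick way to confirm this, assuming kubectl (on the host or in the instance) is pointed at the kind cluster, is to read the runtime straight from the node status:

$ kubectl get nodes -o jsonpath='{.items[0].status.nodeInfo.containerRuntimeVersion}'
containerd://1.5.9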

@afbjorklund (Member, Author) commented Jan 26, 2022

It can take a while to pull down the "node" image (583M .gz), so I didn't add a probe for it - the cluster will be up when ready.

REPOSITORY     TAG                  IMAGE ID       CREATED       SIZE
kindest/node   v1.23.1              49b8c1a84228   2 weeks ago   1.46GB
kindest/base   v20220106-a11d619c   a93fabfa5777   2 weeks ago   283MB
├─<missing> Virtual Size: 77.4 MB
│ └─64c59b1065b1 Virtual Size: 77.4 MB Tags: ubuntu:impish
└─<missing> Virtual Size: 282.8 MB
  └─<missing> Virtual Size: 282.8 MB
    └─<missing> Virtual Size: 282.8 MB
      └─<missing> Virtual Size: 282.8 MB Tags: kindest/base:v20220106-a11d619c
        └─49b8c1a84228 Virtual Size: 1.5 GB Tags: kindest/node:v1.23.1

(the "base" image is squashed, so it doesn't share any layers with ubuntu:21.10 anymore)

IMAGE          CREATED       CREATED BY                                      SIZE      COMMENT
49b8c1a84228   2 weeks ago   infinity                                        1.18GB    
<missing>      2 weeks ago   ENTRYPOINT ["/usr/local/bin/entrypoint" "/sb…   0B        buildkit.dockerfile.v0
<missing>      2 weeks ago   STOPSIGNAL SIGRTMIN+3                           0B        buildkit.dockerfile.v0
<missing>      2 weeks ago   ENV container=docker                            0B        buildkit.dockerfile.v0
<missing>      2 weeks ago   COPY / / # buildkit                             283MB     buildkit.dockerfile.v0

All of the Kubernetes images are preloaded, which accounts for the large size of the "node" image:

anders@lima-kind:~$ docker exec kind-control-plane crictl images
IMAGE                                      TAG                  IMAGE ID            SIZE
docker.io/kindest/kindnetd                 v20211122-a2c10462   ba113d2047d43       40.9MB
docker.io/rancher/local-path-provisioner   v0.0.14              e422121c9c5f9       13.4MB
k8s.gcr.io/build-image/debian-base         buster-v1.7.2        19bad6b08adae       21.1MB
k8s.gcr.io/coredns/coredns                 v1.8.6               a4ca41631cc7a       13.6MB
k8s.gcr.io/etcd                            3.5.1-0              25f8c7f3da61c       98.9MB
k8s.gcr.io/kube-apiserver                  v1.23.1              44863df28d14d       76.7MB
k8s.gcr.io/kube-controller-manager         v1.23.1              31df4293ae53c       65.2MB
k8s.gcr.io/kube-proxy                      v1.23.1              0f8e0fd173bca       113MB
k8s.gcr.io/kube-scheduler                  v1.23.1              d94ee18a8a4ed       51.9MB
k8s.gcr.io/pause                           3.6                  6270bb605e12e       302kB
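
Since the example doesn't ship a readiness probe, a manual way to block until the node is actually usable after the image pull would be something like:

$ kubectl wait --for=condition=Ready node --all --timeout=10m
node/kind-control-plane condition met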

@AkihiroSuda (Member) commented:
I'm not sure we want this.
We already have k8s.yaml and k3s.yaml to run Kubernetes.
If users really need to use kind, they can install kind into docker.yaml or podman.yaml manually.

@jandubois (Member) commented:
> I'm not sure we want this.

I agree. I expected this to work:

$ brew install kind
$ limactl start examples/docker.yaml
$ kind create cluster
ERROR: failed to create cluster: running kind with rootless provider requires cgroup v2, see https://kind.sigs.k8s.io/docs/user/rootless/

So shouldn't we fix that instead of creating another instance?
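
For context, one quick check is to ask the daemon inside the instance which cgroup version it sees (e.g. from limactl shell docker, or with the docker CLI context pointed at the instance):

$ docker info --format '{{.CgroupVersion}}'

The rootless provider needs this to report 2 (with the controllers delegated), per the page linked in the error message.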

@afbjorklund (Member, Author) commented Jan 27, 2022

> I'm not sure we want this.

I think it was more that they wanted it: people want to avoid running Docker Desktop but still run kind...

The alternative (for kind) would be Podman Desktop, but it hasn't been released yet (4.0 is expected during February).

But people can run kubeadm (k8s.yaml).

> So shouldn't we fix that instead of creating another instance?

I think that work is ongoing (for "rootless"); it is documented at https://kind.sigs.k8s.io/docs/user/rootless/

I'm not sure what modifications to the lima instances are required in order to use it, nor am I very interested.

I will continue with minikube instead.


There is also a small project to support nerdctl as a provider; once completed, that could be an option for lima as well.

If all you want is to run Kubernetes (not kind), then there are already both Docker Desktop and Rancher Desktop (k3s) today.

Eventually there will also be a new minikube with M1 support, but at the moment the driver and ISO are struggling.
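
For reference, the existing lima examples mentioned earlier in the thread (k8s.yaml and k3s.yaml) start the same way as docker.yaml does:

$ limactl start examples/k8s.yaml   # kubeadm-based cluster
$ limactl start examples/k3s.yaml   # k3s-based cluster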
