Failed to start control plane with btrfs on Arch-based Linux; already tried issue #1416 (#2112)
Can you minimize the reproducer? Does this fail with a single node? (Aside: the extraMounts are per node, so if necessary they'd need to be on each node.) Can you run create cluster with --retain and then upload the results of
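The request above is truncated; presumably it refers to exporting the node logs. A minimal sketch, assuming a single-node config saved as single-node.yaml (the file name is an assumption):

```sh
# Keep the node containers around after a failed bring-up so their logs survive.
kind create cluster --retain --config single-node.yaml

# Dump logs from all nodes (kubelet, containerd, serial console, etc.)
# into ./kind-logs for attaching to the issue.
kind export logs ./kind-logs
```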
Yes, with a single node it fails with the same error:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /dev/mapper
    containerPath: /dev/mapper
```

Here are the logs for the single node.
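Picking up the aside above about extraMounts being per node, a sketch of what the original 4-node layout would look like with the /dev/mapper mount repeated on every node (the file name is arbitrary):

```sh
# extraMounts apply per node, not cluster-wide, so each node entry
# needs its own copy of the mount.
cat > kind-4node.yaml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /dev/mapper
    containerPath: /dev/mapper
- role: worker
  extraMounts:
  - hostPath: /dev/mapper
    containerPath: /dev/mapper
- role: worker
  extraMounts:
  - hostPath: /dev/mapper
    containerPath: /dev/mapper
- role: worker
  extraMounts:
  - hostPath: /dev/mapper
    containerPath: /dev/mapper
EOF

kind create cluster --config kind-4node.yaml
```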
Thanks for the quick response, but the installed version of kind is already v0.10.0. If that is the case, is this due to some misconfiguration on my system?
Yes, the version you have should have fixed #2014. It's not clear why you still have a related issue. cgroups v2 is poorly tested in Kubernetes (we do have our own KIND CI on Fedora, though) and not really prioritized. I'm not sure when I'd have a chance to look at this further, but it won't be today due to the timing (https://www.kubernetes.dev/resources/release/#tldr).
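Since cgroups v2 keeps coming up in this thread, a quick sketch of how to confirm what the host and Docker are actually using (standard commands; the template fields are worth checking against your Docker version):

```sh
# Prints "cgroup2fs" on a cgroup v2 (unified hierarchy) host, "tmpfs" on cgroup v1.
stat -fc %T /sys/fs/cgroup/

# Docker reports the same information under "Cgroup Driver" / "Cgroup Version".
docker info --format 'driver={{.CgroupDriver}} cgroup-version={{.CgroupVersion}}'
```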
A different issue reminded me of this one: kubernetes/kubernetes#100389
This issue ceased to exist for me after upgrading to the latest mainline kernel, 5.12.0-rc4-1-mainline. I'm also running Arch Linux on btrfs and was getting similar errors: the control plane didn't start. Cheers!
Thanks @jarhat. @this-is-r-gaurav, can you try upgrading your kernel?
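For reference, a minimal sketch of checking and upgrading the kernel on an Arch-based system, assuming the stock pacman-managed kernel rather than a custom build:

```sh
# Show the currently running kernel version.
uname -r

# A full system upgrade pulls in the newest packaged kernel;
# a reboot is required before the new kernel is actually in use.
sudo pacman -Syu
sudo reboot
```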
@BenTheElder Sure, let me try upgrading the kernel and get back to you.
@BenTheElder Yes, with
Thanks, I'd love to know what this was, but I don't think I can prioritize digging into this kernel myself currently.
I guess this is a kernel issue; we can re-open later or file a new issue if there's interest. I don't think we're going to have time to play around with the kernels ourselves, especially given there's a known solution.
I'm having the same problem on extfs and a higher kernel version.

Kind version:

Kubectl version:
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}

Docker Info:
Client:
Context: default
Debug Mode: false
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
buildx: Build with BuildKit (Docker Inc., v0.5.1-tp-docker)
Server:
Containers: 1
Running: 1
Paused: 0
Stopped: 0
Images: 15
Server Version: 20.10.7
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: false
userxattr: false
Logging Driver: json-file
Cgroup Driver: systemd
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 36cc874494a56a253cd181a1a685b44b58a2e34a.m
runc version: v1.0.1-0-g4144b638
init version: de40ad0
Security Options:
apparmor
seccomp
Profile: default
cgroupns
Kernel Version: 5.13.4-1-MANJARO
Operating System: Manjaro Linux
OSType: linux
Architecture: x86_64
CPUs: 6
Total Memory: 31.36GiB
Name: ronin
ID: NJ45:NXDY:GEZP:AXMD:QWLH:RV6P:TLYX:D5SP:HEHD:XWRF:3URU:JMDF
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: true
Insecure Registries:
127.0.0.0/8
 Live Restore Enabled: false

I've tried the
It may be related to an older version of Kubernetes itself. This is failing with:
ERROR: failed to load image: command "docker exec --privileged -i k8s-local-worker2 ctr --namespace=k8s.io images import -" failed with error: exit status 137
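For context, the failing docker exec ... ctr images import step is what kind runs internally when loading an image into a node, and exit status 137 is 128+9 (SIGKILL), i.e. the import process was killed rather than failing on its own. At the CLI level the equivalent is roughly the following; the image name is a placeholder and the cluster name is inferred from the node name above:

```sh
# Loads a locally built/pulled image into every node of the "k8s-local" cluster.
kind load docker-image nginx:1.21 --name k8s-local
```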
Ah, so I had to do both: add ... Even when downgrading the kernel back to 5.10, this works, but it means I can't really test multiple versions of Kubernetes anymore.
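As an aside on testing multiple Kubernetes versions: with kind this comes down to pinning the node image. The tag below is only an example and should match an image published for the kind release in use:

```sh
# Create a cluster running a specific Kubernetes version via the node image tag.
kind create cluster --image kindest/node:v1.21.1
```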
@esatterwhite please file a new issue; the OP was on btrfs and your issue is not clearly linked. Old closed issues are not closely monitored.

We can automate adding that mount in the extfs case as well. I've not found anything definitive about /dev/mapper usage with extfs, but it seems well known to be used for btrfs and zfs.

I would not be surprised if using less common filesystems with Kubernetes causes issues even when the mounter is present. Kubernetes, cAdvisor, CRI (containerd/cri-o), etc. tend to be tested on one or two distros with their defaults; kind can only do so much about this. You might consider using another partition for docker with, say, ext4.
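One way to act on the "another partition for docker" suggestion is to point Docker's data-root at an ext4-backed mount. This is only a sketch: the mount point below is an assumption, and any existing /etc/docker/daemon.json settings should be merged rather than overwritten:

```sh
# Assumed ext4-backed mount point for Docker's storage.
sudo mkdir -p /mnt/ext4-data/docker

# Point the daemon's data-root at it (merge with existing daemon.json if present).
echo '{ "data-root": "/mnt/ext4-data/docker" }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker

# Confirm the storage driver and the new root directory.
docker info --format '{{.Driver}} on {{.DockerRootDir}}'
```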
@BenTheElder Can do, I think
I'm facing exactly the same issue on btrfs. The differences are: 1. I'm using a Fedora 34 workstation with kernel version 5.13.6 in a virtual machine. 2. I'm using Podman as the backend.
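For anyone reproducing with Podman: kind selects the container backend through an environment variable (experimental at the time of this thread):

```sh
# Use the Podman provider instead of Docker for this invocation.
KIND_EXPERIMENTAL_PROVIDER=podman kind create cluster
```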
What happened: kind create cluster failed to start the control plane on an Arch-based Linux with the config mentioned below. I think the extraMounts are now not even required to be mentioned in the config.

What you expected to happen: It should create a cluster with 4 nodes: 3 workers + 1 control plane.

What else you want us to know: Logs of kind create cluster are in the collapsed "Show Logs" section. When I ran the command that failed in the above logs manually, I got the following logs:

Environment:
- kind version: kind v0.10.0 go1.15.7 linux/amd64
- kubectl version: Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:12:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
- docker info:
- OS (/etc/os-release):
- Kernel Version: 5.11.2-128-tkg-bmq
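A sketch of gathering the environment fields above in one pass:

```sh
# Collect the information requested by the issue template.
kind version
kubectl version --client
docker info
cat /etc/os-release
uname -r
```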