
Run "Kubernetes Conformance Test" across all providers #833

Open
surajssd opened this issue Apr 17, 2023 · 5 comments

@surajssd (Member)

We should run conformance tests with each release and post the results for each provider. This helps us identify gaps and work towards closing them.

More information on the conformance tests: https://github.com/cncf/k8s-conformance


I think the first task is to identify which parts of the conformance test suite we should run. Once identified, we should create a script that can be run against each provider.
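
A minimal sketch of such a script, assuming one kubeconfig per provider and reusing the run.sh entry point quoted later in this thread (the provider list and per-provider kubeconfig paths below are hypothetical placeholders):

#!/usr/bin/env bash
# Hypothetical wrapper: run the conformance suite once per provider.
set -euo pipefail

PROVIDERS="aws azure ibmcloud libvirt"        # placeholder list of providers

for provider in ${PROVIDERS}; do
    echo "=== Running conformance against ${provider} ==="
    export KUBECONFIG="${HOME}/.kube/config-${provider}"   # hypothetical per-provider kubeconfig
    export RUNTIME_CLASS=kata-remote
    export WAIT_TIME="10080"
    ./integration/kubernetes/e2e_conformance/run.sh | tee "conformance-${provider}.log"
done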

@surajssd (Member Author)

Here is how to run it for a k8s cluster with CAA installed on it:

# Clone the kata-containers tests repo, which wraps the upstream e2e conformance suite
git clone https://github.com/kata-containers/tests
cd tests/

# Point the kata webhook at the peer-pods runtime class
export RUNTIME_CLASS=kata-remote
kubectl create cm kata-webhook --from-literal runtime_class=$RUNTIME_CLASS

# Kubeconfig of the target cluster and a generous timeout for the suite
export KUBECONFIG=$HOME/.kube/config
export WAIT_TIME="10080"

./integration/kubernetes/e2e_conformance/run.sh
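
As a sanity check before kicking off the suite (not part of the original steps), it may be worth confirming that the runtime class and the webhook configmap are actually in place:

kubectl get runtimeclass kata-remote
kubectl get configmap kata-webhook -o yaml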

@surajssd (Member Author)

In my first run, I had the following failures:

  • [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]
  • [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
  • [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
  • [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  • [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  • [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  • [sig-node] Kubelet when scheduling an agnhost Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance]
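
For anyone reproducing this, assuming run.sh drives sonobuoy under the hood (an assumption, not confirmed in this thread), the per-test output behind these failures can be pulled from the results tarball with sonobuoy's own CLI:

results=$(sonobuoy retrieve)    # copy the results tarball out of the cluster
sonobuoy results "${results}"   # summary report, including the names of failed tests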

@surajssd (Member Author) commented May 30, 2023

In the second run I skipped the tests above and ran the suite again; this time the following additional tests failed:

  • [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  • [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]
  • [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  • [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
  • [sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  • [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]

Here is the summary of the above run: https://gist.github.com/surajssd/b8263d82086ca18cb0b6dc9a416f678a


Steps to recreate the above run:

export RUNTIME_CLASS=kata-remote
kubectl create cm kata-webhook --from-literal runtime_class=$RUNTIME_CLASS

export KUBECONFIG=$HOME/.kube/config
export WAIT_TIME="10080"
export E2E_PARALLEL=true    # run the e2e tests in parallel

./integration/kubernetes/e2e_conformance/run.sh
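
The steps above do not show how the first-run failures were excluded; one possible way, again assuming sonobuoy is the underlying driver and bypassing run.sh entirely, is to pass an explicit skip regex (the two test names below are just examples taken from the first failure list):

SKIP='Pods should run through the lifecycle of Pods and PodStatus|Projected downwardAPI should update labels on modification'
sonobuoy run --mode=certified-conformance --e2e-skip="${SKIP}" --wait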

@surajssd (Member Author)

Each time I am seeing different results! I am not sure what is wrong; I think we should dissect each failing test and look closely at which ones should be skipped for peer pods.
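
One possible way to dissect an individual flaky test, again assuming sonobuoy can be driven directly, is to focus on a single test name and repeat it a few times against one provider to see whether the failure is deterministic:

FOCUS='Kubelet when scheduling an agnhost Pod with hostAliases should write entries to /etc/hosts'
for i in 1 2 3; do
    sonobuoy run --e2e-focus="${FOCUS}" --wait
    sonobuoy results "$(sonobuoy retrieve)"
    sonobuoy delete --wait      # clean up before the next iteration
done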
