ci: add podvm-smoketest workflow #2247

Merged · 1 commit · Jan 17, 2025
.github/workflows/podvm_smoketest.yaml (160 additions, 0 deletions)

@@ -0,0 +1,160 @@
name: smoke test

on:
  pull_request:

jobs:
  podvm-mkosi:
    # We're pinning the runner to 22.04 b/c libvirt struggles with the
    # OVMF_CODE_4M firmware that is default on 24.04.
    runs-on: 'ubuntu-22.04'

    defaults:
      run:
        working-directory: src/cloud-api-adaptor/podvm-mkosi

    steps:
      - uses: actions/checkout@v4

      # Required by rootless mkosi on Ubuntu 24.04
      # - name: Un-restrict user namespaces
      #   run: sudo sysctl -w kernel.apparmor_restrict_unprivileged_userns=0

Comment on lines +19 to +22

Member: Is this just a reference for developers trying to do this manually, or potentially for the future, when we can migrate to 24.04 after the OVMF firmware issues are resolved?

Collaborator (Author): The latter, because we'd probably forget about this otherwise.

      - name: Install build dependencies
        run: |
          sudo apt-get update
          sudo apt-get install -y \
            genisoimage \
            qemu-utils \
            socat \
            virt-manager
          sudo snap install yq

      - name: Read properties from versions.yaml
        working-directory: src/cloud-api-adaptor
        run: |
          {
            echo "MKOSI_VERSION=$(yq -e '.tools.mkosi' versions.yaml)";
            echo "ORAS_VERSION=$(yq -e '.tools.oras' versions.yaml)";
            echo "KATA_REF=$(yq -e '.oci.kata-containers.reference' versions.yaml)";
            echo "KATA_REG=$(yq -e '.oci.kata-containers.registry' versions.yaml)";
          } >> "$GITHUB_ENV"
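The yq queries in this step assume a versions.yaml shaped roughly like the following fragment. This is a hypothetical illustration: the key paths match the queries above, but the values are placeholders, not the real pins in src/cloud-api-adaptor/versions.yaml.

```yaml
# Hypothetical versions.yaml fragment; values are placeholders.
tools:
  mkosi: v24
  oras: 1.2.0
oci:
  kata-containers:
    registry: quay.io/example/kata-artifacts
    reference: latest
```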
      - uses: oras-project/setup-oras@v1
        with:
          version: ${{ env.ORAS_VERSION }}
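The `{ …; } >> "$GITHUB_ENV"` grouping in the step above is the standard Actions pattern for exporting variables across steps: each `KEY=value` line appended to the file becomes an environment variable in subsequent steps (which is how `env.ORAS_VERSION` is available to setup-oras). A minimal sketch outside of Actions, with `GITHUB_ENV` pointed at a scratch file and placeholder versions:

```shell
# Sketch of the $GITHUB_ENV pattern: outside of a workflow we point it at a
# temp file; the appended KEY=value lines are what later steps would see as
# environment variables. The versions are placeholders, not real pins.
GITHUB_ENV="$(mktemp)"
{
  echo "MKOSI_VERSION=v24";
  echo "ORAS_VERSION=1.2.0";
} >> "$GITHUB_ENV"
cat "$GITHUB_ENV"
```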

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Build binaries
        run: make binaries

      - name: Disable TLS for agent-protocol-forwarder
        run: |
          mkdir -p ./resources/binaries-tree/etc/default
          echo "TLS_OPTIONS=-disable-tls" > ./resources/binaries-tree/etc/default/agent-protocol-forwarder
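The file written above follows the `/etc/default` convention: a plain `KEY=value` env file that a service's init script or an `EnvironmentFile=` directive can pull in. The assumption here (not stated in the diff) is that the agent-protocol-forwarder unit sources this file and passes `TLS_OPTIONS` on its command line. A quick sketch of how such an override is consumed:

```shell
# Sketch: write the override and source it the way an init script or a
# systemd EnvironmentFile= would. The path and flag are from the step above;
# the final echo is illustrative only, not the real service invocation.
mkdir -p ./resources/binaries-tree/etc/default
echo "TLS_OPTIONS=-disable-tls" > ./resources/binaries-tree/etc/default/agent-protocol-forwarder
. ./resources/binaries-tree/etc/default/agent-protocol-forwarder
echo "agent-protocol-forwarder would start with: ${TLS_OPTIONS}"
```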
Comment on lines +53 to +56

Member: Do you think this is a candidate to also enable in the "developer mode" I'm thinking of in #2227? I'm trying to work out whether there would be value in reusing the current podvm_mkosi workflow to help with this image-build section, but we'd need to add support for that. This isn't a blocker for this PR, just thinking about it whilst reviewing.

Collaborator (Author): Yeah, although I'm not convinced we require a build-time flag for TLS. An empty TLS config in daemon.json should just indicate that we're not using TLS.

      - name: Build image
        run: make image

      - name: Install kata-agent-ctl
        run: |
          oras pull "${KATA_REG}/agent-ctl:${KATA_REF}-x86_64"
          tar xf kata-static-agent-ctl.tar.xz
          cp opt/kata/bin/kata-agent-ctl /usr/local/bin

      # TODO: generate the cloud-init iso from code
      - name: Create cloud-init iso
        run: |
          mkdir cloud-init
          touch cloud-init/meta-data
          cat <<EOF > cloud-init/user-data
          #cloud-config
          write_files:
            - path: /run/peerpod/daemon.json
              content: |
                {
                  "pod-network": {
                    "podip": "10.244.1.21/24",
                    "pod-hw-addr": "32:b9:59:6b:f0:d5",
                    "interface": "eth0",
                    "worker-node-ip": "10.224.0.5/16",
                    "tunnel-type": "vxlan",
                    "routes": [
                      {
                        "dst": "0.0.0.0/0",
                        "gw": "10.244.1.1",
                        "dev": "eth0",
                        "protocol": "boot"
                      },
                      {
                        "dst": "10.244.1.0/24",
                        "gw": "",
                        "dev": "eth0",
                        "protocol": "kernel",
                        "scope": "link"
                      }
                    ],
                    "neighbors": null,
                    "mtu": 1500,
                    "index": 2,
                    "vxlan-port": 8472,
                    "vxlan-id": 555002,
                    "dedicated": false
                  },
                  "pod-namespace": "default",
                  "pod-name": "smoketest"
                }
          EOF
          genisoimage -output cloud-init.iso -volid cidata -joliet -rock cloud-init/user-data cloud-init/meta-data
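Until the TODO above is addressed, the daemon.json payload is hand-written, so a typo would only surface after the VM boots. A small fail-fast sketch that could run before genisoimage (assumes python3, which is present on the ubuntu-22.04 runner image; the JSON below is an abridged placeholder, not the full pod-network config):

```shell
# Validate a hand-written daemon.json before baking it into the iso.
# The payload here is an abridged stand-in for the full config above.
cat > daemon.json <<'EOF'
{
  "pod-network": {"podip": "10.244.1.21/24", "tunnel-type": "vxlan"},
  "pod-namespace": "default",
  "pod-name": "smoketest"
}
EOF
python3 -m json.tool daemon.json > /dev/null && echo "daemon.json: valid JSON"
```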
      - name: Setup Libvirt
        run: |
          sudo mkdir /tmp/libvirt
          sudo mv cloud-init.iso /tmp/libvirt
          sudo mv build/podvm-fedora-amd64.qcow2 /tmp/libvirt
          sudo chown -R libvirt-qemu /tmp/libvirt
          sudo chmod +x /tmp
      - name: Launch PodVM
        run: |
          CI_IMAGE="/tmp/libvirt/cloud-init.iso"
          OS_IMAGE="/tmp/libvirt/podvm-fedora-amd64.qcow2"
          OVMF="/usr/share/OVMF/OVMF_CODE.fd"
          sudo virt-install \
            --name smoketest \
            --ram 1024 \
            --vcpus 2 \
            --disk "path=${OS_IMAGE},format=qcow2" \
            --disk "path=${CI_IMAGE},device=cdrom" \
            --import \
            --network network=default \
            --os-variant detect=on \
            --graphics none \
            --virt-type=kvm \
            --boot loader="$OVMF"
      - name: Wait for VM to claim IP address
        run: |
          for n in 30 10 10 10 10 10 0; do
            if [ $n -eq 0 ]; then
              echo "PodVM did not claim an IP address in time"
              exit 1
            fi
            echo "sleeping for ${n} seconds"
            sleep "$n"
            VM_IP="$(sudo virsh -q domifaddr smoketest | awk '{print $4}' | cut -d/ -f1)"
            if [ -n "$VM_IP" ]; then
              break
            fi
          done
          echo "VM_IP=$VM_IP" >> "$GITHUB_ENV"
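The loop above is a bounded retry with a sentinel: the trailing 0 in the delay list marks exhaustion, and the probe runs after each sleep. The same pattern reduced to a runnable sketch, where `probe` is a hypothetical stand-in for the `virsh domifaddr` query and the sleeps are zeroed to keep it fast:

```shell
# Bounded-retry sketch mirroring the step above: iterate a delay list whose
# final 0 means "give up"; sleep, then probe. Here the probe succeeds on its
# third call instead of querying virsh.
attempts=0
probe() { attempts=$((attempts + 1)); [ "$attempts" -ge 3 ]; }
for n in 1 1 1 1 1 0; do
  if [ "$n" -eq 0 ]; then
    echo "timed out"
    exit 1
  fi
  sleep 0   # stand-in for: sleep "$n"
  if probe; then
    echo "ready after ${attempts} probes"
    break
  fi
done
```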
      - name: Run smoke test
        run: |
          socat UNIX-LISTEN:./apf.sock,fork "TCP:${VM_IP}:15150" &
          sleep 1
          kata-agent-ctl connect --server-address unix://./apf.sock --cmd CreateSandbox