Question 10: Container Runtime Sandbox gVisor

Problem Statement

Solve this question on: ssh cks7262

Team purple wants to run some of their workloads more securely. Worker node cks7262-node2 already has containerd configured to support the runsc/gVisor runtime.

Tasks:

  1. Connect to the worker node using ssh cks7262-node2 from cks7262
  2. Create a RuntimeClass named gvisor with handler runsc
  3. Create a Pod that uses the RuntimeClass. The Pod should be in Namespace team-purple, named gvisor-test and of image nginx:1.27.1. Ensure the Pod runs on cks7262-node2
  4. Write the output of the dmesg command of the successfully started Pod into /opt/course/10/gvisor-test-dmesg on cks7262

Solution

Step 1: Verify Node Configuration

First, let's check the nodes and their container runtimes:

➜ ssh cks7262

➜ candidate@cks7262:~$ k get node
NAME                     STATUS   ROLES              ... CONTAINER-RUNTIME
cks7262                  Ready    control-plane      ... containerd://1.7.12
cks7262-node1            Ready    <none>             ... containerd://1.7.12
cks7262-node2            Ready    <none>             ... containerd://1.7.12

Let's verify that cks7262-node2 has containerd configured with runsc/gvisor:

➜ candidate@cks7262:~$ ssh cks7262-node2

➜ candidate@cks7262-node2:~# runsc --version
runsc version release-20240820.0
spec: 1.1.0-rc.1

➜ candidate@cks7262-node2:~# cat /etc/containerd/config.toml | grep runsc
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]
    runtime_type = "io.containerd.runsc.v1"
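For context, the grep above only shows two lines of the runtime entry. A fuller version of that section of /etc/containerd/config.toml typically looks like the sketch below; the optional `options` table and the runsc.toml path are assumptions that vary by installation, not something verified on this node:

```toml
# Sketch of a typical containerd CRI runtime entry for runsc/gVisor.
# The options table is optional; ConfigPath is an assumed example path.
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]
  runtime_type = "io.containerd.runsc.v1"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc.options]
    TypeUrl = "io.containerd.runsc.v1.options"
    ConfigPath = "/etc/containerd/runsc.toml"
```

The key part for this task is that the runtime name after `runtimes.` (here `runsc`) is what the RuntimeClass `handler` field must match.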

Step 2: Create RuntimeClass

Exit back to cks7262, then create a RuntimeClass that points at the runsc handler:

➜ candidate@cks7262:~$ vim 10_rtc.yaml
# 10_rtc.yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc

➜ candidate@cks7262:~$ k -f 10_rtc.yaml create
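Not required for this task, but worth knowing: a RuntimeClass can also carry scheduling constraints so that Pods using it are automatically steered to nodes where the runtime is installed. A sketch, assuming the gVisor-capable nodes carried a hypothetical `runtime: gvisor` label (this label does not exist in this cluster):

```yaml
# Sketch only: scheduling.nodeSelector is an optional RuntimeClass field.
# The runtime: gvisor node label is a hypothetical example.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
scheduling:
  nodeSelector:
    runtime: gvisor
```

With such a RuntimeClass, Pods would not need their own node pinning at all.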

Step 3: Create Pod with RuntimeClass

Create a Pod that uses the gVisor runtime:

➜ candidate@cks7262:~$ k -n team-purple run gvisor-test --image=nginx:1.27.1 --dry-run=client -o yaml > 10_pod.yaml

➜ candidate@cks7262:~$ vim 10_pod.yaml
# 10_pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: gvisor-test
  name: gvisor-test
  namespace: team-purple
spec:
  nodeName: cks7262-node2 # add
  runtimeClassName: gvisor   # add
  containers:
  - image: nginx:1.27.1
    name: gvisor-test
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

➜ candidate@cks7262:~$ k -f 10_pod.yaml create
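As an aside, `nodeName` hard-pins the Pod and bypasses the scheduler entirely. An alternative that still satisfies "runs on cks7262-node2" is to constrain scheduling via the built-in `kubernetes.io/hostname` node label; a sketch of the relevant spec fields:

```yaml
# Alternative sketch: let the scheduler place the Pod, constrained
# to cks7262-node2 via the standard hostname label.
spec:
  runtimeClassName: gvisor
  nodeSelector:
    kubernetes.io/hostname: cks7262-node2
  containers:
  - image: nginx:1.27.1
    name: gvisor-test
```

Either approach is acceptable here; `nodeName` is simply the shortest edit.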

Let's verify the Pod is running and using gVisor:

➜ candidate@cks7262:~$ k -n team-purple get pod gvisor-test
NAME          READY   STATUS    RESTARTS   AGE
gvisor-test   1/1     Running   0          30s

➜ candidate@cks7262:~$ k -n team-purple exec gvisor-test -- dmesg
[    0.000000] Starting gVisor...
[    0.336731] Waiting for children...
[    0.807396] Rewriting operating system in Javascript...
[    0.838661] Committing treasure map to memory...
[    1.082234] Adversarially training Redcode AI...
[    1.452222] Synthesizing system calls...
[    1.751229] Daemonizing children...
[    2.198949] Verifying that no non-zero bytes made their way into /dev/zero...
[    2.381878] Singleplexing /dev/ptmx...
[    2.398376] Checking naughty and nice process list...
[    2.544323] Creating cloned children...
[    3.010573] Setting up VFS...
[    3.467349] Setting up FUSE...
[    3.738725] Ready!

Step 4: Save dmesg Output

Finally, save the dmesg output to the required location. Note that the shell redirection is performed locally on cks7262, so the file lands on that host as required:

➜ candidate@cks7262:~$ k -n team-purple exec gvisor-test -- dmesg > /opt/course/10/gvisor-test-dmesg

The gVisor runtime has been set up successfully:
  • The RuntimeClass has been created with the runsc handler
  • The Pod has been created with the gVisor runtime
  • The Pod is running on the correct node
  • The dmesg output has been saved for verification