gVisor Setup

gVisor provides kernel-level syscall interception for defense-in-depth container sandboxing. It is recommended but optional.

Prerequisites:

  • Kubernetes cluster with node access (SSH or shell)
  • kubectl configured to access the cluster
  • Root access on cluster nodes for installation

For clusters without gVisor (e.g., k0s):

```sh
# SSH to each node and run:
sudo bash deploy/gvisor/install.sh
```

This installs runsc (the gVisor binary) and containerd-shim-runsc-v1, then configures containerd to use gVisor as a runtime handler.
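The containerd side of that configuration typically looks like the snippet below. This is a sketch of the runtime-handler entry the install script adds to `/etc/containerd/config.toml`; the exact plugin path and version string can vary by containerd release and distribution:

```toml
# Registers runsc as a containerd runtime handler (assumed config, containerd 1.x CRI plugin)
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]
  runtime_type = "io.containerd.runsc.v1"
```

containerd must be restarted (e.g. `systemctl restart containerd`) for the new handler to take effect.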

Next, register the RuntimeClass:

```sh
kubectl apply -f deploy/gvisor/runtimeclass.yaml
```

This creates a Kubernetes RuntimeClass named gvisor with handler runsc.
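A RuntimeClass of that shape is only a few lines; `runtimeclass.yaml` is likely equivalent to this sketch:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor   # referenced by pods via runtimeClassName
handler: runsc   # must match the containerd runtime handler name
```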

Verify the installation with the bundled test pod:

```sh
kubectl apply -f deploy/gvisor/test-pod.yaml
kubectl logs gvisor-test
```

The test pod runs dmesg — if gVisor is active, you’ll see gVisor kernel messages instead of the host kernel’s.
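A minimal test pod of this kind opts into gVisor via `runtimeClassName`. This is a sketch; the repository's `test-pod.yaml` may differ in image or container name:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gvisor-test
spec:
  runtimeClassName: gvisor  # schedule under the gVisor runtime handler
  restartPolicy: Never
  containers:
    - name: test
      image: busybox
      command: ["dmesg"]    # prints the sandbox kernel's boot messages
```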

Clean up the test pod afterwards:

```sh
kubectl delete pod gvisor-test
```

gVisor is enabled by default during deployment:

```sh
tntc deploy my-tentacle                    # uses gVisor by default
tntc deploy my-tentacle --runtime-class "" # deploy without gVisor
```

Run tntc cluster check to validate the gVisor RuntimeClass exists. Missing gVisor is a warning, not a hard failure — tentacles will still deploy but without kernel-level sandboxing.

```sh
tntc cluster check
```
Troubleshooting:

| Symptom | Cause | Fix |
| --- | --- | --- |
| Pod stuck in ContainerCreating | gVisor not installed on node | Run install.sh on the node |
| RuntimeClass "gvisor" not found | RuntimeClass not applied | Run `kubectl apply -f deploy/gvisor/runtimeclass.yaml` |
| cluster check warns about gVisor | RuntimeClass missing | Apply the RuntimeClass or deploy with `--runtime-class ""` |
| Performance degradation | gVisor syscall overhead | Expected; gVisor adds ~5-15% overhead for security |

See Security for details on how gVisor fits into the five-layer defense-in-depth model.