$ rke2 -v
rke2 version v1.31.3+rke2r1 (f1db1f8266ab7315ff447c8acdaefa2ba16b87c0)
go version go1.22.8 X:boringcrypto
Node(s) CPU architecture, OS, and Version: amd64, Ubuntu 24.04
$ uname -a
Linux rke2-vr-test-pool1-59xn2-l78wk 6.8.0-52-generic #53-Ubuntu SMP PREEMPT_DYNAMIC Sat Jan 11 00:06:25 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
Cluster Configuration: 1 node with all roles
Describe the bug:
After implementing the CIS benchmark for Ubuntu 24.04, kube-proxy fails to start with the following error message:
# /var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml logs -n kube-system kube-proxy-vr-test-rke2-pool1-d4qsp-mgtqq
I0128 14:56:27.243749 1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["10.211.99.134"]
I0128 14:56:27.259763 1 iptables.go:221] "Error checking iptables version, assuming version at least" version="1.4.11" err="exit status 1"
I0128 14:56:27.288684 1 iptables.go:221] "Error checking iptables version, assuming version at least" version="1.4.11" err="exit status 1"
E0128 14:56:27.331343 1 server.go:556] "Error running ProxyServer" err="iptables is not available on this host"
E0128 14:56:27.331398 1 run.go:74] "command failed" err="iptables is not available on this host"
and AppArmor shows the following:
# aa-notify -s 1 -v
Using log file /var/log/audit/audit.log
Profile: busybox
Operation: open
Name: /etc/ld.so.cache
Denied: r
Logfile: /var/log/audit/audit.log
Profile: busybox
Operation: getattr
Name: /lib64/
Denied: r
Logfile: /var/log/audit/audit.log
Profile: busybox
Operation: open
Name: /usr/lib64/libcrypt.so.1.1.0
Denied: r
Logfile: /var/log/audit/audit.log
AppArmor denials: 4540 (since Mon Jan 27 15:02:52 2025)
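For reference, one way to confirm that it is the host's busybox profile confining the kube-proxy process (a sketch; it assumes apparmor-utils is installed and that pgrep matches the kube-proxy binary first):
# aa-status | grep busybox
# cat /proc/$(pgrep -f kube-proxy | head -n1)/attr/current
The second command prints the confining profile and mode for the process (or unconfined).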
I don't see how this is something we can fix on our side. As you noted, setting the profile to Unconfined does not fix the issue. It sounds like this needs to be addressed on the Ubuntu side with updates to their AppArmor profiles and CLI tools?
You might also retest this on the latest RKE2 release; we are no longer using bci-busybox as the base image for kube-proxy. Ref: rancher/image-build-kubernetes#75
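A quick way to check which image a given install is actually using for kube-proxy (a sketch; it assumes the default RKE2 data directory) is to look at the generated static pod manifest:
# grep 'image:' /var/lib/rancher/rke2/agent/pod-manifests/kube-proxy.yaml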
Environmental Info:
RKE2 Version: v1.31.3+rke2r1
Node(s) CPU architecture, OS, and Version: amd64, Ubuntu 24.04
Cluster Configuration: 1 node with all roles
Describe the bug:
After implementing the CIS benchmark for Ubuntu 24.04, kube-proxy fails to start with the error message shown above, and AppArmor reports the denials shown above.
Disabling the busybox AppArmor profile fixes the issue, e.g.:
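One way to do that on the host (a sketch, assuming the stock profile shipped at /etc/apparmor.d/busybox and the apparmor-utils package):
# aa-disable /etc/apparmor.d/busybox
or, without apparmor-utils, symlink the profile into the disable directory and unload it:
# ln -s /etc/apparmor.d/busybox /etc/apparmor.d/disable/
# apparmor_parser -R /etc/apparmor.d/busybox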
Steps To Reproduce:
aa-enforce /etc/apparmor.d/*
as per the CIS guidance. Due to https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/2078467 / https://gitlab.com/apparmor/apparmor/-/issues/411#note_2249675987, the workaround I used to achieve this is
Expected behavior:
Because the RKE2 quick start mentions AppArmor, I expected RKE2 to work with the default Ubuntu AppArmor config.
Actual behavior:
The kube-proxy container can't start until the busybox AppArmor profile is disabled.
Additional context / logs:
I did try adding
to the static pod definition's SecurityContext at /var/lib/rancher/rke2/agent/pod-manifests/kube-proxy.yaml, but it didn't seem to fix the issue.
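For completeness, one way to check whether a change like that actually reached the running container (a sketch; the crictl config path and binary location below are the RKE2 defaults and may differ, and <container-id> is whatever the first crictl command returns):
# export CRI_CONFIG_FILE=/var/lib/rancher/rke2/agent/etc/crictl.yaml
# /var/lib/rancher/rke2/bin/crictl ps --name kube-proxy -q
# /var/lib/rancher/rke2/bin/crictl inspect <container-id> | grep -i apparmor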