Error upgrading to 1.29.x with external CA #3055
There are no sig labels on this issue. Please add an appropriate label by using one of the following commands:
Please see the group list for a listing of the SIGs, working groups, and committees available. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label.
/transfer kubeadm
Looks like this is something we did not cover with e2e tests. Workaround: is it an option for you to temporarily copy the ca.key to the node where "kubeadm upgrade apply" is called?
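On the control-plane node, the suggested workaround could look like the sketch below. The function name and parameters are illustrative assumptions, and the staging location of the externally held key is up to you:

```shell
#!/bin/sh
# Sketch of the suggested workaround: place ca.key on the node only for the
# duration of "kubeadm upgrade apply", then delete it so the cluster stays in
# external-CA mode. Function name and parameters are illustrative assumptions.
upgrade_with_temporary_ca_key() {
  key_src="$1"   # wherever the externally held CA key was staged
  pki_dir="$2"   # /etc/kubernetes/pki on a real node

  cp "$key_src" "$pki_dir/ca.key"
  chmod 600 "$pki_dir/ca.key"

  # The actual upgrade step from the report, run while the key is present:
  # kubeadm --kubeconfig /root/.kube/config --certificate-renewal=false upgrade apply v1.29.4

  # Remove the key again so no CA private key remains on the node.
  rm -f "$pki_dir/ca.key"
}
```

As the reporter notes below, this only works when the CA key can actually be extracted from wherever it is held.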
This function call migrates the admin.conf on the node to no longer have a super user bound to "system:masters", and generates a new super-admin.conf file with the super user. We could skip this process for external CA users; later, when they manually renew admin.conf, they would pick the user they want. Only 1.29 is affected, as 1.30 removed this function, so it's a one-release patch (migration) solution.
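The skip described above amounts to gating the admin.conf migration on whether the CA private key is local. A rough sketch of that decision (not kubeadm's actual code, which is Go; the function name is hypothetical):

```shell
#!/bin/sh
# Sketch of the gating described above: only perform the admin.conf ->
# super-admin.conf split when the CA private key exists locally; external-CA
# clusters skip it and renew admin.conf manually later. Illustrative only.
postupgrade_admin_conf_step() {
  pki_dir="$1"   # /etc/kubernetes/pki on a real node
  if [ -f "$pki_dir/ca.key" ]; then
    echo "migrate admin.conf and generate super-admin.conf"
  else
    echo "skip migration (external CA); renew admin.conf manually"
  fi
}
```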
The fix for the next 1.29 release (1.29.5) is here:
Thank you for the prompt response. We'll wait for the next release, since the intermediate CA key is sealed in our vault and can't be extracted. We would have to issue a new temporary intermediate CA with an external private key for each cluster, and that's not straightforward since our root CA is air-gapped 😀 Thank you again for the fix.
e2e addition for the upgrade scenario |
Fixed in 1.29.5.
What happened?
Our clusters, currently at 1.28.9, are configured with external CA (no ca.key on filesystems) and all certificates are generated by an external system.
During the upgrade from 1.28.9 to 1.29.4 with the following command
kubeadm --kubeconfig /root/.kube/config --certificate-renewal=false upgrade apply v1.29.4
we get the following error:
the CA files do not exist, please run "kubeadm init phase certs ca" to generate it: failed to load key: couldn't load the private key file /etc/kubernetes/pki/ca.key: open /etc/kubernetes/pki/ca.key: no such file or directory
[upgrade/postupgrade] FATAL post-upgrade error
The /root/.kube/config is an external kubeconfig file with short-lived super-admin certificates.
After a bit of digging, I found this
https://github.com/kubernetes/kubernetes/blob/d138c022d7fb3436add1c97b07004cf10319fb42/cmd/kubeadm/app/phases/upgrade/postupgrade.go#L75
It seems it's not possible to upgrade to 1.29 with an external CA.
What did you expect to happen?
The upgrade to 1.29 should succeed on a cluster with an external CA.
How can we reproduce it (as minimally and precisely as possible)?
Try to upgrade a cluster to 1.29.x without a ca.key inside the /etc/kubernetes/pki folder.
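Before upgrading, you can check whether a node is in the affected configuration: kubeadm considers a CA "external" when the certificate is present but the private key is not. A minimal sketch of that condition (function name hypothetical, pki directory parameterized for illustration):

```shell
#!/bin/sh
# Sketch of kubeadm's external-CA condition: ca.crt present, ca.key absent.
# The function name and parameter are assumptions for illustration.
ca_mode() {
  pki_dir="$1"   # /etc/kubernetes/pki on a real node
  if [ -f "$pki_dir/ca.crt" ] && [ ! -f "$pki_dir/ca.key" ]; then
    echo "external"
  else
    echo "local"
  fi
}
```

A node where this prints "external" would hit the post-upgrade error described above when upgrading to 1.29.0 through 1.29.4.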
Anything else we need to know?
No response
Kubernetes version
Cloud provider
OS version
Install tools
Container runtime (CRI) and version (if applicable)
Related plugins (CNI, CSI, ...) and versions (if applicable)