Remove projected service account volume for workload identity #4682
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED.
This pull-request has been approved by: (no approvers yet). The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing a comment.
Welcome @JRBANCEL!
Hi @JRBANCEL. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. Once the patch is verified, the new status will be reflected here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test
Since we don't currently install the webhooks in our e2e tests, I'd expect the existing Workload Identity test to fail: /test pull-cluster-api-provider-azure-e2e-optional. Introducing a requirement to install the webhooks would be a breaking change. Is there a way we could instead annotate the CAPZ pod so that the webhook ignores it? That kind of change I'd feel okay about backporting now; it would at least buy us some time to solicit feedback on requiring the Workload Identity webhooks and to plan a transition if we find that's a better long-term solution.
Is using Workload Identity on AKS without the WI webhook even supported?
For #4681, no, because we need the WI magic to be done on the host cluster (via webhook) and not at the source (inside vCluster).
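For context on what the webhook acts on: the Azure Workload Identity mutating webhook only mutates pods that opt in via a label. A minimal sketch of a pod template the webhook would match (the service account name here is illustrative, not from this PR):

```yaml
# Sketch: a pod template opting into the Azure Workload Identity webhook.
# Matching pods get the projected token volume and AZURE_* environment
# variables injected by the webhook at admission time.
metadata:
  labels:
    azure.workload.identity/use: "true"   # opt-in label the webhook matches on
spec:
  serviceAccountName: capz-manager        # illustrative service account name
```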
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@           Coverage Diff            @@
##            main    #4682      +/-  ##
==========================================
+ Coverage   62.71%   62.73%   +0.01%
==========================================
  Files         192      192
  Lines       15641    15641
==========================================
+ Hits         9810     9812       +2
+ Misses       5148     5146       -2
  Partials      683      683

☔ View full report in Codecov by Sentry.
@sonasingh46 do share your thoughts 🙏 |
@JRBANCEL @mjnovice This change may work for vCluster scenarios, but it will break existing WI use cases in non-vCluster scenarios. What are you doing right now to make this work? Are you manually updating the capz-controller-manager?
@jackfrancis We are currently using Kyverno to do the patching on the host cluster.
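As a sketch of that kind of workaround (the policy name and container selector are hypothetical; the patched volume mirrors what the Workload Identity webhook would normally inject), a Kyverno mutate policy on the host cluster might look roughly like:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: inject-azure-identity-token     # hypothetical policy name
spec:
  rules:
    - name: add-projected-token-volume
      match:
        any:
          - resources:
              kinds: ["Pod"]
      mutate:
        patchStrategicMerge:
          spec:
            containers:
              # (name) is Kyverno's anchor syntax: only patch this container
              - (name): "manager"       # hypothetical container name
                volumeMounts:
                  - name: azure-identity-token
                    mountPath: /var/run/secrets/azure/tokens
                    readOnly: true
            volumes:
              - name: azure-identity-token
                projected:
                  sources:
                    - serviceAccountToken:
                        audience: api://AzureADTokenExchange
                        expirationSeconds: 3600
                        path: azure-identity-token
```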
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
@JRBANCEL: The following tests failed.
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /close
@k8s-triage-robot: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
What type of PR is this?
/kind bug
/kind cleanup
What this PR does / why we need it:
See #4681.
Currently, in the manager spec, the projected volume for the service account token is manually specified.
The Workload Identity webhook takes care of this (and more) as seen here.
It is better to rely on the webhook because it is more future-proof and removes YAML lines. Moreover, as explained in #4681, not relying on the webhook makes it impossible to use cluster-api-provider-azure inside a vCluster on AKS with Workload Identity. With this change, the resulting Pod spec in the vanilla case (i.e. not inside vCluster) is completely identical (the webhook sets the volume), and it also works in vCluster because the Workload Identity webhook (on the host cluster) will do what is needed.
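For reference, the kind of projected service account token volume this PR removes from the manager spec (and which the Workload Identity webhook injects automatically) looks roughly like this; the field values follow the webhook's documented defaults, not this repository's exact manifest:

```yaml
volumes:
  - name: azure-identity-token
    projected:
      sources:
        - serviceAccountToken:
            audience: api://AzureADTokenExchange   # token exchange audience
            expirationSeconds: 3600                # short-lived, auto-rotated
            path: azure-identity-token             # mounted token file name
```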
Which issue(s) this PR fixes:
Fixes #4681
TODOs:
Release note: