Replace managed syncer with deployed syncer #881
Conversation
Review part 1 - everything but e2e changes
@@ -23,7 +23,7 @@ rules:
   resources:
   - namespaces
   verbs:
-  - create
+  - "*"
Why star? What is missing?
Updated to create, list and watch, which are required. delete will also be required when the syncer gains the ability to respond to deletion.
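A minimal sketch of the narrowed rule, expressed as a client-go PolicyRule rather than the ClusterRole YAML in the diff above (the variable name and packaging are illustrative, not code from this PR):

```go
package syncer

import rbacv1 "k8s.io/api/rbac/v1"

// Narrowed rule from the diff above: only the verbs the syncer needs today.
// TODO: add "delete" once the syncer responds to deletion.
var syncerNamespaceRule = rbacv1.PolicyRule{
	APIGroups: []string{""}, // core API group
	Resources: []string{"namespaces"},
	Verbs:     []string{"create", "list", "watch"},
}
```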
please add a TODO (and a security+timebomb issue) that the syncer has to get rid of the namespace permission. It basically means admin access to the pcluster.
We have had some conversations before about needing some kind of privilege separation. Something to think about: would project+projectrequest be enough in an OpenShift cluster to enable syncer namespace creation?
Issues that touch on this already:
- Move pcluster namespace creation/deletion up into kcp layers #387
- https://github.com/kcp-dev/kcp/issues/276
I think there needs to be a namespace agent separate from the syncer that is deployed to each cluster and is responsible for creating/updating/deleting pcluster namespaces and managing the role for each syncer deployed to the pcluster.
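To make that split concrete, here is a rough sketch of a namespace-agent helper, assuming client-go; ensureSyncerNamespace, the "syncer" Role/RoleBinding names, and the wildcard namespaced rule are all hypothetical, not code from this PR:

```go
package agent

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureSyncerNamespace shows the proposed privilege separation: only the
// agent holds cluster-scoped namespace permissions; the syncer is granted
// a Role scoped to the namespaces it manages.
func ensureSyncerNamespace(ctx context.Context, client kubernetes.Interface, ns, syncerSA string) error {
	// Cluster-scoped: create the pcluster namespace (a real agent would
	// tolerate AlreadyExists and also handle update/delete).
	if _, err := client.CoreV1().Namespaces().Create(ctx, &corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{Name: ns},
	}, metav1.CreateOptions{}); err != nil {
		return err
	}

	// Namespaced: grant the syncer broad access, but only inside ns.
	if _, err := client.RbacV1().Roles(ns).Create(ctx, &rbacv1.Role{
		ObjectMeta: metav1.ObjectMeta{Name: "syncer"},
		Rules: []rbacv1.PolicyRule{{
			APIGroups: []string{"*"},
			Resources: []string{"*"},
			Verbs:     []string{"*"},
		}},
	}, metav1.CreateOptions{}); err != nil {
		return err
	}

	// Bind the Role to the syncer's service account (assumed here to live
	// in the same namespace).
	_, err := client.RbacV1().RoleBindings(ns).Create(ctx, &rbacv1.RoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "syncer"},
		RoleRef:    rbacv1.RoleRef{APIGroup: rbacv1.GroupName, Kind: "Role", Name: "syncer"},
		Subjects: []rbacv1.Subject{{
			Kind:      rbacv1.ServiceAccountKind,
			Name:      syncerSA,
			Namespace: ns,
		}},
	}, metav1.CreateOptions{})
	return err
}
```

The point of this shape is that the dangerous cluster-scoped verbs live in one small, auditable component, while each syncer only ever sees its own namespaces.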
test/e2e/framework/fixture.go (outdated)

// TODO(marun) Only use a fake workload server if a pcluster is not configured
parentClusterName, ok := wsClusterName.Parent()
require.True(t, ok, "%s does not have a parent", wsClusterName)
downstreamServer := NewFakeWorkloadServer(t, upstream, parentClusterName)
this confuses me.
Can you elaborate here in a longer comment on what you are doing with these fake workload servers? Is it just to test the cli plugin?
It's to test the syncer without requiring a kind cluster. This isn't new code - we've been using it for some time now to test syncers - it's just being refactored. In #909 it becomes an optional path that is used when a pcluster (e.g. kind) isn't configured for the test run with --pcluster-kubeconfig.
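To illustrate that optional path, a rough sketch of the selection logic; the flag wiring mirrors --pcluster-kubeconfig, while newDownstreamServer, ClusterName and NewWorkloadServerFromKubeconfig are hypothetical stand-ins (RunningServer and NewFakeWorkloadServer are the framework pieces shown above):

```go
package framework

import (
	"flag"
	"testing"
)

var pclusterKubeconfig = flag.String("pcluster-kubeconfig", "",
	"Path to the kubeconfig of a physical cluster (e.g. kind) to sync to")

// newDownstreamServer targets a real pcluster when one is configured and
// falls back to the in-process fake workload server otherwise.
func newDownstreamServer(t *testing.T, upstream RunningServer, parentClusterName ClusterName) RunningServer {
	if *pclusterKubeconfig != "" {
		// A kind (or other) cluster was supplied on the command line.
		return NewWorkloadServerFromKubeconfig(t, *pclusterKubeconfig)
	}
	// No pcluster configured: use the fake workload server, as the
	// refactored fixture above does unconditionally today.
	return NewFakeWorkloadServer(t, upstream, parentClusterName)
}
```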
Updated, PTAL
@marun FYI e2es are failing
The plugin can be called via "kubectl kcp" or since kcp-dev#881 also via "kubectl workloads" - when calling in the latter context it's confusing to get usage info referring to the kcp plugin.
The plugin can be called via "kubectl kcp workspaces" or since kcp-dev#881 also via "kubectl workspaces" - when calling in the latter context it's confusing to get usage info referring to the kcp plugin.
Previously the syncer was managed by kcp or tested in-process, but there was no easy path for syncer deployment to a workload cluster and no testing of this configuration. This PR intends to:
Follow-ups: