Adopt Helm chart to mimic Amazon EKS CoreDNS naming conventions #97
Sample values.yaml for the proposed change:

# placement
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/os
              operator: In
              values:
                - linux
            - key: kubernetes.io/arch
              operator: In
              values:
                - amd64
                - arm64
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: k8s-app
                operator: In
                values:
                  - kube-dns
          topologyKey: kubernetes.io/hostname
        weight: 100
nodeSelector:
  system-component: enabled
tolerations:
  - key: system-component
    operator: Equal
    value: enabled
    effect: NoSchedule
podDisruptionBudget:
  maxUnavailable: 1
# used for both the coredns and autoscaler deployments
priorityClassName: system-cluster-critical
pdb:
  name: "coredns"
autoscaler:
  # Enable the cluster-proportional-autoscaler
  enabled: true
  # special values from the customized Helm chart
  namePrefix: "coredns"
  # placement
  nodeSelector:
    system-component: enabled
  tolerations:
    - key: system-component
      operator: Equal
      value: enabled
      effect: NoSchedule
service:
  name: "kube-dns"
  namespace: kube-system
serviceAccount:
  create: true
  name: "coredns"
  namespace: kube-system
deployment:
  enabled: true
  name: "coredns"
  namespace: kube-system
# kube-prometheus-stack already comes with a working ServiceMonitor "kube-prometheus-stack-coredns" by default
configmap:
  name: "coredns"
clusterrole:
  name: "system:coredns"
clusterrolebinding:
  name: "system:coredns"
servicemonitor:
  name: "coredns"
configmapAutoscaler:
  namePrefix: "coredns"
serviceaccountAutoscaler:
  namePrefix: "coredns"
clusterroleAutoscaler:
  namePrefix: "coredns"
clusterrolebindingAutoscaler:
  namePrefix: "coredns"
servicemetrics:
  namePrefix: "coredns"
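A minimal sketch of applying such a values file, assuming the chart is installed from the upstream CoreDNS Helm repository (repo URL, release name, and values file path are illustrative assumptions):

  # Add the upstream CoreDNS chart repository (URL assumed) and install/upgrade
  # the release into kube-system using the values file above.
  helm repo add coredns https://coredns.github.io/helm
  helm upgrade --install coredns coredns/coredns \
    --namespace kube-system \
    --values values.yaml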
Hi, I'm not sure I understand the use case here. I went over the PR and, while I have some comments there, I want to make sure the use case is aligned with what this chart supports before reviewing it in detail, as it adds some complexity.
Hi, as stated at the very beginning, I'd like to use this Helm chart to deploy CoreDNS resources with the same naming used by the AWS-supplied CoreDNS in EKS. I want this because the EKS add-on for CoreDNS does not yet support customization of all relevant attributes (such as tolerations), so I believe using this Helm chart is a more complete solution.
So the idea is to overcome the EKS add-on limitation by self-managing the CoreDNS installation? In that case, why do the names matter? Or is the idea to annotate the EKS CoreDNS resources with the Helm annotations and adopt the EKS add-on as a Helm release of the coredns chart?
@hagaibarel Exactly as you describe in your second question: I apply the Helm-related labels/annotations to the existing EKS CoreDNS resources and then adopt them as a release of this Helm chart.
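To illustrate that adoption step, a rough sketch using the ownership metadata Helm 3.2+ checks before taking over pre-existing objects (release name "coredns" in kube-system is an assumption; the exact resource list depends on the cluster):

  # Annotate/label the existing EKS-managed CoreDNS objects so a Helm release
  # named "coredns" in kube-system can adopt them instead of failing with an
  # ownership conflict.
  for res in deployment/coredns configmap/coredns service/kube-dns serviceaccount/coredns; do
    kubectl -n kube-system annotate "$res" \
      meta.helm.sh/release-name=coredns \
      meta.helm.sh/release-namespace=kube-system --overwrite
    kubectl -n kube-system label "$res" app.kubernetes.io/managed-by=Helm --overwrite
  done

  # Cluster-scoped RBAC objects use the EKS "system:coredns" names
  kubectl annotate clusterrole/system:coredns clusterrolebinding/system:coredns \
    meta.helm.sh/release-name=coredns meta.helm.sh/release-namespace=kube-system --overwrite
  kubectl label clusterrole/system:coredns clusterrolebinding/system:coredns \
    app.kubernetes.io/managed-by=Helm --overwrite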
Thanks for clarifying the use case. I'd like to put this on hold for now as I want to move #103 forward first and it will affect the |
@hagaibarel any idea whether this feature might be added in the future? I am also running into a similar scenario where I would like the EKS CoreDNS names to match so I can adopt the Helm chart for self-management.
I propose adding additional resources to the Helm chart so it can mimic the naming conventions of the Amazon EKS CoreDNS implementation, making it possible to move quickly between self-managed CoreDNS, Helm-managed CoreDNS, and the EKS-managed add-on.
In particular, the following resources need fixed names:
configmap: "coredns"
clusterrole: "system:coredns"
clusterrolebinding": "system:coredns"
service: "kube-dns"
In addition, all other resources should allow naming without the release name prefix.
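For context, the EKS-managed names listed above can be confirmed on a running cluster before adoption; a quick check might look like this:

  # Namespaced objects created by the EKS CoreDNS add-on
  kubectl -n kube-system get deployment/coredns configmap/coredns \
    service/kube-dns serviceaccount/coredns
  # Cluster-scoped RBAC objects
  kubectl get clusterrole/system:coredns clusterrolebinding/system:coredns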
I have already created a fork of the repo, made all the necessary changes, and am currently testing them on an Amazon EKS cluster.
Please let me know if a corresponding PR has a chance of being considered.