kyverno 1.12.6 not running properly in vcluster 0.21.0-beta.2 #2244

Open
dee0sap opened this issue Oct 24, 2024 · 3 comments

dee0sap commented Oct 24, 2024

What happened?

I ran
helm install kyverno kyverno/kyverno --namespace kyverno --create-namespace -f scripts/kyverno-overrides.yaml
to install kyverno in the vcluster.
The admission-controller pod fails to start. I believe the problem is that it is unable to list configmaps.
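
For reference, a quick way to probe the permission I suspect is missing is sketched below. The service account name kyverno-admission-controller is the chart default and is an assumption on my part; adjust it to your install.

# Probe the list-configmaps permission from inside the vcluster.
kubectl auth can-i list configmaps --all-namespaces \
  --as=system:serviceaccount:kyverno:kyverno-admission-controller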

What did you expect to happen?

I expect the kyverno deployments to run without issue

How can we reproduce it (as minimally and precisely as possible)?

I believe creating a vcluster and deploying kyverno is all that is required to recreate the problem.
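
A rough sketch of the reproduction steps, assuming the vcluster CLI and the upstream kyverno chart. Names such as kyverno-repro and vcluster.yaml are placeholders; vcluster.yaml is the config shown under "VCluster Config" below.

# Create and connect to a vcluster using the config below.
vcluster create kyverno-repro -f vcluster.yaml
vcluster connect kyverno-repro

# Install kyverno into the vcluster and watch the admission-controller pod fail to start.
helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update
helm install kyverno kyverno/kyverno --namespace kyverno --create-namespace
kubectl -n kyverno get pods -w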

Anything else we need to know?

  • I will attach logs and yaml for the k8s objects I believe are related.
  • In order to see the logs for the admission controller I had to run kubectl logs against the physical (host) pod; with kyverno in a bad state, a number of commands were failing when run against the vcluster.
  • I removed kyverno-resource-mutating-webhook-cfg and kyverno-resource-validating-webhook-cfg from the virtual cluster because, with kyverno in a bad state, the associated services were non-responsive, which in turn caused the k8s api server to fail too many commands (including pod create, see below).
  • I created a pod that used the bitnami/kubectl image and the kyverno admission controller service account. In a shell session in that pod, kubectl get -A configmap didn't have a problem, and kubectl auth whoami confirmed it was running as the admission controller service account (sketched below this list).
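
The check from the last bullet looked roughly like this; the service account name is the chart default and may differ in your install.

# Throwaway pod inside the vcluster running as the admission controller's service account.
kubectl -n kyverno run rbac-check --rm -it --image=bitnami/kubectl \
  --overrides='{"apiVersion":"v1","spec":{"serviceAccountName":"kyverno-admission-controller"}}' \
  --command -- bash

# Inside the pod:
kubectl auth whoami       # confirms the effective service account
kubectl get -A configmap  # succeeded here, unlike in the admission controller itself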

Host cluster Kubernetes version

kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.3", GitCommit:"9e644106593f3f4aa98f8a84b23db5fa378900bd", GitTreeState:"clean", BuildDate:"2023-03-15T13:40:17Z", GoVersion:"go1.19.7", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"30", GitVersion:"v1.30.5", GitCommit:"74e84a90c725047b1328ff3d589fedb1cb7a120e", GitTreeState:"clean", BuildDate:"2024-09-12T00:11:55Z", GoVersion:"go1.22.6", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.26) and server (1.30) exceeds the supported minor version skew of +/-1

vcluster version

vcluster version 
vcluster version 0.21.0-beta.2

VCluster Config

cat > $VCLUSTER_CONFIG <<EOF
sync:
  toHost:
    priorityClasses:
      enabled: true
controlPlane:
  distro:
    k8s:
      enabled: true
  backingStore:
    etcd:
      deploy:
        enabled: true    
    database:
      embedded:
        enabled: false  
  proxy:
    extraSANs:
    - $VCLUSTER_API_SERVER
  statefulSet:
    scheduling:
      podManagementPolicy: OrderedReady
experimental:
  deploy:
    vcluster:
      manifests: |-
$(kubectl get priorityclass -o=yaml | yq '.items[] | select( .globalDefault == true )' | sed 's/^/        /' )
        ---
$(kubectl get -n default secret dockersecret -o=yaml | sed 's/^/        /' )    
EOF

dee0sap commented Oct 24, 2024

files0.tar.gz


dee0sap commented Oct 24, 2024

Another observation: after removing the problematic webhooks (see original description) and a lease that seemed to be problematic, and then performing a rolling restart of the admission-controller, I saw a different error in the admission-controller log:

2024-10-24T15:46:56Z	INFO	webhooks.server	logging/log.go:184	2024/10/24 15:46:56 http: TLS handshake error from 10.250.0.135:58636: secret "kyverno-svc.kyverno.svc.kyverno-tls-pair" not found

Checking the secrets in the vcluster, it does indeed appear to be missing. I haven't checked whether it was missing from the very beginning or not.

kubectl get -A secret 
NAMESPACE   NAME                                                      TYPE                             DATA   AGE
default     dockersecret                                              kubernetes.io/dockerconfigjson   1      11h
kyverno     kyverno-cleanup-controller.kyverno.svc.kyverno-tls-ca     kubernetes.io/tls                2      11h
kyverno     kyverno-cleanup-controller.kyverno.svc.kyverno-tls-pair   kubernetes.io/tls                2      29m
kyverno     kyverno-svc.kyverno.svc.kyverno-tls-ca                    kubernetes.io/tls                2      11h
kyverno     sh.helm.release.v1.kyverno.v1                             helm.sh/release.v1               1      11h
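
To see whether the TLS pair ever existed on the host side or simply failed to sync, a comparison along these lines could help; the context and namespace names are placeholders for wherever the vcluster is running.

# Inside the vcluster (output above):
kubectl -n kyverno get secret

# On the host cluster, in the namespace hosting the vcluster:
kubectl --context <host-context> -n <host-namespace> get secret | grep kyverno-tls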


dee0 commented Nov 26, 2024

Using kyverno 1.13.1 and getting rid of my kyverno setting overrides resulted in a working deployment of kyverno in vcluster.
Note that the version change by itself wasn't sufficient, and that I originally added the overrides in an attempt to get the old version to work.

Btw, I just have a couple of simple policies that I am trying to use in the vcluster and they seem to be working. So I can say that not only did the pods start, but they also seem to be doing their job.
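
As an example of the kind of sanity check I mean, something like the following minimal policy (illustrative only, not my actual policies) should cause an unlabeled pod to be rejected by the admission webhook:

kubectl apply -f - <<EOF
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label
spec:
  validationFailureAction: Enforce
  rules:
  - name: check-team-label
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "Pods must carry a 'team' label."
      pattern:
        metadata:
          labels:
            team: "?*"
EOF

# This should be denied by the kyverno admission webhook:
kubectl run unlabeled --image=nginx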

Feel free to close this ticket if you want.
