PV build-up with reclaim policy set to Delete #2266
Comments
Hello! Thank you for filing an issue. The maintainers will triage your issue shortly. In the meantime, please take a look at the troubleshooting guide for bug reports. If this is a feature request, please review our contribution guidelines.
I am hitting the same bug. It began after I transitioned from the built-in EBS provisioner to the EBS CSI provisioner. For example, dynamically provisioned PV/PVCs work correctly (PVs don't build up forever) with a StorageClass that looks like this:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
parameters:
  fsType: ext4
  type: gp2
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: false
```

However, dynamically provisioned PV/PVCs with a StorageClass that looks like this build up PVs:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
parameters:
  csi.storage.k8s.io/fstype: xfs
  encrypted: "true"
  type: gp3
provisioner: ebs.csi.aws.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: false
```
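In case it helps anyone confirm the behaviour, here is a quick check (a sketch; the custom columns are just my preference) to spot volumes that should have been reclaimed:

```shell
# List PVs oldest-first with their phase and StorageClass; with
# reclaimPolicy: Delete, no PV should linger after its PVC is gone.
kubectl get pv --sort-by=.metadata.creationTimestamp \
  -o custom-columns='NAME:.metadata.name,STATUS:.status.phase,CLASS:.spec.storageClassName'
```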
We are hitting the same bug. We're currently testing a solution; if it keeps working well after a couple of days, I will open a PR. For those who want to test it as well, I have a custom image for version v0.26.7 on Docker Hub. Currently under testing.
Checks
Controller Version
0.27.0
Helm Chart Version
0.22.0
CertManager Version
1.10.1
Deployment Method
Helm
cert-manager installation
Yes, I have installed cert-manager following the steps mentioned in the documentation.
Checks
Resource Definitions
To Reproduce
Describe the bug
Dynamically provisioned Persistent Volumes that are in an Available state cannot be cleaned up by the EBS CSI driver; deletion fails with an error that the volume is still attached to the node.
Example log:

```
delete "pvc-df682ae3-3b7b-4599-bdce-e9b17dda2a7a": volume deletion failed: persistentvolume pvc-df682ae3-3b7b-4599-bdce-e9b17dda2a7a is still attached to node ip-10-10-2-152.eu-central-1.compute.internal
```
Describe the expected behavior
Dynamically provisioned Persistent Volumes with reclaimPolicy set to Delete should be deleted when the PVC is deleted.
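For illustration, the expected flow looks like this (hypothetical PVC name, not taken from this cluster):

```shell
# Deleting the PVC should cascade: the PV is released, the EBS CSI
# driver detaches and deletes the EBS volume, and the PV object goes away.
kubectl delete pvc runner-work-volume   # hypothetical name
kubectl get pv                          # the formerly bound PV should disappear shortly
```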
Whole Controller Logs
Whole Runner Pod Logs
Additional Context
I suspect the issue is caused by the pending finalizer [kubernetes.io/pv-protection] on the PV. Deleting the Persistent Volumes in Kubernetes does not delete the AWS EBS volumes.
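To confirm, inspecting the stuck PV's finalizers (a sketch; the PV name is taken from the log above) shows which component is blocking deletion:

```shell
# kubernetes.io/pv-protection is normal while a PVC still references the PV;
# an external-attacher/* finalizer would point at the CSI attacher still
# tracking an attachment.
kubectl get pv pvc-df682ae3-3b7b-4599-bdce-e9b17dda2a7a \
  -o jsonpath='{.metadata.finalizers}'
```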