Name and Version
bitnami/mongodb 6.3.0

What is the problem this feature will solve?
The notes for the chart state the following:
NOTE: An update takes your MongoDB(®) replicaset offline if the Arbiter is enabled and the number of MongoDB(®) replicas is two. Helm applies updates to the StatefulSets for the MongoDB(®) instance and the Arbiter at the same time so you lose two out of three quorum votes.
This isn't strictly true. Helm does update both StatefulSets at the same time, but the replica set only goes offline because Kubernetes is allowed to disrupt the cluster pods and the arbiter simultaneously: the chart's separate pod disruption budgets are configured in a way that permits it.
What is the feature you are proposing to solve the problem?
Rather than installing separate PodDisruptionBudgets (PDBs) for the MongoDB cluster and the arbiter, install a single PDB with selector labels that cover both. Kubernetes would then allow only one pod of the set of three to go offline at a time, solving the issue stated in the notes.
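As a rough sketch, the combined PDB could look like the following. The release name (`my-release`) and the exact label values are assumptions based on the chart's standard `app.kubernetes.io/*` labels, so check them against the labels actually applied to your pods; the key idea is that the selector omits `app.kubernetes.io/component`, so it matches both the replica set pods and the arbiter pod.

```yaml
# Sketch of a single PDB covering both StatefulSets; label values are
# illustrative assumptions, not taken from the chart.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-release-mongodb
spec:
  # At most one of the three quorum-voting pods may be voluntarily
  # disrupted at a time.
  maxUnavailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: my-release   # Helm release name (assumed)
      app.kubernetes.io/name: mongodb
      # No app.kubernetes.io/component key here, so both the "mongodb"
      # and "arbiter" pods match.
```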
What alternatives have you considered?
Currently I've disabled PDB creation in the chart and deployed a custom PDB that covers both the replicas and the arbiter.
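For reference, the workaround amounts to values along these lines (the `pdb.create` and `arbiter.pdb.create` keys are my reading of the chart's values; verify against the chart's values.yaml), combined with a manually applied PDB like the sketch above:

```yaml
# Assumed values.yaml snippet to turn off the chart-managed PDBs.
pdb:
  create: false
arbiter:
  pdb:
    create: false
```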
Thank you for bringing this issue to our attention. We appreciate your involvement! If you're interested in contributing a solution, we welcome you to create a pull request. The Bitnami team is excited to review your submission and offer feedback. You can find the contributing guidelines here.
Your contribution will greatly benefit the community. Feel free to reach out if you have any questions or need assistance.