
Cluster members in AKS Helm deploy being mixed or polluted with members from another cluster #20936

Open
jeansergegagnon opened this issue Mar 29, 2024 · 0 comments

Overview of the Issue

We are running Consul in AKS with multiple clusters, each with a 3-node Consul server deployment and hundreds of members in the members list.
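For reference, each cluster is deployed from the hashicorp/consul Helm chart with values roughly like the sketch below (simplified; the datacenter name is a placeholder, not our real value):

global:
  name: consul
  datacenter: aks-cluster-a    # placeholder - each AKS cluster uses its own datacenter name
server:
  replicas: 3                  # 3-node server quorum per cluster
  bootstrapExpect: 3
client:
  enabled: true                # clients on every node account for the large members list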

We noticed recently that two different AKS clusters were sharing their members lists, which we do not want. Each cluster should only have the members from that cluster.

We made sure there are firewalls between the two AKS clusters (both are in the same network but have unique subnets) that block all Consul ports, but within 6 hours of tearing down Consul (helm uninstall + kubectl delete ns) and re-deploying it, the other cluster's members list came back.
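The blocking itself is done with Azure network rules between the subnets, but the intent is equivalent to a Kubernetes NetworkPolicy along these lines (illustrative sketch only; the namespace and CIDR are placeholders, not our real values):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: consul-deny-cross-cluster
  namespace: consul
spec:
  podSelector: {}          # applies to every pod in the consul namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 10.1.0.0/16   # placeholder: allow only this cluster's own subnet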

Reproduction Steps

We don't really know how to reproduce it or why it's happening.

Consul info for both Client and Server

build:
        prerelease =
        revision = 2c56447e
        version = 1.11.1

Operating system and Environment details

AKS 1.27 clusters

If this is just how Consul works, please point me to the documentation on how to prevent cross-cluster data leakage - we tried blocking all of the Consul ports (8600, 8500, 8501, 8502, 8503, 8300, 8301, 8302) - are we missing something?

I was not able to find anything about preventing data sharing between clusters - only documentation on how to do the opposite, i.e. sharing across DCs.
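If per-cluster gossip encryption keys are the intended mechanism for keeping the gossip pools separate, a pointer to that documentation would help. My (possibly wrong) understanding of the Helm values involved is roughly:

global:
  gossipEncryption:
    # Assumption: a Kubernetes secret created separately in each cluster,
    # holding a key generated with "consul keygen"; a different key per cluster.
    secretName: consul-gossip-key
    secretKey: key

My assumption is that servers using different gossip keys would reject each other's Serf traffic even if a port is reachable, but I have not verified this.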
