
How can I bind an ILB with a load-balancer through private link service? #5265

Open
zadigus opened this issue Jan 10, 2024 · 4 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments


zadigus commented Jan 10, 2024

What would you like to be added:

I am using the following annotations when I deploy my nginx ingress controller on my AKS Cluster:

controller:
  service:
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: /healthz
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
      service.beta.kubernetes.io/azure-pls-create: "true"
      service.beta.kubernetes.io/azure-pls-proxy-protocol: "false"
      service.beta.kubernetes.io/azure-pls-visibility: "*"
    externalTrafficPolicy: Local

After that, I create my own private endpoint that connects to the private link service created this way, and that works perfectly fine.

Now I am creating two AKS clusters in the same Azure region for the sake of redundancy. Looking at the documentation on how to create a private link service, I see that it is possible to define so-called "Outbound settings", where a load balancer can be specified. Is there already a way to achieve that with your annotations, or in some other way? I was not able to find anything in your documentation related to this use case, but I may have overlooked it.

Also, unfortunately, once a private link service has been created, I don't seem to be able to add outbound settings afterwards (e.g. from the Azure portal). Hence it looks like the outbound settings have to be defined at PLS creation time.
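For reference, here is a sketch of the other PLS-related annotations I am aware of from the cloud-provider-azure documentation (names reproduced from memory and the values purely illustrative, so please double-check them); none of them appears to correspond to the PLS "Outbound settings":

controller:
  service:
    annotations:
      service.beta.kubernetes.io/azure-pls-create: "true"
      # hypothetical PLS name and subnet, just to illustrate the knobs that do exist
      service.beta.kubernetes.io/azure-pls-name: "my-pls"
      service.beta.kubernetes.io/azure-pls-ip-configuration-subnet: "pls-subnet"
      service.beta.kubernetes.io/azure-pls-ip-configuration-ip-address-count: "1"
      service.beta.kubernetes.io/azure-pls-proxy-protocol: "false"
      service.beta.kubernetes.io/azure-pls-visibility: "*"
      # restrict which subscriptions may auto-approve connections (placeholder value)
      service.beta.kubernetes.io/azure-pls-auto-approval: "<subscription-id>"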

Why is this needed:

We want to deploy multiple AKS clusters within the same region for the sake of redundancy, with all of them sitting behind a private load balancer. Essentially, each AKS cluster gets its own internal load balancer, associated with a private link service, and we would like to load-balance across those internal load balancers.

zadigus added the kind/feature label Jan 10, 2024

phealy commented Mar 6, 2024

When you create the PLS via annotations on the LoadBalancer service, we automatically bind the private link service against the front-end IP of the LB Service in question. You cannot use this support to create a PLS for a different IP that's not hosted in a Kubernetes service.

You can add other annotations like service.beta.kubernetes.io/azure-pls-proxy-protocol: "true" to turn on features that are controlled through the private link service outbound settings, such as PROXY protocol.
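For example, on the ingress controller's LoadBalancer service (a sketch only; annotation values are strings, so the boolean has to be quoted in the Helm values):

controller:
  service:
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
      service.beta.kubernetes.io/azure-pls-create: "true"
      # enables PROXY protocol on the generated private link service
      service.beta.kubernetes.io/azure-pls-proxy-protocol: "true"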


zadigus commented Mar 7, 2024

Maybe my original post wasn't clear. I am not trying to create a PLS for an IP that is not hosted in AKS. I am trying to set up two different private AKS clusters in the same Azure region and load-balance across them. The two clusters are created with the available annotations; the problem is that there does not seem to be a way to then load-balance the two AKS clusters. For example, I could have an API Management instance that routes requests to a private load balancer, which would then load-balance the two AKS clusters. That amounts to having a private load balancer in front of both ILBs, which in turn sit in front of the workers of both AKS clusters.

This is not easy to do because the private endpoint to the private load balancer needs to be in the same VNet as the private endpoints to the ILBs. Hence I asked whether it was possible to place the private endpoint of the ILB in a different VNet than that of the AKS cluster; that is what currently does not seem to be supported. Another option would be to deploy both AKS clusters into the same VNet, in which case the problem would be trivially solved, but I don't like that solution because the networks of the two AKS clusters would no longer be well separated.

However, my problem can also be solved by spreading my AKS nodes across multiple availability zones, which is far easier. I am not sure my original problem is really a relevant use case given that availability zones exist; what I want to achieve is higher availability and higher reliability.
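For completeness, once the node pools span multiple availability zones, spreading the ingress controller replicas across zones is plain Kubernetes. A minimal sketch via the chart values (assuming the ingress-nginx chart exposes controller.topologySpreadConstraints and labels its pods with app.kubernetes.io/name: ingress-nginx):

controller:
  replicaCount: 2
  topologySpreadConstraints:
    - maxSkew: 1
      # spread controller replicas across availability zones
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: ScheduleAnyway
      labelSelector:
        matchLabels:
          app.kubernetes.io/name: ingress-nginx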

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label Jun 5, 2024

phealy commented Jun 5, 2024

@zadigus you are correct, there's no way to put the private endpoints behind an ILB at this time. If you specifically want to put it behind APIM, though, they recently GA'd support for load-balanced backend pools, which would allow you to create private endpoints to your AKS clusters and simply put multiple PE IPs into a backend pool inside APIM itself.
