[bitnami/kafka] Kafka helm chart - unable to access externally using VIP #30661

Open

duyhustvn opened this issue Nov 28, 2024 · 1 comment

Labels: kafka · tech-issues (The user has a technical issue about an application) · triage (Triage is needed)

duyhustvn commented Nov 28, 2024

Name and Version

bitnami/kafka 28.3.0

What architecture are you using?

amd64

What steps will reproduce the bug?

I deployed the Kafka Helm chart https://artifacthub.io/packages/helm/bitnami/kafka/28.3.0 into my Kubernetes cluster with 3 nodes.
I also set up a VIP (10.255.251.9) using keepalived.
To allow external systems to connect to the Kafka cluster, I set loadBalancerIPs to the VIP:

externalAccess:
  enabled: true
  autoDiscovery:
    enabled: false
  controller:
    service:
      type: LoadBalancer
      ports:
        external: 9095
      loadBalancerIPs:
        - 10.255.251.9
        - 10.255.251.9
        - 10.255.251.9
  broker:
    service:
      type: LoadBalancer
      ports:
        external: 9095
      loadBalancerIPs:
        - 10.255.251.9
        - 10.255.251.9
        - 10.255.251.9
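
With external access enabled, the chart creates one LoadBalancer service per pod. The services and the IPs actually assigned to them can be inspected with something like the following sketch (the release name kafka and namespace default are taken from the labels shown later in this report):

# List the chart's services and their external IPs; per-pod service names
# such as kafka-controller-0-external are an assumption based on the chart's
# usual naming, not taken from the issue.
kubectl get svc -n default -l app.kubernetes.io/instance=kafka -o wide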

List of Kafka pods

Name                 Ready   Status
kafka-controller-0   1/1     Running  
kafka-controller-1   1/1     Running  
kafka-controller-2   1/1     Running  

List of Kafka services

(screenshot of the service list; attached as an image in the original issue)

I logged into each Kafka pod to check the generated server.properties:

listeners=CLIENT://:9092,INTERNAL://:9094,EXTERNAL://:9095,CONTROLLER://:9093
advertised.listeners=CLIENT://kafka-controller-2.kafka-controller-headless.default.svc.cluster.local:9092,INTERNAL://kafka-controller-2.kafka-controller-headless.default.svc.cluster.local:9094,EXTERNAL://10.255.251.9:9095
listener.security.protocol.map=CLIENT:SASL_PLAINTEXT,INTERNAL:SASL_PLAINTEXT,CONTROLLER:SASL_PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
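
Note that every broker advertises the same EXTERNAL endpoint (10.255.251.9:9095), so any metadata a client fetches will point all partition leaders at the VIP. One way to see what the cluster advertises from outside is a metadata query through the VIP, for example with kcat; this is a hypothetical sketch assuming kcat is installed on the external host and that the credentials match the kafka-credentials secret:

# Fetch cluster metadata through the VIP over SASL/PLAIN.
kcat -b 10.255.251.9:9095 -L \
  -X security.protocol=SASL_PLAINTEXT \
  -X sasl.mechanism=PLAIN \
  -X sasl.username=user \
  -X 'sasl.password=<password>'   # placeholder, not a real credential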

Network Policy

Name:         kafka
Namespace:    default
Created on:   2024-11-28 07:47:48 +0000 UTC
Labels:       app.kubernetes.io/instance=kafka
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=kafka
              app.kubernetes.io/version=3.7.0
              helm.sh/chart=kafka-28.3.0
              release=kafka
              service=kafka
Annotations:  meta.helm.sh/release-name: kafka
              meta.helm.sh/release-namespace: default
Spec:
  PodSelector:     app.kubernetes.io/instance=kafka,app.kubernetes.io/name=kafka
  Allowing ingress traffic:
    To Port: 9092/TCP
    To Port: 9094/TCP
    To Port: 9093/TCP
    To Port: 9095/TCP
    From: <any> (traffic not restricted by source)
  Allowing egress traffic:
    To Port: <any> (traffic allowed to all ports)
    To: <any> (traffic not restricted by destination)
  Policy Types: Ingress, Egress
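
The policy above allows ingress to all four listener ports from any source. As a sanity check that the NetworkPolicy is not the blocker, in-cluster reachability could be probed with something like this (hypothetical throwaway pod using the third-party nicolaka/netshoot debug image):

# Probe the external listener port from inside the cluster.
kubectl run netcheck -n default --rm -it --restart=Never \
  --image=nicolaka/netshoot -- \
  nc -zv -w 2 kafka-controller-0.kafka-controller-headless.default.svc.cluster.local 9095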

Here is my full values file

image:
  registry: docker.io
  repository: bitnami/kafka
  tag: 3.7.0-debian-12-r6
  debug: true
containerSecurityContext:
  allowPrivilegeEscalation: false
commonLabels:
  service: kafka
  release: kafka
listeners:
  client:
    containerPort: 9092
    protocol: SASL_PLAINTEXT
    name: CLIENT
    sslClientAuth: none
  controller:
    name: CONTROLLER
    containerPort: 9093
    protocol: SASL_PLAINTEXT
    sslClientAuth: none
  interbroker:
    containerPort: 9094
    protocol: SASL_PLAINTEXT
    name: INTERNAL
    sslClientAuth: none
  external:
    containerPort: 9095
    protocol: SASL_PLAINTEXT
    name: EXTERNAL
    sslClientAuth: none
sasl:
  enabledMechanisms: PLAIN,SCRAM-SHA-256,SCRAM-SHA-512
  interBrokerMechanism: PLAIN
  controllerMechanism: PLAIN
  interbroker:
    user: inter_broker_user
  controller:
    user: controller_user
  client:
    users:
      - user
  existingSecret: kafka-credentials
kraft:
  enabled: true
allowPlaintextListener: false
controller:
  replicaCount: 3

externalAccess:
  enabled: true
  autoDiscovery:
    enabled: false
  controller:
    service:
      type: LoadBalancer
      ports:
        external: 9095
      loadBalancerIPs:
        - 10.255.251.9
        - 10.255.251.9
        - 10.255.251.9
  broker:
    service:
      type: LoadBalancer
      ports:
        external: 9095
      loadBalancerIPs:
        - 10.255.251.9
        - 10.255.251.9
        - 10.255.251.9
networkPolicy:
  enabled: true
  allowExternal: true
  allowExternalEgress: true
serviceAccount:
  create: true
rbac:
  create: true
readinessProbe:
  enabled: true
livenessProbe:
  enabled: true
persistence:
  enabled: true
  size: 10Gi
  annotations:
    helm.sh/resource-policy: keep
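
For completeness, a values file like this would typically be applied with something along these lines (the release name kafka and namespace default are taken from the labels above; the OCI chart location is the usual Bitnami one, stated here as an assumption):

# Install chart version 28.3.0 with the values file shown above.
helm install kafka oci://registry-1.docker.io/bitnamicharts/kafka \
  --version 28.3.0 --namespace default -f values.yaml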

Are you using any custom parameters or values?

No response

What is the expected behavior?

I want to connect to the Kafka cluster from an external system through the VIP.

What do you see instead?

I can connect to the Kafka cluster using <node-ip>:9095, but I cannot connect using the VIP (10.255.251.9:9095).
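
The observed behavior corresponds to two metadata queries with the same client options as in the earlier kcat sketch (the node IP is left as a placeholder, since it is not given in the issue):

# Works per the report: bootstrap via a node's own address.
kcat -b <node-ip>:9095 -L -X security.protocol=SASL_PLAINTEXT \
  -X sasl.mechanism=PLAIN -X sasl.username=user -X 'sasl.password=<password>'

# Fails per the report: the same query via the keepalived VIP.
kcat -b 10.255.251.9:9095 -L -X security.protocol=SASL_PLAINTEXT \
  -X sasl.mechanism=PLAIN -X sasl.username=user -X 'sasl.password=<password>'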

Additional information

No response

@duyhustvn duyhustvn added the tech-issues The user has a technical issue about an application label Nov 28, 2024
@github-actions github-actions bot added the triage Triage is needed label Nov 28, 2024
@javsalgar javsalgar changed the title Kafka helm chart - unable to access externally using VIP [bitnami/kafka] Kafka helm chart - unable to access externally using VIP Nov 29, 2024
javsalgar (Contributor) commented

Hi!

There's something that is not clear to me. In principle, externalAccess is meant to give each Kafka pod its own separate IP address. However, you are using the same IP for all the pods, which is likely to cause a collision inside Kubernetes. Could you try setting externalAccess.autoDiscovery.enabled=true so you don't have to set the loadBalancerIPs array? A sketch of that change follows below.
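
A hypothetical way to apply that suggestion, reusing the values file from the report with the repeated loadBalancerIPs entries removed (auto-discovery relies on the RBAC and service account that the values already enable):

# Turn on auto-discovery so each per-pod service's IP is picked up at runtime.
helm upgrade kafka oci://registry-1.docker.io/bitnamicharts/kafka \
  --version 28.3.0 --namespace default -f values.yaml \
  --set externalAccess.autoDiscovery.enabled=true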
