
Invalid Credentials with NodePort #30622

Open · rshap91 opened this issue Nov 25, 2024 · 3 comments
Labels: kafka · tech-issues (The user has a technical issue about an application) · triage (Triage is needed)

rshap91 commented Nov 25, 2024

Name and Version

bitnami/kafka 31.0.0

What architecture are you using?

arm64

What steps will reproduce the bug?

  1. Running locally on an Apple MacBook Pro (M1) with Kubernetes on Docker Desktop.
$ kubectl version
Client Version: v1.29.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.2

$ docker version
Client:
 Version:           27.0.3
 API version:       1.46
 Go version:        go1.21.11
 Git commit:        7d4bcd8
 Built:             Fri Jun 28 23:59:41 2024
 OS/Arch:           darwin/arm64
 Context:           desktop-linux

Server: Docker Desktop 4.32.0 (157355)
 Engine:
  Version:          27.0.3
  API version:      1.46 (minimum version 1.24)
  Go version:       go1.21.11
  Git commit:       662f78c
  Built:            Sat Jun 29 00:02:44 2024
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          1.7.18
  GitCommit:        ae71819c4f5e67bb4d5ae76a6b735f29cc25774e
 runc:
  Version:          1.1.13
  GitCommit:        v1.1.13-0-g58aa920
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
  2. Created file local.yaml:
externalAccess:
  enabled: true
  controller:
    service:
      type: NodePort
      useHostIPs: true
      nodePorts:
        - 30001
        - 30002
        - 30003
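
(Aside, not in the original steps: if I understand useHostIPs correctly, the chart advertises each pod's status.hostIP on these ports. Once the pods are up after step 3, that can be checked with:)

>> kubectl get pods -l app.kubernetes.io/instance=kafka -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.hostIP}{"\n"}{end}'
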
  3. To reproduce:
>> helm repo add bitnami https://charts.bitnami.com/bitnami
>> helm pull --untar bitnami/kafka
>> cd kafka

Move the local.yaml file to ./values/local.yaml, then run:

>> helm install -f values/local.yaml kafka .

The following message was printed to stdout

** Please be patient while the chart is being deployed **

Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:

    kafka.default.svc.cluster.local

Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:

    kafka-controller-0.kafka-controller-headless.default.svc.cluster.local:9092
    kafka-controller-1.kafka-controller-headless.default.svc.cluster.local:9092
    kafka-controller-2.kafka-controller-headless.default.svc.cluster.local:9092

The CLIENT listener for Kafka client connections from within your cluster have been configured with the following security settings:
    - SASL authentication

To connect a client to your Kafka, you need to create the 'client.properties' configuration files with the content below:

security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="user1" \
    password="$(kubectl get secret kafka-user-passwords --namespace default -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1)";

To create a pod that you can use as a Kafka client run the following commands:

    kubectl run kafka-client --restart='Never' --image docker.io/bitnami/kafka:3.9.0-debian-12-r1 --namespace default --command -- sleep infinity
    kubectl cp --namespace default /path/to/client.properties kafka-client:/tmp/client.properties
    kubectl exec --tty -i kafka-client --namespace default -- bash

    PRODUCER:
        kafka-console-producer.sh \
            --producer.config /tmp/client.properties \
            --bootstrap-server kafka.default.svc.cluster.local:9092 \
            --topic test

    CONSUMER:
        kafka-console-consumer.sh \
            --consumer.config /tmp/client.properties \
            --bootstrap-server kafka.default.svc.cluster.local:9092 \
            --topic test \
            --from-beginning
...
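
(Assuming the default release name kafka, the per-broker external NodePort services created by externalAccess can be listed at this point with:)

>> kubectl get svc --namespace default -l app.kubernetes.io/instance=kafka
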
  4. Following the above steps, I run:
>> cat << EOF > client.properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="user1" \
    password="$(kubectl get secret kafka-user-passwords --namespace default -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1)";
EOF
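
(As a sanity check, not in the chart NOTES: print the decoded password on its own and confirm it matches what was rendered into the file:)

>> kubectl get secret kafka-user-passwords --namespace default -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1
>> grep password client.properties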

>> kubectl run kafka-client --restart='Never' --image docker.io/bitnami/kafka:3.9.0-debian-12-r1 --namespace default --command -- sleep infinity

>> kubectl cp --namespace default client.properties kafka-client:/tmp/client.properties

>> kubectl exec --tty -i kafka-client --namespace default -- bash

>> kafka-console-consumer.sh \
            --consumer.config /tmp/client.properties \
            --bootstrap-server kafka.default.svc.cluster.local:9092 \
            --topic test \
            --from-beginning

This returns an authentication error.
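
(For anyone debugging the same failure, the broker side can be inspected as well; the pod name assumes the default release name:)

>> kubectl logs kafka-controller-0 --namespace default | grep -i 'authentication\|sasl'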

Are you using any custom parameters or values?

externalAccess:
  enabled: true
  controller:
    service:
      type: NodePort
      useHostIPs: true
      nodePorts:
        - 30001
        - 30002
        - 30003

What is the expected behavior?

I expect to successfully connect to the Kafka cluster as a consumer.

What do you see instead?

ERROR [Consumer clientId=console-consumer, groupId=console-consumer-91963] Connection to node -1 (kafka.default.svc.cluster.local/10.98.182.229:9092) failed authentication due to: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-256 (org.apache.kafka.clients.NetworkClient)
[2024-11-25 18:47:13,287] WARN [Consumer clientId=console-consumer, groupId=console-consumer-91963] Bootstrap broker kafka.default.svc.cluster.local:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2024-11-25 18:47:13,288] ERROR Error processing message, terminating consumer process:  (org.apache.kafka.tools.consumer.ConsoleConsumer)
org.apache.kafka.common.errors.SaslAuthenticationException: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-256
Processed a total of 0 messages

Additional information

No response

rshap91 (Author) commented Nov 25, 2024

It does seem to work when I use the SASL PLAIN mechanism. It appears the SCRAM password is not being set for the user.

When I update the SCRAM config entry for the user, it works:

  1. Change client.properties to use SASL PLAIN:
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username="user1" \
    password="ABC123";
  2. Change the SCRAM password for user1:

/usr/local/kafka/bin/kafka-configs.sh --bootstrap-server 127.0.0.1:30001 --command-config /usr/local/kafka/config/client.properties --alter --entity-type users --entity-name user1 --add-config 'SCRAM-SHA-256=[password=ABC123]'

  3. Change client.properties back to SCRAM, using the password set in step 2:
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="user1" \
    password="ABC123";

Now authentication succeeds.
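
(To confirm the SCRAM credential now exists for the user, the same tool can describe it; a sketch mirroring the call above, with paths and port from my setup:)

/usr/local/kafka/bin/kafka-configs.sh --bootstrap-server 127.0.0.1:30001 --command-config /usr/local/kafka/config/client.properties --describe --entity-type users --entity-name user1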

carrodher (Member) commented

Thank you for bringing this issue to our attention. We appreciate your involvement! If you're interested in contributing a solution, we welcome you to create a pull request. The Bitnami team is excited to review your submission and offer feedback. You can find the contributing guidelines here.

Your contribution will greatly benefit the community. Feel free to reach out if you have any questions or need assistance.

rshap91 (Author) commented Nov 26, 2024

Thanks @carrodher, my issue appears to be with the persistent volume claim for the data dir. When I run helm install, the container entrypoint initializes the KRaft storage and sets the credentials in the data directory.

When I run helm uninstall, the PVC does not get deleted (I believe by design). When I re-install the chart, new credentials are populated in the Kubernetes secret, but kafka-storage.sh does not re-format the data directory since it is already formatted. So client.properties ends up with the new password, and when Kafka compares it to the SCRAM password stored on disk they do not match (see the sketch below).
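
A couple of workarounds that seem consistent with this (sketches only; the sasl.client.passwords parameter name is taken from the chart README and may differ across chart versions):

# Option A: delete the stale PVCs before re-installing so kafka-storage.sh
# re-formats the data dir with the new credentials (destroys existing data)
kubectl delete pvc --namespace default -l app.kubernetes.io/instance=kafka

# Option B: capture the password before uninstalling, then pin it on re-install
# so the regenerated secret matches what is already stored on disk
OLD_PASS="$(kubectl get secret kafka-user-passwords --namespace default -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1)"
helm uninstall kafka
helm install -f values/local.yaml kafka . --set sasl.client.passwords="$OLD_PASS"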

I'm guessing this is the desired behavior? If so, you can close this issue. Thanks!
